A Closer Look At How Python f-strings Work
PEP 498 introduced a new string formatting mechanism known as Literal String Interpolation, more commonly called f-strings (because of the leading f character preceding the string literal). F-strings provide a concise and convenient way to embed Python expressions inside string literals for formatting:
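Here is a minimal sketch of the kind of example the article shows at this point (the variable names are just placeholders):

```python
name = "Fred"
age = 42

# The expressions inside the braces are evaluated and formatted into the string.
print(f"My name is {name} and I am {age} years old.")
# My name is Fred and I am 42 years old.
```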
We can execute functions inside f-strings:
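A small illustrative sketch (the function and variable names are my own, not the article's original snippet):

```python
def to_lowercase(s):
    return s.lower()

name = "FRED"

# Any expression, including a function call, can appear inside the braces.
print(f"{to_lowercase(name)} is funny.")
# fred is funny.
```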
F-strings are fast! Much faster than %-formatting and str.format(), the two most commonly used string formatting mechanisms:
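A quick benchmark along these lines (a sketch, not the article's original snippet; absolute numbers vary by machine and Python version, but f-strings consistently come out ahead):

```python
import timeit

setup = "a, b = 7, 3"

# Each call runs the statement one million times and reports total seconds.
print("f-string  :", timeit.timeit("f'{a} + {b} is {a + b}'", setup=setup))
print("%-format  :", timeit.timeit("'%s + %s is %s' % (a, b, a + b)", setup=setup))
print("str.format:", timeit.timeit("'{} + {} is {}'.format(a, b, a + b)", setup=setup))
```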
Why are f-strings so fast, and how do they actually work? PEP 498 provides a clue:
F-strings provide a way to embed expressions inside string literals, using a minimal syntax. It should be noted that an f-string is really an expression evaluated at run time, not a constant value. In Python source code, an f-string is a literal string, prefixed with ‘f’, which contains expressions inside braces. The expressions are replaced with their values.
The key point here is that an f-string is really an expression evaluated at run time, not a constant value. What this essentially means is that expressions inside f-strings are evaluated just like any other Python expressions, within the scope in which they appear. The CPython compiler does the heavy lifting during the parsing stage to separate an f-string into string literals and expressions and to generate the appropriate Abstract Syntax Tree (AST):
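Here is a small sketch of that comparison (equivalent to, but not a reproduction of, the article's original snippet):

```python
import ast

# The bare expression a + b ...
print(ast.dump(ast.parse("a + b", mode="eval")))

# ... and the same expression inside an f-string. The f-string parses into a
# JoinedStr node whose FormattedValue wraps the very same BinOp node.
print(ast.dump(ast.parse("f'{a + b}'", mode="eval")))
```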
We use the ast module to look at the abstract syntax tree associated with a simple expression a + b within and outside of an f-string. We can see that the expression a + b within the f-string f'{a + b}' gets parsed into a plain old binary operation, just as it does outside the f-string.
We can even see at the bytecode level that f-string expressions get evaluated just like any other Python expression:
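A sketch of the comparison (the opcode names below are what CPython 3.6–3.10 produce; newer releases fold the addition into BINARY_OP):

```python
import dis

def add_two(a, b):
    return a + b

def add_two_fstring(a, b):
    return f"{a + b}"

# Both functions load a and b and add them with the same instructions;
# add_two_fstring just has an extra FORMAT_VALUE to stringify the result.
dis.dis(add_two)
dis.dis(add_two_fstring)
```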
The add_two function simply sums the local variables a and b and returns the result. The add_two_fstring function does the same, but the addition happens within an f-string. Aside from the FORMAT_VALUE instruction in the disassembled bytecode of add_two_fstring (that instruction is there because, after all, an f-string needs to stringify the result of the enclosed expression), the bytecode instructions to evaluate a + b within and outside an f-string are the same.
Processing an f-string simply breaks down into evaluating the expression enclosed within the curly braces (just like any other Python expression) and then combining it with the string literal portion of the f-string to produce the value of the final string. There is no additional runtime processing required. This makes f-strings pretty fast and efficient.
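We can see that pipeline directly by disassembling an f-string that mixes literal text with an expression (a sketch; exact opcode names vary across CPython versions):

```python
import dis

# Evaluate the expression, format it (FORMAT_VALUE), and join it with the
# literal pieces (BUILD_STRING); nothing else happens at run time.
dis.dis(compile('f"sum = {a + b}!"', "<fstring>", "eval"))
```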
Why is str.format() much slower than f-strings? The answer becomes clear once we look at the disassembled bytecode for a function using str.format():
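A sketch with a hypothetical function name (the article's own snippet isn't reproduced here); on the CPython releases the article describes (around 3.6) the disassembly shows LOAD_ATTR and CALL_FUNCTION, while newer releases use LOAD_METHOD/CALL_METHOD or CALL instead:

```python
import dis

def add_two_format(a, b):
    # Look up the format attribute on the string (LOAD_ATTR), add a and b,
    # then call the bound method (CALL_FUNCTION) to do the formatting.
    return "{}".format(a + b)

dis.dis(add_two_format)
```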
From the disassembled bytecode, two instructions immediately jump out: LOAD_ATTR and CALL_FUNCTION. When we use str.format(), the format method first needs to be looked up on the string object. This is done via the LOAD_ATTR bytecode instruction. Attribute lookup is not really a cheap operation and involves a number of steps (take a look at one of my earlier posts on how attribute lookup works if you are curious). Once the format function is located, the binary add operation (BINARY_ADD) is invoked to sum the variables a and b. Finally, the format function is executed via the CALL_FUNCTION bytecode instruction and the stringified result is returned. Function invocation in Python is not cheap and has considerable overhead. When using str.format(), the extra time spent in LOAD_ATTR and CALL_FUNCTION is what makes str.format() much slower than f-strings.
What about %-string formatting? We saw that this is faster than str.format() but still slower than f-strings. Again, let's look at the disassembled bytecode for a function using %-string formatting for clues:
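Again, the function name is a hypothetical stand-in for the article's snippet; the interesting part is what is not in the disassembly (shown here for CPython 3.6–3.10, where the operator appears as BINARY_MODULO rather than the newer BINARY_OP):

```python
import dis

def add_two_percent(a, b):
    # No attribute lookup and no function call: just add a and b and hand the
    # result to the % operator via the BINARY_MODULO instruction.
    return "%s" % (a + b)

dis.dis(add_two_percent)
```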
Right off the bat, we don't see the LOAD_ATTR and CALL_FUNCTION bytecode instructions, so %-string formatting avoids the overhead of attribute lookup and Python function invocation. This explains why it is faster than str.format(). But why is %-string formatting still slower than f-strings? One place where %-string formatting might be spending extra time is in the BINARY_MODULO bytecode instruction. I haven't done thorough profiling of BINARY_MODULO, but looking at the CPython source code gives a sense of why there might be a tiny bit of overhead involved in invoking it.
In the interpreter's evaluation loop, the BINARY_MODULO operation is overloaded. Each time it is invoked, it needs to check the type of its operands to determine whether it is dealing with string objects. If it is, the modulo operator performs string formatting; otherwise it computes the usual modulo (the remainder of dividing the first operand by the second). Although small, this type check does come with an overhead that f-strings avoid.
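You can see the overloading from Python itself, without reading the interpreter source: the same operator dispatches to string formatting or to arithmetic depending on the type of its left operand.

```python
# The % operator is overloaded, so BINARY_MODULO has to inspect its left
# operand at run time before it knows which operation to perform.
print("sum = %d" % 7)   # left operand is a str  -> string formatting: 'sum = 7'
print(17 % 7)           # left operand is an int -> numeric modulo: 3
```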
Hopefully, this post has helped shed some light on why f-strings stand out from the crowd when it comes to string formatting. F-strings are fast, simple to use, practical and lead to much cleaner code. Use them!