2  Writing Idiomatic Python in 2026

Key Idea: A look at how Python is typically written today: idioms, libraries, and tools.

What is pythonic?

All programming languages have the concept of idiomatic code, code that uses the language as intended.

While not necessary to get to a working solution, idiomatic code is preferable for several reasons:

  • Easier for other users of the language to read & understand.
  • When working on a team, leads to cohesive code that reads as if it was written by one team, not dozens of individuals.
  • Idiomatic code paths are often optimized by language & library authors. Idiomatic code is typically faster as a result.

In Python we call idiomatic code “pythonic”.

Of course, just like spoken language, what is idiomatic evolves over time.

An early easter egg hidden in Python hints at the core values of what might be considered idiomatic Python:

>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

We’ll explore some idioms in Python to see how the core principles have been applied as the language has evolved.

string interpolation

One of the lines in The Zen of Python that often raises eyebrows is: There should be one-- and preferably only one --obvious way to do it.

Over the past few decades, there have been improvements that have introduced new ways to do things, but old code & examples persist.

It is helpful, though, to understand that new features added to Python rarely introduce a new way of doing things that is not almost entirely preferable to the old way.

We can look at a few of these to reinforce our understanding of what is pythonic in 2026, while taking a deeper look into why the new way is better:

f-strings vs. old and older string formatting

One of the worst “offenders” of the “one obvious way to do it” rule is string interpolation.

Reminder: str immutability & efficiency

Strings are immutable: once a string is created in memory it cannot be modified. When we do something like:

name = "Toby"
job = "baker"
age = 20

sentence = name + " is a " + str(age) + " year old " + job

We create many intermediate strings that are thrown away, wasting both time and memory:

  • tmp1 = "Toby" + " is a "
  • tmp2 = tmp1 + "20"
  • tmp3 = tmp2 + " year old "
  • sentence = tmp3 + "baker"

tmp1, tmp2, and tmp3 are discarded quickly after being allocated; the more substrings we have, the worse it becomes.

This is also why we prefer str.join(iterable) to building a string one piece at a time in a loop. join creates the final string in a single allocation, forgoing the intermediate strings.
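To make the difference concrete, here is a small sketch comparing the two approaches (the example strings are illustrative):

```python
words = ["Toby", "is", "a", "20", "year", "old", "baker"]

# building up with += creates a new throwaway string on every iteration
sentence = ""
for word in words:
    sentence += word + " "
sentence = sentence.rstrip()

# str.join allocates the final string once
joined = " ".join(words)

assert sentence == joined  # "Toby is a 20 year old baker"
```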

%s style interpolation

String interpolation is a common task; it is built into most languages.

Many languages, especially those written in or derived from C, have used C-style interpolation syntax.

// C Code
#include <stdio.h>

int main() {
    char buffer[100];
    char name[] = "Alice";
    int age = 28;
    float height = 5.7;
    
    // using sprintf to format a string
    // requires different %escape codes to match types
    sprintf(buffer, "Name: %s, Age: %d, Height: %.1f feet", name, age, height);
    printf("%s\n", buffer);

    return 0;
}

Python’s original string interpolation syntax was based on this:

name = "Alice"
age = 28
height = 5.7

# using % formatting (old style)
output = "Name: %s, Age: %d, Height: %.1f feet" % (name, age, height)
print(output)
Name: Alice, Age: 28, Height: 5.7 feet

This old syntax proved to be:

  • error-prone (arguments must match the placeholders in order and count)
  • duplicative (printing the same variable twice means passing it twice)
  • harder to read than more modern alternatives

{}-style interpolation

This was superseded by str.format:

name = "Alice"
age = 28
height = 5.7

# `.format`
output = "Name: {}, Age: {}, Height: {:.1f} feet".format(name, age, height)
print(output)
Name: Alice, Age: 28, Height: 5.7 feet

This is somewhat better and more flexible than the % method, but it has mostly been superseded by f-strings (introduced in Python 3.6).

f-strings

name = "Alice"
age = 28
height = 5.7

output = f"Name: {name}, Age: {age}, Height: {height:.1f} feet"
print(output)
Name: Alice, Age: 28, Height: 5.7 feet
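f-strings also allow arbitrary expressions inside the braces and, since Python 3.8, a = specifier that echoes the expression itself. A brief sketch reusing the variables above:

```python
name = "Alice"
age = 28
height = 5.7

# any expression can appear inside the braces
print(f"{name.upper()} turns {age + 1} next year")

# format specs work the same as with str.format
print(f"height: {height:>8.2f}")

# the `=` specifier (Python 3.8+) echoes the expression, handy for debugging
print(f"{age=}")
```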

Safer code with context managers

Though it was introduced in Python 2.5, the with statement is a piece of Python syntax that is often poorly understood or neglected by new developers.

# fragile: if parse_line raises, neither file is ever closed
infile = open("input.txt")
outfile = open("output.txt", "w")

for line in infile.readlines():
    parsed = parse_line(line)      # what if parse_line raises an Exception?
    outfile.write(parsed)

# better: even if an exception is raised, infile and outfile will be closed
with open("input.txt") as infile, open("output.txt", "w") as outfile:
    for line in infile:
        parsed = parse_line(line)
        outfile.write(parsed)

Perhaps it would be more in line with explicit is better than implicit to always write the error-handling explicitly, but practicality beats purity.

__enter__, __exit__

Like much of Python's syntax, with is implemented in terms of special methods. The block above is roughly equivalent to:

import sys

infile = open("input.txt")
outfile = open("output.txt", "w")
# __enter__ called on entry to the with block
infile.__enter__()
outfile.__enter__()
try:
    for line in infile:
        parsed = parse_line(line)
        outfile.write(parsed)
finally:
    # __exit__ is called whether there is an exception or not:
    # if an exception is active, these values describe it;
    # if not, sys.exc_info() returns (None, None, None)
    exc_type, exc_value, exc_traceback = sys.exc_info()
    outfile.__exit__(exc_type, exc_value, exc_traceback)
    infile.__exit__(exc_type, exc_value, exc_traceback)
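Any object can opt into with by implementing this pair of methods. Here is a minimal sketch (the Timer class and its attribute names are hypothetical, not from any library):

```python
import time

# a minimal context manager that times its block, following the
# same __enter__/__exit__ protocol used by open()
class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self  # this value is bound to the `as` target

    def __exit__(self, exc_type, exc_value, exc_traceback):
        self.elapsed = time.perf_counter() - self.start
        return False  # do not suppress any exception

with Timer() as t:
    total = sum(range(100_000))

print(f"took {t.elapsed:.6f}s")
```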

enum

Errors should never pass silently. / Explicit is better than implicit

It is common to have functions that use strings to represent a limited range of options:

def move_piece(direction: str):
    """ direction should be a cardinal direction N,S,E,W """
    if direction == "N":
        ...
    elif direction == "S":
        ...
    elif direction == "E":
        ...
    elif direction == "W":
        ...

A common source of bugs is passing an incorrect value:

# these can all sneak through code review, they look reasonable!
move_piece("NORTH")
move_piece("s")
move_piece("SW")

Explicitly validating the string and raising an exception can help, but the error still only surfaces at runtime, when the offending code is executed.

from enum import StrEnum

# A StrEnum (or IntEnum for integer values)
# represents a list of possible values for a type
class Direction(StrEnum):
    NORTH = "N"
    SOUTH = "S"
    EAST = "E"
    WEST = "W"

# Now we can hint (to the developer & IDE) the exact options.
# A type checker will also catch errors.
def move_piece(direction: Direction):
    match direction:
        case Direction.NORTH:
            ...
        case Direction.SOUTH:
            ...
        case Direction.EAST:
            ...
        case Direction.WEST:
            ...

keyword-only arguments

It is common to have functions that take a relatively small number (or even zero) of required arguments, and a large number of optional arguments.

Most Python libraries embrace explicit is better than implicit here, by way of keyword-only arguments:

httpx.request(
    method: 'str',
    url: 'URL | str',
    *,
    params: 'QueryParamTypes | None' = None,
    content: 'RequestContent | None' = None,
    data: 'RequestData | None' = None,
    files: 'RequestFiles | None' = None,
    json: 'typing.Any | None' = None,
    headers: 'HeaderTypes | None' = None,
    cookies: 'CookieTypes | None' = None,
    auth: 'AuthTypes | None' = None,
    proxy: 'ProxyTypes | None' = None,
    timeout: 'TimeoutTypes' = Timeout(timeout=5.0),
    follow_redirects: 'bool' = False,
    verify: 'ssl.SSLContext | str | bool' = True,
    trust_env: 'bool' = True
) -> 'Response'

The * marks all arguments after it as keyword-only, forcing the caller to specify the name and not rely on the position:

# does not work!
httpx.request(
    "GET", "https://example.com",
    {"q": "search", "t": 5}, None, 5, None, True
)

# instead, less-common arguments are passed by name
httpx.request(
    "GET", "https://example.com",
    params={"q": "search", "t": 5},
    timeout=5,
    verify=True
)
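We can use the same * in our own function signatures. Here is a sketch with a hypothetical connect() function (its name and parameters are invented for illustration):

```python
# everything after the bare * must be passed by keyword
def connect(host, port, *, timeout=5.0, retries=3, verbose=False):
    return f"{host}:{port} timeout={timeout} retries={retries} verbose={verbose}"

# positional for the required args, names for the options
print(connect("example.com", 443, timeout=1.0))

# passing an option positionally is a TypeError
try:
    connect("example.com", 443, 1.0)
except TypeError as e:
    print(e)
```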

Things We Avoid

Let’s take a look at some antipatterns, things we avoid in Python, and see if we can understand what makes them unpythonic.

import *

When we see a symbol in Python, we can almost always guarantee that it was declared earlier in the file: as a variable, a function/class definition, or an import. This guarantee only holds if we avoid import *.

from math import *
from statistics import *
from travel import *

# where did these functions come from?
mean()
distance()

Principles violated:

  • Readability counts
  • Explicit is better than implicit.
  • In the face of ambiguity, refuse the temptation to guess.

except:

Principles violated:

  • Explicit is better than implicit.
  • Errors should never pass silently, unless explicitly silenced.

# antipattern: the bare except catches *everything* --
# including the NameError caused by the `smu` typo below
try:
    print(smu(1, 2, 3))
except:
    print("invalid input to sum")
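A more pythonic version catches only the exception it expects, so a typo such as smu would surface immediately as a NameError. A sketch with a hypothetical safe_total() helper:

```python
def safe_total(values):
    # catch only the failure we expect and can handle
    try:
        return sum(values)
    except TypeError:
        return None

assert safe_total([1, 2, 3]) == 6
assert safe_total(["a", "b"]) is None  # strings can't be summed: TypeError
# a typo such as `smu(values)` inside the try would now raise NameError loudly
```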

eval -> importlib/getattr

It is possible to write dynamic code in Python using eval, but this is considered bad practice & banned in most codebases.

M_TO_FT = 3.28084

class Person:
    def __init__(self):
        self.height_m = 1.8

    def height_in_meters(self):
        return self.height_m

    def height_in_feet(self):
        return self.height_m * M_TO_FT


p = Person()
user_setting = "feet"
# DO NOT DO THIS
print(f"height in {user_setting}: ", eval(f"p.height_in_{user_setting}()"))
height in feet:  5.905512

There are plenty of better options for dynamic code:

p = Person()
user_setting = "feet"
height = getattr(p, f"height_in_{user_setting}")()
print(f"height in {user_setting}: ", height)
height in feet:  5.905512

Why is this better? eval will execute any expression it is given, so untrusted input can run arbitrary code. getattr performs only an attribute lookup: nothing runs beyond the method we intended to call, and a bad name raises a clear AttributeError instead of executing something unexpected.
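The section heading also mentions importlib: when the dynamic piece is a module name rather than an attribute, importlib.import_module is the analogous tool. A sketch, using json as a stand-in for a dynamically chosen module name:

```python
import importlib

# dynamically import a module by name without eval --
# import_module only imports; it cannot run arbitrary expressions
module_name = "json"
mod = importlib.import_module(module_name)
print(mod.dumps({"ok": True}))  # {"ok": true}
```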

strict typing vs. duck typing

Another antipattern in Python is to strictly check types:

def find_item(needle, haystack):
    if type(haystack) != list:
        raise TypeError("haystack is not a list")
    for item in haystack:
        if needle == item:
            return True
    return False

# instead... isinstance supports subclasses
def find_item(needle, haystack):
    if not isinstance(haystack, list):
        raise TypeError("haystack is not a list")
    for item in haystack:
        if needle == item:
            return True
    return False

# best... just allow error to arise implicitly, allow all iterables
def find_item(needle, haystack):
    for item in haystack:
        if needle == item:
            return True
    return False

This is an exception to explicit is better than implicit, after all practicality beats purity.
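The payoff of the duck-typed version (repeated here so the snippet stands alone) is that it accepts anything iterable, not just lists:

```python
def find_item(needle, haystack):
    for item in haystack:
        if needle == item:
            return True
    return False

# duck typing: any iterable works -- lists, tuples, ranges, generators
assert find_item(3, [1, 2, 3])
assert find_item("b", ("a", "b"))
assert find_item(5, range(10))
assert find_item(9, (x**2 for x in range(3))) is False  # yields 0, 1, 4
```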

Using the standard library well

os.path vs pathlib

Another case of explicit is better than implicit. Strings kind of work for file paths, but by explicitly using a type built for them, we can be type safe and take advantage of lots of powerful features of pathlib.Path.

If you’ve dealt with code that manipulates file paths at all, you know it is a surprisingly complex problem.

While it is possible to use raw string manipulation, you are likely to introduce errors into your code by not handling edge cases that the built in path manipulation functions have already accounted for.

(This is particularly useful when trying to ensure your code works well on Windows, which uses drive letters and backslashes in paths: C:\windows\paths vs /everyone/elses/posix/paths.)

Early versions of Python introduced os.path, a module with lots of functions like os.path.join(parent: str, child: str) -> str, and os.path.splitext(path: str) -> tuple[str, str] for splitting file names from their extensions.

While an improvement over str.split, treating paths as str is still error-prone and comes with a set of problems that were resolved with the introduction of pathlib.Path, a class meant to represent a path.

With this class, Paths are types of their own, allowing us to write code like:

import pathlib
import collections

# get a Path object for the current working directory
# (avoid naming it `dir`, which shadows a builtin)
cwd = pathlib.Path.cwd()
# Counter: a dict subclass whose missing values default to 0
ftypes = collections.Counter()

for file in cwd.iterdir():
    ftypes[file.suffix] += 1

for ft, count in ftypes.most_common(5):
    print(f"{ft}: {count}")

Using pathlib.Path helps us ensure correct path handling, and can be thought of similarly to using datetime instead of storing dates as strings. While you might need to convert to/from a string at the boundary of your application (e.g. as a URL parameter, or when writing to a file), within the application the more specific type lets you perform common operations and ensures validity.
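A few of those features, sketched with a made-up path (no files need to exist; these are pure path operations):

```python
from pathlib import Path

# the / operator joins path segments portably, no manual separators
p = Path("data") / "reports" / "2026-01.csv"

print(p.name)    # 2026-01.csv
print(p.suffix)  # .csv
print(p.stem)    # 2026-01
print(p.with_suffix(".json").name)  # 2026-01.json
```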

Lazy Evaluation

Principle: Now is better than never. Although never is often better than right now.

While we’ve seen that plenty of effort has gone into speeding Python up, there is no code faster than the code that we do not execute.

To this end, it is common to implement lazy evaluation: avoiding the execution of code until the last minute, until we know it needs to be run.

Generators are the most common implementation of this behavior:

def imm_squares(n):
    return [x**2 for x in range(n)]

def lazy_squares(n):
    for x in range(n):
        yield x**2

# does not eagerly compute the way that imm_squares does:
# values are produced one at a time, only as the loop asks for them
# (drastically reduced memory usage as well!)
sg = lazy_squares(1_000_000)  # a million squares, none computed yet
for s in sg:
    if s >= 10_000:
        break  # stop early: only the first hundred or so are ever computed
    print(s)
0
1
4
9
16
25
...
9409
9604
9801

@property

We can also use @property to lazily evaluate (and/or cache) expensive computations.

import math

class Sphere:
    def __init__(self, x, y, z, r):
        self.pos = (x, y, z)
        self.radius = r
        # computation done eagerly, could be expensive
        self.imm_volume = 4/3 * math.pi * (self.radius**3)
        # used for 3rd example
        self._cached_vol = None

    @property
    def lazy_volume(self):
        # potentially better: only computes when used, but repeatedly
        return 4/3 * math.pi * (self.radius**3)

    @property
    def lazy_volume_cached(self):
        # best: computes when first used, then not repeatedly
        if self._cached_vol is None:
            self._cached_vol = 4/3 * math.pi * (self.radius**3)
        return self._cached_vol
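Since Python 3.8 the standard library also ships functools.cached_property, which rolls the compute-once-and-cache pattern into a single decorator. A sketch (this simplified Sphere takes only a radius):

```python
import math
from functools import cached_property

class Sphere:
    def __init__(self, r):
        self.radius = r

    # computed on first access, then stored on the instance --
    # no manual _cached_vol bookkeeping needed
    @cached_property
    def volume(self):
        return 4/3 * math.pi * self.radius**3

s = Sphere(1)
print(s.volume)
```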

Beyond the Standard Library

Python grew in popularity in part due to its standard library, but today almost any Python program of significance relies upon the enormous ecosystem of third-party libraries.

These packages are published to a central repository called PyPI, the Python Package Index.

https://pypi.org

Tools like pip and conda have been used in the past to install packages from this index.

Most modern Python packages, and this class, will use uv. See: