Key Idea: Understanding how Python has evolved (& continues to evolve) as a window into the broader evolution of programming over the last decade.
Python is over 30 years old and has come a long way from its initial release as a “teaching language”. For the last decade or so, it has been one of the most used languages in the world.
Over this time, the language has not remained static. A vibrant open source community continues to iterate on the language and built-in libraries. In the late 2000s/early 2010s the language & community underwent a monumental transition from Python 2 to Python 3. The changes, many intended to ditch historical mistakes & baggage, required lots of projects to spend time upgrading between incompatible versions. The transition met resistance, frustrated some developers, and took longer than anyone initially anticipated.
During this time, much was written about Python’s likely demise. Of course we know now the language survived & came out the other side a stronger language. (This was not a given! Other languages, like Perl and PHP, suffered greatly in similar transitions.)
Today, the language is unlikely to ever undergo a transition of that size again, but is far from stagnant. Recent versions of Python have introduced consequential changes as the language is both shaped by and shapes the programming landscape.
Governance
When you are learning a new programming language, you think of it as a static set of ideas– perhaps becoming aware of some historical context as you see older code in examples.
When you use a language over the course of years and even decades, you see that the language grows and your own code & understanding must grow along with it.
Understanding how the language changes can be helpful–
how often are changes introduced?
will backwards-incompatible changes be introduced? will things be deprecated & removed from the language?
who gets to make decisions about the future of the language? what are their values & philosophy of design?
Understanding these things about your languages & tools of choice is an important step towards true mastery, and helps you make informed decisions about technology choices in an organization.
Python is led by a community steering council in a process that is outlined in a document known as PEP 13.
PEPs
A key instrument of governance for Python is the PEP: Python Enhancement Proposal.
These are documents that describe proposed changes to the Python language. (Some, like PEP 13, are used for special purposes.)
These are the “legislation” of the Python ecosystem: in the same way a legislature proposes a bill, Python developers can propose PEPs, which are then debated & voted upon.
If you want to stay current with Python, reading upcoming PEPs will let you know what is happening in the next version. It has also become common for PEPs to include examples and even tutorials on new features; instead of relying on a third-party source of questionable quality, you can learn the how & why of a feature from the source itself.
Evolution of Python
Python 1.x
Python was still a very small language, originally designed for teaching. Before 2.0 it was missing many of the features we identify as “Pythonic” today.
Python 2.x
The language grew from thousands to millions of users, but was still far from the dominant language it has become in 2026.
Decorators and other now-familiar features were added throughout the history of 2.x.
The final version of Python 2, 2.7, was released in 2010, after Python 3.0. This was a release made to port fixes & ease the transition between Python 2 and 3. This version was supported for a somewhat incredible ten years, with support fully ending in 2020.
Python 3.x
The first version of Python 3 was released in 2008, intended to begin the migration process. At release, it was slower than Python 2 and had significant backwards-incompatible changes, making porting code a time-consuming task.
In time, improvements in the 3.x and 2.x lines, tools like 2to3, and libraries like six all helped the transition progress. Today almost all Python development is on a 3.x version, though legacy code doubtless still exists.
Python 3.0-3.4 (2008-2014)
These releases focused on establishing Python 3 as the successor to Python 2, largely by easing the transition and fixing performance regressions.
Looking over the release notes for recent releases, the most common theme is probably performance improvements. From improvements to internal data structures to interpreter-wide speedups, this has been a major theme of the language.
In the final weeks of this course we’ll look at a few of these improvements, and how to make our code faster when we’ve already pushed Python to its limit.
Python 3.11
This was a major performance release: typical code runs 10-60% faster than on 3.10, thanks to the specializing adaptive interpreter (PEP 659).
Zero-cost exceptions: if no exception is raised, no price is paid for try/except blocks.
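A hypothetical micro-benchmark sketches what “zero-cost” means here (the function names are my own, not from the release notes):

```python
import timeit

# two versions of the same loop: one wrapped in try/except, one not
def with_try(n):
    total = 0
    for i in range(n):
        try:
            total += i
        except ValueError:
            pass
    return total

def without_try(n):
    total = 0
    for i in range(n):
        total += i
    return total

# on 3.11+, these run in roughly the same time when no exception
# is raised: the try/except setup itself costs nothing at runtime
print(timeit.timeit(lambda: with_try(10_000), number=100))
print(timeit.timeit(lambda: without_try(10_000), number=100))
```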
Python’s growth in areas like web development means that performance isn’t purely a measure of how many operations per second can be performed.
Python applications are often I/O-bound, spending more time waiting for input and output than actual processor cycles.
While part of a program is waiting on disk reads or network traffic, other portions of the program not dependent upon that result can be executing.
This is asynchronous programming, introduced to Python in a series of experimental changes in the 3.5-3.8 series, and now formalized in the async/await keywords and asyncio module in the standard library.
```python
import asyncio

async def load_source_a():
    print("loading from source A...")
    await asyncio.sleep(2)  # simulate doing work
    print("source A done!")
    return "A"

async def load_source_b():
    print("loading from source B...")
    await asyncio.sleep(1)  # simulate waiting for response
    print("source B done!")
    return "B"

async def merge_sources():
    a, b = await asyncio.gather(load_source_a(), load_source_b())
    print("got", a, b, "merging...")
    return a + b

# this finishes in ~2 seconds,
# not the 3 that it would have taken to do this work sequentially
asyncio.run(merge_sources())
```
Type Hinting
Python 3.0 introduced annotation syntax, which was formalized into type hinting syntax in Python 3.5. Since then, every version of Python has added improvements to the gradual typing system.
As programs grew, the usefulness of types became more and more apparent– and smarter compilers & interpreters allowed finding a balance between the verbosity of languages like C and the untyped chaos that a 50,000 line Python program can become.
Python’s approach of gradual typing, introducing type annotations where useful for clarity & correctness, and leaving them optional when unimportant or hard-to-define– is perhaps the obvious choice for a language as rooted in pragmatism-over-purity as Python.
As of 2026, the most important things to know about type hints are:
Type hints are optional and unenforced: just like a docstring or comment can be incorrect & misleading, a function that is annotated in a particular way might be called with the “wrong” types without an automatic error.
Type hints can be accessed by Python code, allowing for libraries (& the language itself) to use the hints to determine behavior. (dataclass, pydantic, typer, etc.)
Tools exist to check & enforce type hints. (pyright, ty)
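The first two points above can be seen in a tiny sketch (the `greet` function is my own example):

```python
from typing import get_type_hints

def greet(name: str) -> str:
    return f"hello {name}"

# hints are not enforced: this "wrong" call runs without error
print(greet(42))  # hello 42

# but they are available to code that wants to inspect them
print(get_type_hints(greet))  # {'name': <class 'str'>, 'return': <class 'str'>}
```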
```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    id: int
    username: str
    age: int
    tags: list[str]

def get_user_details(username_or_id: str | int) -> UserProfile: ...

def get_users(count: int) -> list[UserProfile]: ...
```
We’ll explore typing in more detail in the next few weeks.
New Syntax
Borrowing from other languages is part of what makes Python great. Python tends to borrow from functional languages like Haskell as well as languages in common use like JavaScript and Rust.
(This is far from a one way street, Python’s influence on these languages is also apparent!)
Python programs are a series of statements: if True:, x = 4, etc. are units known to the interpreter as ‘statements’.
```python
ll = []
z = 5
def f(x): pass
if x:
for i in range(k):
```
An expression is a kind of statement that evaluates to a value; that is to say, if it is valid on the right-hand side of x = ?, then it is an expression:
```python
4
[1, 2, 3]
3 * 5
"hello world"
f(a, b)
None
c.g(h) + r
```
All of these result in a value that can be assigned to a variable/name in Python. The same is not true of all statements.
There are many places in Python where an expression is expected, for example:
```python
# if statements need a value to evaluate
if <expression>:
    <statement>

# list comprehensions need an expression term (or two)
[<expression> for x in iterable if <expression2>]

# return statements need an expression
return <expression>
```
Sometimes it can be beneficial to put an assignment where an expression is expected, for example:
```python
# this code calls `len()` twice
if len(items) > 10:
    print(f"too many items, expected <= 10, got {len(items)}")

# we could rewrite as
if (num := len(items)) > 10:
    print(f"too many items, expected <= 10, got {num}")
```
We assign to num within the expression itself, allowing us to use the variable later.
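The walrus operator also shows up in loop conditions; a small sketch, using io.StringIO as a stand-in for a real file:

```python
import io

# io.StringIO stands in for an open file here
stream = io.StringIO("hello world")

chunks = []
# read 4 characters at a time until read() returns ""
while (chunk := stream.read(4)):
    chunks.append(chunk)

print(chunks)  # ['hell', 'o wo', 'rld']
```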
Another example from the original PEP shows where this could lead to a significant performance improvement:
```python
[clean_name.title() for name in names
 if (clean_name := normalize_unicode(name)) in allowed_names]
```
normalize_unicode is likely not a cheap function; presumably it is O(len(s)) for a given string.
Without := we’d be tempted to write:
```python
# processes each string twice
return [normalize_unicode(name).title() for name in names
        if normalize_unicode(name) in allowed_names]
```
Or perhaps pre-process the list, going to two complete iterations & additional memory:
```python
normalized = [normalize_unicode(name) for name in names]
return [name.title() for name in normalized if name in allowed_names]
```
Structural Pattern Matching
Python 3.10 introduced structural pattern matching (the match statement), inspired by functional languages: Haskell, Scala, etc. have similar syntax. Notably, so does Rust.
It can be thought of as an extension of unpacking syntax:
```python
# iterables can be unpacked into an equal number of variables
a, b = ["one", "two"]
print(f"{a=} {b=}")

# this also works with packing syntax
x, *rest = [1, 2, 3, 4, 5]
print(f"{x=} {rest=}")

# this can lead to some interesting idioms
x, *rest, y, z = [1, 2, 3, 4, 5]
print(f"{x=} {rest=} {y=} {z=}")
```
```python
import math

things = [
    ("add", 1, 2),
    ("div", 5, 2),
    ("div", 5, 0),
    (7, "!"),
    ("mul", 5, 4),
    "just a string",
]

for val in things:
    match val:
        case ("add", a, b):
            print(f"{a} + {b} = {a+b}")
        case ("div", a, 0):
            print("cannot divide by zero")
        case ("div", a, b):
            print(f"{a} / {b} = {a/b}")
        case (n, "!"):
            print(f"{n}! =", math.factorial(n))
        case [*args]:
            print("invalid command:", args)
        case single:
            print(f"should provide a tuple, got: {single!r}")
```
1 + 2 = 3
5 / 2 = 2.5
cannot divide by zero
7! = 5040
invalid command: ['mul', 5, 4]
should provide a tuple, got: 'just a string'
An attempt is made to match the contents of val against the cases in the order they appear. Only the first matching case will execute. (This is different from the switch statement in some C-like languages, where execution can fall through to later cases.)
Exception Groups
Exception groups (Python 3.11) allow raising & handling multiple unrelated exceptions at once. They were mainly introduced due to specific challenges with writing asyncio code, where different tasks raise different exceptions.
While it won’t be something we focus on in this class, another significant theme of the improvements of the past few years has been developer experience.
Improved exceptions: AttributeError/NameError/etc. now make suggestions and show context.
An improved Python interpreter REPL: syntax highlighting, better error handling, and other modern conveniences.
f-string debugging
One notable developer experience improvement is a special debugging syntax for f-strings.
Often, when print-debugging, you print many variables and can lose track of which are which. This leads to needing to write code like:
```python
x = 1
ll = [1, 2, 3]

# this repetition is time-consuming & error-prone
print(f"DEBUG: x={x} ll={ll}")

# instead append = within the brackets
print(f"DEBUG: {x=} {ll=}")
```
dictionary merges
In web development & configuration contexts it is very common to need to combine dictionaries:
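Since Python 3.9 (PEP 584), dictionaries support the | merge and |= update operators; a small sketch with hypothetical config dicts:

```python
# hypothetical config dicts for illustration
defaults = {"host": "localhost", "port": 8000, "debug": False}
overrides = {"port": 9000, "debug": True}

# `|` builds a new dict; keys on the right win
config = defaults | overrides
print(config)  # {'host': 'localhost', 'port': 9000, 'debug': True}

# `|=` updates in place, like dict.update()
settings = dict(defaults)
settings |= overrides
```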
exception notes
A small quality-of-life improvement (add_note, added in Python 3.11) when catching exceptions from underlying code and annotating them with additional context:
```python
def parse_contents(contents: str):
    # simplified example to demonstrate exception
    if len(contents) > 10:
        raise Exception("error processing file")

def process_files(filenames):
    for name in filenames:
        try:
            with open(name) as f:
                parse_contents(f.read())
        except Exception as e:
            e.add_note(f"while processing filename {name}")
            raise

filenames = ["one.txt", "two.txt", "three.txt"]
try:
    process_files(filenames)
except Exception as e:
    # manually printing notes for demonstration
    # they automatically appear as part of tracebacks
    print(e, "\n", "\n".join(e.__notes__))
```
error processing file
while processing filename two.txt