timeit and its default_timer completely disagree

2024/11/20 20:34:33

I benchmarked these two functions (they unzip pairs back into source lists, came from here):

n = 10**7
a = list(range(n))
b = list(range(n))
pairs = list(zip(a, b))

def f1(a, b, pairs):
    a[:], b[:] = zip(*pairs)

def f2(a, b, pairs):
    for i, (a[i], b[i]) in enumerate(pairs):
        pass

Results with timeit.timeit (five rounds, numbers are seconds):

f1 1.06   f2 1.57   
f1 0.96   f2 1.69   
f1 1.00   f2 1.85   
f1 1.11   f2 1.64   
f1 0.95   f2 1.63   

So clearly f1 is a lot faster than f2, right?

But then I also measured with timeit.default_timer and got a completely different picture:

f1 7.28   f2 1.92   
f1 5.34   f2 1.66   
f1 6.46   f2 1.70   
f1 6.82   f2 1.59   
f1 5.88   f2 1.63   

So clearly f2 is a lot faster, right?

Sigh. Why do the timings totally differ like that, and which timing method should I believe?

Full benchmark code:

from timeit import timeit, default_timer

n = 10**7
a = list(range(n))
b = list(range(n))
pairs = list(zip(a, b))

def f1(a, b, pairs):
    a[:], b[:] = zip(*pairs)

def f2(a, b, pairs):
    for i, (a[i], b[i]) in enumerate(pairs):
        pass

print('timeit')
for _ in range(5):
    for f in f1, f2:
        t = timeit(lambda: f(a, b, pairs), number=1)
        print(f.__name__, '%.2f' % t, end='   ')
    print()

print('default_timer')
for _ in range(5):
    for f in f1, f2:
        t0 = default_timer()
        f(a, b, pairs)
        t = default_timer() - t0
        print(f.__name__, '%.2f' % t, end='   ')
    print()
Answer

As Martijn commented, the difference is Python's garbage collection, which timeit.timeit disables during its run. And zip creates 10 million iterator objects, one for each of the 10 million iterables it's given.

So, garbage-collecting 10 million objects simply takes a lot of time, right? Mystery solved!

Well... no. That's not really what happens, and it's way more interesting than that. And there's a lesson to be learned to make such code faster in real life.

Python's main way to discard objects no longer needed is reference counting. The garbage collector, which is being disabled here, is for reference cycles, which the reference counting won't catch. And there aren't any cycles here, so it's all discarded by reference counting and the garbage collector doesn't actually collect any garbage.
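As a quick self-contained illustration (my own aside, not part of the benchmark): an object in a reference cycle keeps its own refcount above zero, so reference counting alone can never free it, and only the cyclic collector reclaims it.

```python
import gc

class Node:
    pass

# A self-referencing object keeps its own refcount above zero, so
# reference counting alone can never free it.
cycle = Node()
cycle.self_ref = cycle
del cycle  # the cycle is now unreachable, but still alive

# Only the cyclic collector finds it. gc.collect() returns the number
# of unreachable objects it found (at least the Node itself, possibly
# also its __dict__).
unreachable = gc.collect()
print(unreachable)
```

Nothing like that exists in the benchmark code, which is exactly why the collector's work there is pure overhead.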

Let's look at a few things. First, let's reproduce the much faster time by disabling the garbage collector ourselves.

Common setup code (all further blocks of code should be run directly after this in a fresh run, don't combine them):

import gc
from timeit import default_timer as timer

n = 10**7
a = list(range(n))
b = list(range(n))
pairs = list(zip(a, b))

Timing with garbage collection enabled (the default):

t0 = timer()
a[:], b[:] = zip(*pairs)
t1 = timer()
print(t1 - t0)

I ran it three times, took 7.09, 7.03 and 7.09 seconds.

Timing with garbage collection disabled:

t0 = timer()
gc.disable()
a[:], b[:] = zip(*pairs)
gc.enable()
t1 = timer()
print(t1 - t0)

Took 0.96, 1.02 and 0.99 seconds.

So now we know it's indeed the garbage collection that somehow takes most of the time, even though it's not collecting anything.

Here's something interesting: Already just the creation of the zip iterator is responsible for most of the time:

t0 = timer()
z = zip(*pairs)
t1 = timer()
print(t1 - t0)

That took 6.52, 6.51 and 6.50 seconds.

Note that I kept the zip iterator in a variable, so there isn't even anything to discard yet, neither by reference counting nor by garbage collecting!

What?! Where does the time go, then?

Well... as I said, there are no reference cycles, so the garbage collector won't actually collect any garbage. But the garbage collector doesn't know that! In order to figure that out, it needs to check!

Since the iterators could become part of a reference cycle, they're registered for garbage collection tracking. Let's see how many more objects get tracked due to the zip creation (doing this just after the common setup code):

gc.collect()
tracked_before = len(gc.get_objects())
z = zip(*pairs)
print(len(gc.get_objects()) - tracked_before)

The output: 10000003 new objects tracked. I believe that's the zip object itself, its internal tuple to hold the iterators, its internal result holder tuple, and the 10 million iterators.
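You can also query tracking for individual objects with gc.is_tracked (a small check of my own, separate from the benchmark): iterators and the zip object are tracked, while atomic values are not.

```python
import gc

# Iterators hold a reference to their container, so they could in
# principle become part of a reference cycle; CPython therefore
# registers them with the garbage collector.
print(gc.is_tracked(iter((1, 2))))   # tuple iterator: tracked
print(gc.is_tracked(zip([1], [2])))  # zip object: tracked

# Atomic objects can't reference anything, so tracking them is pointless.
print(gc.is_tracked(42))
print(gc.is_tracked('hello'))
```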

Ok, so the garbage collector tracks all these objects. But what does that mean? Well, every now and then, after a certain number of new object creations, the collector goes through the tracked objects to see whether some are garbage and can be discarded.

The collector keeps three "generations" of tracked objects. New objects go into generation 0. If they survive a collection run there, they're moved into generation 1. If they survive a collection there, they're moved into generation 2. If they survive further collection runs there, they remain in generation 2. Let's check the generations before and after:

gc.collect()
print('collections:', [stats['collections'] for stats in gc.get_stats()])
print('objects:', [len(gc.get_objects(i)) for i in range(3)])
z = zip(*pairs)
print('collections:', [stats['collections'] for stats in gc.get_stats()])
print('objects:', [len(gc.get_objects(i)) for i in range(3)])

Output (each line shows values for the three generations):

collections: [13111, 1191, 2]
objects: [17, 0, 13540]
collections: [26171, 2378, 20]
objects: [317, 2103, 10011140]

The 10011140 shows that most of the 10 million iterators were not just registered for tracking, but are already in generation 2. So they were part of at least two garbage collection runs. And the number of generation 2 collections went up from 2 to 20, so our millions of iterators were part of up to 20 garbage collection runs (two to get into generation 2, and up to 18 more while already in generation 2). We can also register a callback to count more precisely:

checks = 0
def count(phase, info):
    if phase == 'start':
        global checks
        checks += len(gc.get_objects(info['generation']))

gc.callbacks.append(count)
z = zip(*pairs)
gc.callbacks.remove(count)
print(checks)

That told me 63,891,314 checks total (i.e., on average, each iterator was part of over 6 garbage collection runs). That's a lot of work. And all this just to create the zip iterator, before even using it.
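Why so many runs? Their frequency is governed by per-generation thresholds, which you can inspect and change (a side note of mine; the (700, 10, 10) default is CPython's usual setting and may differ between versions):

```python
import gc

# (threshold0, threshold1, threshold2): a generation-0 run is triggered
# roughly every threshold0 net new tracked allocations; every threshold1
# gen-0 runs also collect gen 1, and every threshold2 gen-1 runs also
# collect gen 2.
print(gc.get_threshold())   # commonly (700, 10, 10) in CPython

# Raising threshold0 makes collections rarer (more memory, less CPU):
old = gc.get_threshold()
gc.set_threshold(50_000, old[1], old[2])
print(gc.get_threshold())

gc.set_threshold(*old)      # restore the previous setting
```

With 10 million tracked allocations and a threshold around 700, thousands of generation-0 runs are inevitable, which matches the collection counts above.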

Meanwhile, the loop

for i, (a[i], b[i]) in enumerate(pairs):
    pass

creates almost no new objects at all. Let's check how much tracking enumerate causes:

gc.collect()
tracked_before = len(gc.get_objects())
e = enumerate(pairs)
print(len(gc.get_objects()) - tracked_before)

Output: 3 new objects tracked (the enumerate iterator object itself, the single iterator it creates for iterating over pairs, and the result tuple it'll use (code here)).

I'd say that answers the question "Why do the timings totally differ like that?". The zip solution creates millions of objects that go through multiple garbage collection runs, while the loop solution doesn't. So disabling the garbage collector helps the zip solution tremendously, while the loop solution doesn't care.

Now about the second question: "Which timing method should I believe?". Here's what the documentation has to say about it (emphasis mine):

By default, timeit() temporarily turns off garbage collection during the timing. The advantage of this approach is that it makes independent timings more comparable. The disadvantage is that GC may be an important component of the performance of the function being measured. If so, GC can be re-enabled as the first statement in the setup string. For example:

timeit.Timer('for i in range(10): oct(i)', 'gc.enable()').timeit()
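Applied to this question's benchmark, that suggestion would look roughly like the following sketch (with a smaller n than the question's, just to keep it quick):

```python
import gc
from timeit import timeit

n = 10**5  # smaller than the question's 10**7, for a quick demo
a = list(range(n))
b = list(range(n))
pairs = list(zip(a, b))

def f1(a, b, pairs):
    a[:], b[:] = zip(*pairs)

# timeit disables GC by default; passing 'gc.enable()' as the setup
# turns it back on before timing, so the measured time includes the
# collector's checks, as in real use.
t_gc_on = timeit(lambda: f1(a, b, pairs), 'gc.enable()', number=1)
t_gc_off = timeit(lambda: f1(a, b, pairs), number=1)
print(t_gc_on, t_gc_off)
```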

In our case here, the cost of garbage collection doesn't stem from some other unrelated code. It's directly caused by the zip call, and you do pay this price in reality when you run that code. So in this case, I do consider it an "important component of the performance of the function being measured". To directly answer the question as asked: here I'd believe the default_timer method, not the timeit method. Or, put differently: here the timeit method should be used with garbage collection enabled, as suggested in the documentation.

Or... alternatively, we could actually disable garbage collection as part of the solution (not just for benchmarking):

def f1(a, b, pairs):
    gc.disable()
    a[:], b[:] = zip(*pairs)
    gc.enable()

But is that a good idea? Here's what the gc documentation says:

Since the collector supplements the reference counting already used in Python, you can disable the collector if you are sure your program does not create reference cycles.

Sounds like it's an ok thing to do. But I'm not sure I don't create reference cycles elsewhere in my program, so I finish with gc.enable() to turn garbage collection back on after I'm done. At that point, all those temporary objects have already been discarded thanks to reference counting. So all I'm doing is avoiding lots of pointless garbage collection checks. I find this a valuable lesson and I might actually do that in the future, if I know I only temporarily create a lot of objects.
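If you do that, one robustness improvement (my own addition, not part of the original answer): restore the collector's previous state even if an exception occurs mid-way, e.g. with a small context manager, so a failure inside the block can't leave GC permanently off.

```python
import gc
from contextlib import contextmanager

@contextmanager
def gc_paused():
    """Temporarily disable the cyclic GC, restoring its previous state."""
    was_enabled = gc.isenabled()
    gc.disable()
    try:
        yield
    finally:
        if was_enabled:
            gc.enable()

def f1(a, b, pairs):
    with gc_paused():  # skip pointless GC checks for the temporary iterators
        a[:], b[:] = zip(*pairs)

# Tiny usage example:
a, b = [0, 0], [0, 0]
pairs = [(1, 2), (3, 4)]
f1(a, b, pairs)
print(a, b)  # [1, 3] [2, 4]
```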

Finally, I highly recommend reading the gc module documentation and the Design of CPython’s Garbage Collector in Python's developer guide. Most of it is easy to understand, and I found it quite interesting and enlightening.
