Description
Feature or enhancement
Proposal:
If an iterator is repeatedly chained, speed slows down the longer the chain of chains gets. For example,
```python
from itertools import count, chain, repeat
from time import perf_counter as now

it = chain([None], count(0))
expect = 0
while True:
    start = now()
    for _ in repeat(None, 1000):
        x = next(it)
        assert x is None
        x = next(it)
        assert x == expect
        expect += 1
        it = chain([None], it)
    finish = now()
    print(format(expect, '_'), finish - start)
```
on my box just now produced (Win64, Python 3.14.5):
```
1_000 0.0058611
2_000 0.0192824
3_000 0.026536800000000003
4_000 0.0335665
5_000 0.03840320000000001
6_000 0.049022399999999994
7_000 0.061446399999999984
8_000 0.07017669999999998
9_000 0.0710173
10_000 0.08084079999999999
11_000 0.09169510000000003
12_000 0.09909350000000006
13_000 0.1044349
14_000 0.12177879999999996
15_000 0.12580650000000015
16_000 0.13390919999999995
Traceback (most recent call last):
  ...
    x = next(it)
        ^^^^^^^^
RecursionError: maximum recursion depth exceeded
```
Could the `chain` constructor peek inside `chain` object arguments to skip over their exhausted iterators somehow at construction time?

In the example, `it` starts as `chain([None], count(0))`. By the time the next `chain` is done, the `[None]` part has already been exhausted. So the following `chain([None], it)` could conceivably notice that `it` is already a `chain` object, but that it's been reduced to wrapping only a single still-active iterator, and grab that remaining iterator itself rather than wrap the whole input `it` chain object.
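To make the idea concrete, here is a pure-Python sketch of the flattening behavior. `flat_chain` is a hypothetical name, not CPython's `chain`; the point is that when an argument is itself a `flat_chain`, the constructor absorbs its remaining (not-yet-exhausted) iterators instead of nesting, so delegation depth stays constant no matter how often the result is rewrapped:

```python
from collections import deque
from itertools import count


class flat_chain:
    """Illustrative sketch only (not CPython's chain).

    When an argument is itself a flat_chain, absorb its still-pending
    iterators instead of wrapping the whole object, keeping the
    per-next() lookup depth constant.
    """

    def __init__(self, *iterables):
        pending = deque()
        for obj in iterables:
            if isinstance(obj, flat_chain):
                # Grab the inner object's not-yet-exhausted iterators.
                pending.extend(obj._pending)
            else:
                pending.append(iter(obj))
        self._pending = pending

    def __iter__(self):
        return self

    def __next__(self):
        while self._pending:
            try:
                return next(self._pending[0])
            except StopIteration:
                # Drop exhausted iterators eagerly, so a later rewrap
                # never sees them.
                self._pending.popleft()
        raise StopIteration


# Mimic the loop from the issue: repeated rewrapping no longer nests.
it = flat_chain([None], count(0))
for expect in range(16_000):
    assert next(it) is None
    assert next(it) == expect
    it = flat_chain([None], it)
```

With this scheme the rewrapped object always holds at most two live iterators (the fresh `[None]` and the underlying `count`), so there is no recursion and no per-wrap slowdown. A real `chain` change would presumably do the equivalent in C, and would have to decide whether absorbing another chain's internal state at construction time is acceptable semantically (e.g. the inner chain object would afterwards share, or give up, its iterators).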
I haven't thought through all this, but wanted to put this out here before I forget again.
Has this already been discussed elsewhere?
This is a minor feature, which does not need previous discussion elsewhere
Links to previous discussion of this feature:
No response