gh-122881: Reduce asyncio heapq scheduling overhead by bdraco · Pull Request #122882 · python/cpython · GitHub

gh-122881: Reduce asyncio heapq scheduling overhead #122882

Closed
wants to merge 14 commits into from
Changes from 1 commit
access self more often
bdraco committed Aug 12, 2024
commit a5b66476566019f2b643a61b5fd815d5b3276e84
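
This commit switches _run_once back to plain attribute access on self._ready and self._scheduled instead of the locally cached references introduced earlier in the PR, as the diff below shows. To illustrate what is being traded off, here is a minimal, self-contained timeit sketch; the Loop class, queue size, and repeat count are illustrative assumptions, not part of the PR:

import timeit
from collections import deque

class Loop:
    """Hypothetical stand-in for the event loop's ready queue (illustration only)."""

    def __init__(self):
        self._ready = deque(range(10_000))

    def drain_attr(self):
        # Attribute lookup on every iteration -- the style this commit returns to.
        for _ in range(len(self._ready)):
            self._ready.popleft()

    def drain_local(self):
        # Cache the bound popleft in a local -- the style being reverted here.
        ready_popleft = self._ready.popleft
        for _ in range(len(self._ready)):
            ready_popleft()

loop = Loop()
print("attribute access:", timeit.timeit(
    lambda: (loop.__init__(), loop.drain_attr()), number=200))
print("cached local:    ", timeit.timeit(
    lambda: (loop.__init__(), loop.drain_local()), number=200))

Both styles do the same work; the difference is only the cost of the repeated attribute lookups inside a function that runs on every event-loop iteration.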
19 changes: 8 additions & 11 deletions Lib/asyncio/base_events.py
@@ -1980,14 +1980,12 @@ def _run_once(self):
             handle._scheduled = False
 
         timeout = None
-        ready = self._ready
-        scheduled = self._scheduled
 
-        if ready or self._stopping:
+        if self._ready or self._stopping:
             timeout = 0
-        elif scheduled:
+        elif self._scheduled:
             # Compute the desired timeout.
-            timeout = scheduled[0][0] - self.time()
+            timeout = self._scheduled[0][0] - self.time()
             if timeout > MAXIMUM_SELECT_TIMEOUT:
                 timeout = MAXIMUM_SELECT_TIMEOUT
             elif timeout < 0:
@@ -2000,21 +1998,20 @@
 
         # Handle 'later' callbacks that are ready.
         end_time = self.time() + self._clock_resolution
-        while scheduled and scheduled[0][0] < end_time:
-            _, handle = heapq.heappop(scheduled)
+        while self._scheduled and self._scheduled[0][0] < end_time:
+            _, handle = heapq.heappop(self._scheduled)
             handle._scheduled = False
-            ready.append(handle)
+            self._ready.append(handle)
 
         # This is the only place where callbacks are actually *called*.
         # All other places just add them to ready.
         # Note: We run all currently scheduled callbacks, but not any
         # callbacks scheduled by callbacks run this time around --
         # they will be run the next time (after another I/O poll).
         # Use an idiom that is thread-safe without using locks.
-        ntodo = len(ready)
-        ready_popleft = ready.popleft
+        ntodo = len(self._ready)
         for i in range(ntodo):
-            handle = ready_popleft()
+            handle = self._ready.popleft()
             if handle._cancelled:
                 continue
             if self._debug:
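
For reference, the code path this diff touches is driven by loop.call_later()/call_at(): each call pushes an entry onto the loop's _scheduled heap, and _run_once() pops due entries off the heap into _ready before running them. Below is a small, self-contained sketch that exercises that path; the callback count and delays are arbitrary assumptions, not a benchmark from the PR:

import asyncio

async def main():
    loop = asyncio.get_running_loop()
    done = asyncio.Event()
    remaining = 10_000

    def callback():
        # Each call_later() above schedules a handle on the loop's _scheduled heap;
        # _run_once() later pops due entries off the heap and appends them to _ready.
        nonlocal remaining
        remaining -= 1
        if remaining == 0:
            done.set()

    for i in range(10_000):
        loop.call_later(0.001 * (i % 10), callback)

    await done.wait()

asyncio.run(main())

Workloads like this, with many pending timers, are where any per-iteration overhead in _run_once becomes visible.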