8000 On older versions of Python, skip benchmarks that use features introduced in newer Python versions by AlexWaygood · Pull Request #283 · python/pyperformance · GitHub


Merged · 5 commits · Apr 27, 2023
Changes from 1 commit
commit 2eea05b7edf420309bff0685f0d1e59e96d5dc7c · AlexWaygood committed Apr 26, 2023

On older versions of Python, skip benchmarks that use features introduced in newer Python versions
pyperformance/_benchmark.py — 6 additions, 1 deletion

```diff
@@ -13,6 +13,7 @@
 import sys
 
 import pyperf
+from packaging.specifiers import SpecifierSet
 
 from . import _utils, _benchmark_metadata
@@ -164,9 +165,13 @@ def runscript(self):
     def extra_opts(self):
         return self._get_metadata_value('extra_opts', ())
 
+    @property
+    def python(self):
+        req = self._get_metadata_value("python", None)
+        return None if req is None else SpecifierSet(req)
+
     # Other metadata keys:
     # * base
-    # * python
     # * dependencies
     # * requirements
```
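The new `python` property parses a benchmark's `python` metadata value with the third-party `packaging` library's `SpecifierSet`, which supports membership tests against version strings. A minimal sketch of those semantics (the `req`/`spec` names here are illustrative, not pyperformance's API):

```python
from packaging.specifiers import SpecifierSet

# A requirement string like one a benchmark's metadata might declare.
spec = SpecifierSet(">=3.8")

# SpecifierSet supports `in` checks against version strings.
print("3.11.2" in spec)  # True
print("3.7.9" in spec)   # False

# A missing requirement is modelled as None, mirroring the property above.
req = None
parsed = None if req is None else SpecifierSet(req)
print(parsed)  # None
```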
pyperformance/cli.py — 5 additions, 1 deletion

```diff
@@ -241,11 +241,15 @@ def parse_entry(o, s):
 
     # Get the selections.
     selected = []
+    this_python_version = ".".join(map(str, sys.version_info[:3]))
     for bench in _benchmark_selections.iter_selections(manifest, parsed_infos):
         if isinstance(bench, str):
             logging.warning(f"no benchmark named {bench!r}")
             continue
-        selected.append(bench)
+        # Filter out any benchmarks that can't be run on the Python version we're running
+        if bench.python is not None and this_python_version in bench.python:
+            selected.append(bench)
 
     return selected
```
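The CLI change builds the running interpreter's version string from `sys.version_info` and tests it against each benchmark's specifier. A hedged sketch of that comparison (the `Bench` stand-in class is hypothetical, not pyperformance's real benchmark type):

```python
import sys
from packaging.specifiers import SpecifierSet

# Hypothetical stand-in for a benchmark with a parsed "python" requirement.
class Bench:
    def __init__(self, python=None):
        self.python = None if python is None else SpecifierSet(python)

benches = [Bench(">=3.8"), Bench("<3.0"), Bench()]

# Same version string the PR builds, e.g. "3.11.2".
this_python_version = ".".join(map(str, sys.version_info[:3]))

# The filter as written in this first commit: note that a benchmark with
# no requirement at all (python is None) is also dropped by this condition.
selected = [b for b in benches
            if b.python is not None and this_python_version in b.python]
```

Note that with this condition a benchmark whose metadata declares no `python` requirement never reaches `selected`; the PR has four further commits after this one, so the filter shown here is not necessarily its final form.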