CI: test_requirements is not used consistently #24655
There is no good way to tell pyodide to ignore ninja, so it has to be installed for testing outside the requirements file. What else is not installed via the requirements file?
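One conceivable workaround for the pyodide/ninja case (purely a sketch, not what the CI currently does) would be to feed that job a filtered copy of the requirements file with `ninja` stripped out, so everything else still comes from the shared file:

```shell
# Hypothetical workaround, not current CI behaviour. A stand-in requirements
# file is created here so the snippet is self-contained; the real job would
# filter the repo's actual requirements file instead.
printf 'ninja\npytest\nhypothesis\n' > /tmp/reqs_all.txt
grep -v '^ninja' /tmp/reqs_all.txt > /tmp/reqs_no_ninja.txt
cat /tmp/reqs_no_ninja.txt
```

The pyodide job would then install from the filtered copy rather than hand-listing packages.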
Oh, that one is fair enough; I was mostly referring to the handling of pytest plugins, e.g. the one I'm adding in #23812. pyodide itself is not listed in the requirements file either.
Doctesting should only run in a single job though, so how can that not be enough?
Should it, and why? Either way, this is a bit of an inconsistency, independent of doctesting: I don't think there should be two different ways in which test dependencies like pytest plugins get installed.
It's there to check the docs, which needs to be done exactly once. It's not platform or Python version dependent. And if it fails, we want it to fail in a job dedicated to docs/refguide-check.
We have other jobs that explicitly avoid running them.
Also to check that the examples in the docs are indeed not version- or platform-dependent. Why should the examples in the documentation be treated differently from the other unit tests?
Yet I don't see any jobs that use only those two.
I like the idea myself.
They are not unit tests. We've been over that for many years, again and again. Doctests are code examples, and should not be treated as tests. The only purpose of "doc testing" is to verify that all the examples run and the reprs are reasonable.
I'll note that we are still in the middle of a major CI overhaul. First we need to finish making it work with Meson, then drop all
I hope that we can get there indeed. Requirements files are not ideal, but there's no better alternative yet (installing directly from the project metadata isn't one, at least not yet).
I think this is where the opinions differ. Ideally, examples in the narrative docs should indeed be platform-independent, but that's not always the case. How do you propose to ensure that without actually running them? So far I was assuming overall support for enabling more comprehensive doctesting, as the status quo apparently leaves a few gaps here and there.
That's both unlikely to happen and very low-impact if it does happen for some function at some point. If so, opening an issue and then addressing it should be fine. This is not the kind of thing that should be tested in CI on every PR.
I am certainly not in favor of that. Imho it's both too slow and too fragile, for little benefit. The questions we've had for a long time around doctesting are around maintainability of the tooling and whether that can be shared between the various refguide checkers and pytest plugins. Some unification seemed in order. Running more doctests or treating them as tests has never been a goal, at least as far as I am aware.
You could break the requirements file down into more specific groups.
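A sketch of what such a split could look like; the group names and file contents here are invented for illustration and are not files in the repo:

```shell
# Illustrative split of one requirements file into purpose-specific groups.
# Filenames and package lists are assumptions, not actual repo contents.
printf 'pytest\nhypothesis\n' > /tmp/test_reqs.txt     # every test job
printf 'ninja\n'              > /tmp/build_reqs.txt    # skippable by pyodide
printf 'scipy\nmatplotlib\n'  > /tmp/doctest_reqs.txt  # docs job only
# A full job would install all groups:
cat /tmp/test_reqs.txt /tmp/build_reqs.txt /tmp/doctest_reqs.txt
```

Each job then installs only the groups it actually needs, instead of hand-listing packages in the workflow.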
This would be part of it. But enabling it and letting it run is in fact far easier than hacking around to make it run on only one job.
This should work fine:
A plain
It's trivial to do this in a single job. It's literally two lines right now:
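The two lines themselves aren't reproduced here, but the shape of a single-job doctest run can be sketched with the stdlib runner on a stand-in module; the real CI job would point at numpy's own refguide/doctest check instead:

```shell
# Stand-in module with one doctest; illustration only, the actual job runs
# numpy's refguide check rather than this toy file.
cat > /tmp/example_mod.py <<'EOF'
def add(a, b):
    """
    >>> add(1, 2)
    3
    """
    return a + b
EOF
# Running the doctests is then a one-liner; it exits nonzero on any failure.
python3 -m doctest /tmp/example_mod.py
```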
And if it fails, we get a single CI job failing rather than a ton of them. Switching the above to the external pytest plugin would be a matter of installing that plugin and adjusting the invocation.
I'm fixing the issue.
Some of the jobs manually install test dependencies rather than using the requirements file (while others install using the file). This introduces the potential of missing newly added dependencies, etc.
Is this intentional? If not, I'll go ahead and open a PR that uses the requirements file instead.
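The proposed fix amounts to making the requirements file the single source of truth: each workflow step installs from it instead of hand-listing packages, so newly added dependencies propagate automatically. A minimal sketch, with a stand-in file in place of the real one:

```shell
# Stand-in for the shared requirements file; in the repo this is the real
# test_requirements.txt, maintained in one place.
printf 'pytest\nhypothesis\n' > /tmp/shared_reqs.txt

# Before (per-job hand lists, easy to let drift out of sync):
#   pip install pytest hypothesis
# After (every job reads the shared file):
#   pip install -r test_requirements.txt
sed 's/^/would install: /' /tmp/shared_reqs.txt
```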