DEV, BUILD: add pypy3 to azure CI #12594
Conversation
Force-pushed from fb56570 to b9280ed (compare)
The Azure pipeline succeeded, but tests failed. This is the expected outcome, at least until PyPy passes 100% of the tests for a while.
Was the plan to merge this now, or just to have a place where a quick restart shows whether there are problems? If the plan is to merge, I think this should not run on every PR; that is just a waste of electricity as long as there are known failures.
Shouldn't these tests be run on the PyPy side?
Once …
@mattip I'm happy if NumPy is made more PyPy-compatible, but I don't see PyPy as a blocker for anything. CPython is our primary platform.
Well, I think it would be useful to add to our CI and keep it green. We can add … Disclaimer: SciPy's PyPy CI has been fairly consistently red, which is annoying.
Yes, this is my preferred approach.
I'm +1 on this. I thought there was still some interest in "swappable back-ends" in the ecosystem too, so staying in touch with the JIT community via some measure of cross-testing seems like a good idea. The electrical footprint of becoming completely incompatible with the JIT community and later trying to remedy a massive divergence could also be quite high. That said, I do have to agree that the maintenance burden recently observed for PyPy with SciPy CI has been a little scary.
Re CI issues, codecov is red yet again. This passed my annoyance threshold a long time ago; I'm starting to favor nuking it completely. Do we want to have one more go at fixing it?
-1 on my end for removing codecov until there's a viable replacement in the ecosystem; there's a PR open that demonstrates it can be green if we use the SciPy config verbatim as Pauli suggested (see #12549), but that won't alert us to PRs that have poor test coverage in any way. If there's a strong preference among devs to have the CI green and expect reviewers to manually check coverage, versus CI that is often questionably red but reports the patch diff % more explicitly, perhaps this should be expressed clearly in that PR and I'll clean it up so it can be merged.
I am also -1 on removing codecov entirely unless we find an alternative. It's an important check to have easy access to.
Yes, do keep code coverage. At astropy we use …
Actually, never mind, we seem to be using …
Thanks, will move the comment over there. Don't get me wrong, having coverage reports nicely formatted at hand is very nice; I'm just tired of useless CI failures. Looking at the PR list, it's mostly red, for various other reasons besides codecov: Travis CI and Circle CI have timeouts, for example. But the current state is simply not OK, and codecov is the easiest to fix and offers the least value of all the flaky CI we have.
As this does not report failure, how is it intended to be used?
I would guess the intention is to keep an eye on it and start gradually reducing the failure count until it can be turned on. I suppose one could grep out the current failure count and fail if it increases in new PRs; as Matti closes down the issues, the threshold (currently 25 failures allowed) can be adjusted accordingly.
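A rough sketch of such a ratchet, purely illustrative (the `runtests.py` entry point, the `MAX_ALLOWED_FAILURES` name, and the regex on pytest's summary line are assumptions; only the 25-failure threshold comes from the comment above):

```python
import re
import subprocess
import sys

MAX_ALLOWED_FAILURES = 25  # ratchet down as Matti closes the issues

# Run the suite and capture pytest's summary line,
# e.g. "25 failed, 6800 passed, 50 skipped in 600.00s".
proc = subprocess.run([sys.executable, "runtests.py", "-v"],
                      capture_output=True, text=True)
match = re.search(r"(\d+) failed", proc.stdout)
failures = int(match.group(1)) if match else 0

print(f"{failures} failures (threshold: {MAX_ALLOWED_FAILURES})")
sys.exit(1 if failures > MAX_ALLOWED_FAILURES else 0)
```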
If we really want to commit to maintaining at least the present level of support for PyPy, we really should mark all these failures as xfail (on PyPy only) and fail CI if any new failures appear.
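A hedged sketch of what such a PyPy-only xfail marker could look like (the test name and reason string are made up for illustration; `strict=True` is what makes CI fail when a marked test unexpectedly starts passing):

```python
import platform

import pytest

IS_PYPY = platform.python_implementation() == "PyPy"

@pytest.mark.xfail(IS_PYPY, strict=True,
                   reason="PyPy cannot overwrite tp_doc yet")
def test_docstring_rewrite():
    # Runs (and must pass) on CPython; expected to fail on PyPy until
    # the upstream fix lands, at which point strict=True forces the
    # marker to be removed.
    ...
```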
Marked this as WIP. We could xfail the missing PyPy support for creating PEP 3118 buffers from ctypes structures, but I think PyPy should fix that before we move forward with this PR.
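For reference, the missing feature is easy to demonstrate; a minimal sketch (the `Point` structure is a made-up example):

```python
import ctypes

class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

# CPython exposes ctypes instances through the PEP 3118 buffer
# protocol, including a structured format string; per the comment
# above, PyPy could not create these buffers at the time.
mv = memoryview(Point(1.0, 2.0))
print(mv.format)  # "T{<d:x:<d:y:}" on CPython
print(mv.nbytes)  # 16
```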
Force-pushed from 26e3d66 to c233900 (compare)
This is ready, so I took off the WIP and the "always pass" check at the end of tests. PyPy (latest HEAD) has fixed everything except the tp_doc overwriting, which is now marked xfail, just like running …
On Windows, …
xref Microsoft/azure-pipelines-image-generation#871
As I noted over in Microsoft/azure-pipelines-image-generation#871, the …
If test.support.gc_collect is not officially blessed, should I go back to break_cycles (6bc3199)?
That works for me.
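For context, a minimal sketch of what a `break_cycles`-style helper does (not NumPy's actual implementation):

```python
import gc

def break_cycles():
    # CPython frees most objects promptly via reference counting, but
    # PyPy only reclaims them on a GC pass, so tests that rely on
    # timely finalization must collect explicitly.  Repeated passes
    # give finalizers a chance to run and release any cycles they
    # keep alive.
    for _ in range(3):
        gc.collect()
```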
Added a …
Seems useful to me; added some minor comments. It seems to be the third-fastest job in Azure for us, and we still have two free parallel slots left, so this seems reasonable.
Force-pushed from 44bfdf2 to 5daa9d4 (compare)
Looks good to me, but I'll let someone with more CI background approve/merge.
All review discussions are resolved, and I double-checked the CI logs for the new PyPy Azure job; it looks great. Also, it looks like Eric, Stephan, and I are in favor here. In it goes; thanks Matti and reviewers.
This PR adds testing for PyPy to our CI. I chose to use the latest nightly rather than a stable release since PyPy will be fixing test failures between releases.
We should also add a benchmark run at some point.
Things that are known not to work on PyPy: …
I have forced the test script to return 0 so CI does not fail even though there are failing tests. Edit: PyPy's latest nightly HEAD passes all tests except the ones that modify docstrings in C.
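The forced-success hack (removed before merge, per the discussion above) amounted to masking the suite's exit status; a sketch of the idea, assuming a `runtests.py`-style entry point:

```python
import subprocess
import sys

# Run the suite but always report success to CI, so the PyPy job can
# be observed without blocking merges while the known failures are
# burned down.
subprocess.call([sys.executable, "runtests.py", "-v"])
sys.exit(0)  # mask the real exit status
```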