Run Codecov with the right version #462
This isn't completely accurate; there are a few things with similar names doing different jobs here. pytest runs the tests using the pytest-cov plugin, which hooks up the Coverage.py tool that does the instrumentation and measuring. Codecov is a website and doesn't instrument or measure coverage; instead, existing coverage results are uploaded to it, and it helps track coverage changes over time, report back on PRs, and provide a UI for exploring coverage per run.
So:
The Codecov action (https://github.com/codecov/codecov-action) isn't using Python; it uses Node.js (but it's just uploading data, so that doesn't really matter; it used to be a bash script).
pytest/pytest-cov/Coverage.py create the results, Codecov uploads them.
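A minimal workflow sketch of that division of labour (the step names, action version, and the `bedevere` package path are illustrative, not taken from the actual workflow file):

```yaml
# Hypothetical job steps: pytest + pytest-cov + Coverage.py measure, Codecov only uploads.
steps:
  - name: Run tests
    # pytest-cov hooks Coverage.py into the test run;
    # --cov-report=xml writes the coverage.xml that will be uploaded.
    run: pytest --cov=bedevere --cov-report=xml

  - name: Upload to Codecov
    # codecov-action is a Node.js action; it re-runs nothing,
    # it only uploads the existing coverage.xml to codecov.io.
    uses: codecov/codecov-action@v3
```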
Can do, although it might be more useful to generate and upload the HTML report. And/or we could use Codecov for this: it's possible to add flags for each Python version in the matrix, and Codecov adds a filter box at the top right for each file. For example:
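As a sketch of the flags idea (assuming the matrix variable is called `python-version`; the action version is an assumption):

```yaml
# Hypothetical Codecov step tagging each matrix run with its Python version,
# so reports can be filtered per version on codecov.io.
- name: Upload to Codecov
  uses: codecov/codecov-action@v3
  with:
    flags: ${{ matrix.python-version }}
```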
Thanks for clarifying!
I thought Codecov ran the coverage tests again and that it was successful because it used a different version of Python instead of 3.11. However, it succeeded not because the coverage check passed during that step (that was performed in the previous step), but because the upload of the xml report created by pytest succeeded.
This actually added to my confusion and confirmed my assumption: since I was not able to find the report for 3.11 (or any failure), I thought that the 3.11 report was never run/uploaded. Is it accessible at all from the Codecov website, or does it need flagging? If so, it would be nice to add flags as you suggested.
Unfortunately it's not shown as an individual report on Codecov. PR #464 adds flags, and has an example of how you can filter by version. A downside: you do have to be in the file view before filtering to a 3.11 report.

The overview aggregates all reports: https://codecov.io/gh/hugovk/bedevere/tree/300b7d0ed649e9535c59db9824e9d1b041080839/bedevere

PR #464 also adds

We could generate the individual HTML reports for each version on the CI, and upload those as artifacts, if you think that would be useful?
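A sketch of the per-version HTML report idea (step names, action version, and artifact name are assumptions; `coverage html` writes to `htmlcov/` by default):

```yaml
# Hypothetical steps to publish per-version HTML coverage reports as CI artifacts.
- name: Generate HTML coverage report
  # Coverage.py renders the previously collected data to htmlcov/
  run: coverage html

- name: Upload HTML report
  uses: actions/upload-artifact@v3
  with:
    name: htmlcov-${{ matrix.python-version }}
    path: htmlcov/
```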
It seems that for every PR, codecov is run 16 times: for both the `push` and `pull_request` events (Avoid running CI twice on PRs. #461 fixes that), after `pytest` in the "Run tests" step.

I think that during the `pytest` step the specified Python version is used, whereas in the `codecov` step the same default version is used for all runs. This can be seen e.g. in https://github.com/python/bedevere/runs/6655983357, where it fails during the `pytest` step but succeeds in the `codecov` step. Because of this, it is not possible to see where the failure is (see #463).

There are three improvements that can be made:

- the `codecov` step should use the same Python version used by the `pytest` step
- the coverage results created by `pytest` should be reused without running codecov again
- the `xml` report created by `pytest` should be uploaded as an artifact
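A sketch of what the third improvement could look like, using `actions/upload-artifact` (step name, action version, and artifact name are assumptions):

```yaml
# Hypothetical step to keep the pytest-generated XML report, even on failure,
# so per-version results stay inspectable without re-running codecov.
- name: Upload coverage report
  if: always()
  uses: actions/upload-artifact@v3
  with:
    name: coverage-xml-${{ matrix.python-version }}
    path: coverage.xml
```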