Run Codecov with the right version · Issue #462 · python/bedevere · GitHub

Run Codecov with the right version #462


Closed
ezio-melotti opened this issue May 30, 2022 · 3 comments · Fixed by #464

@ezio-melotti
Member
ezio-melotti commented May 30, 2022

It seems that for every PR, codecov is run 16 times:

  • 2 times because the workflow is triggered by both push and pull_request (#461, "Avoid running CI twice on PRs", fixes that)
  • 4 times for each workflow because the tests are run with 4 different Python versions
  • 2 times for each check
    • first it's run with pytest in the "Run tests" step
    • then it's run on its own in the "Run codecov/codecov-action@v2" step

I think that during the pytest step the specified Python version is used, whereas in the codecov step the same default version is used for all runs. This can be seen e.g. in https://github.com/python/bedevere/runs/6655983357 where it fails during the pytest step but it succeeds in the codecov step. Because of this, it is not possible to see where the failure is (see #463).
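
For context, a minimal sketch of roughly what the relevant workflow presumably looked like at the time (the triggers and Python versions here are assumptions based on the counts above, not the actual file):

```yaml
# Hypothetical reconstruction: both triggers fire for a PR, so every
# matrix job runs twice (2 triggers x 4 versions x 2 coverage steps = 16).
on: [push, pull_request]

jobs:
  tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Assumed versions; the issue only says "4 different Python versions".
        python-version: ["3.8", "3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Run tests
        run: python -m pytest --cov --cov-report=xml
      - name: Run codecov/codecov-action@v2
        uses: codecov/codecov-action@v2
```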

There are three improvements that can be made:

  1. the codecov step should use the same python version used by the pytest step
  2. if possible the results created by pytest should be reused without running codecov again
  3. possibly upload the xml report created by pytest as an artifact (see the sketch after this list)
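
On point 3, a possible sketch of the artifact upload step (the step and artifact names are illustrative; pytest-cov writes coverage.xml by default when given --cov-report=xml):

```yaml
# Hypothetical step: keep the XML report around per Python version
# so a failing run can be inspected later.
- name: Upload coverage report
  uses: actions/upload-artifact@v3
  with:
    name: coverage-xml-${{ matrix.python-version }}
    path: coverage.xml
```
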
@hugovk
Member
hugovk commented May 30, 2022

This isn't completely accurate; there are a few things with similar names doing different jobs here.

pytest is running the tests using the pytest-cov plugin, which hooks up the Coverage.py tool that does the instrumentation and measuring.
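
Concretely, the wiring looks something like this (the package name is an assumption here; the --cov option comes from pytest-cov, which delegates the actual instrumentation and measurement to Coverage.py):

```yaml
# pytest runs the tests; pytest-cov's --cov option turns on coverage
# measurement via Coverage.py and writes an XML report for upload.
- name: Run tests
  run: python -m pytest --cov=bedevere --cov-report=xml
```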

Codecov is a website and doesn't instrument or measure coverage; instead, existing coverage results are uploaded, and it helps track coverage changes over time, report back on PRs, and provide a UI for exploring coverage for runs.

  • Each job runs the tests with pytest once, which measures the coverage once.
  • Each job uploads the coverage results to Codecov once.
  • The Codecov website combines these all together for each PR.

So:

There are three improvements that can be made:

  1. the codecov step should use the same python version used by the pytest step

The Codecov https://github.com/codecov/codecov-action isn't using Python; it uses Node.js (but it's just uploading data, so it doesn't really matter; it used to be a bash script).

  2. if possible the results created by pytest should be reused without running codecov again

pytest/pytest-cov/Coverage.py create the results, Codecov uploads them.

  3. possibly upload the xml report created by pytest as an artifact

Can do, although it might be more useful to generate and upload html reports if we want to see them for individual Python versions.

And/or we could use Codecov for this: it's possible to add flags for each Python version in the matrix, and Codecov adds a filter box at the top right for each file.
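
A sketch of what that might look like in the workflow (the exact flag naming is an assumption; PR #464 has the real change):

```yaml
# Hypothetical: tag each upload with the matrix's Python version so
# Codecov can filter coverage reports per version.
- name: Run codecov/codecov-action@v2
  uses: codecov/codecov-action@v2
  with:
    flags: py${{ matrix.python-version }}
```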

@ezio-melotti
Member Author

Thanks for clarifying!

pytest/pytest-cov/Coverage.py create the results, Codecov uploads them.

I thought Codecov ran the coverage tests again and that it was successful because it used a different version of Python instead of 3.11. However, it was successful not because the coverage check passed during that step (that was performed in the previous step), but because the upload of the xml report created by pytest succeeded. The whole issue was based on a wrong assumption, and is therefore invalid.

And/or we could use Codecov for this, it's possible to add flags for each Python version in the matrix, and Codecov adds a filter box at the top right for each file.

This actually added to my confusion and confirmed my assumption -- since I was not able to find the report for 3.11 (or any failure), I thought that it never ran/uploaded the 3.11 report. Is it accessible at all from the Codecov website or does it need flagging? If so, it would be nice to add flags as you suggested.

@hugovk
Member
hugovk commented May 31, 2022

This actually added to my confusion and confirmed my assumption -- since I was not able to find the report for 3.11 (or any failure), I thought that it never ran/uploaded the 3.11 report. Is it accessible at all from the Codecov website or does it need flagging? If so, it would be nice to add flags as you suggested.

Unfortunately it's not shown as an individual report on Codecov.

PR #464 adds flags, and has an example of how you can filter by version.

https://app.codecov.io/gh/hugovk/bedevere/blob/300b7d0ed649e9535c59db9824e9d1b041080839/bedevere/stage.py

A downside: you do have to be in the file view before filtering to a 3.11 report.

The overview aggregates all reports:

https://codecov.io/gh/hugovk/bedevere/tree/300b7d0ed649e9535c59db9824e9d1b041080839/bedevere

PR #464 also adds --cov-report term to print the coverage summary in the CI run, which helps to know which files to check.

We could generate the individual HTML reports for each version on the CI, and upload those as artifacts, if you think that would be useful?
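
If so, a sketch of how that could look (step names and the default htmlcov/ output path are assumptions):

```yaml
# Hypothetical steps: print a terminal summary, write an HTML report,
# and upload the report per Python version for later inspection.
- name: Run tests
  run: python -m pytest --cov --cov-report=term --cov-report=html
- name: Upload HTML coverage report
  uses: actions/upload-artifact@v3
  with:
    name: coverage-html-${{ matrix.python-version }}
    path: htmlcov/
```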
