Initiated by @LysandreJik, we ran the tests with previous PyTorch/TensorFlow versions. The goal is to determine whether we should drop support for some earlier PyTorch/TensorFlow versions.
- This is not exactly the same as the scheduled daily CI (e.g. `torch-scatter` and `accelerate` are not installed).
- Currently we only have the global summary (i.e. there is no number of test failures per model).
Here are the results (run around June 20, 2022):
- PyTorch testing has ~27100 tests
- TensorFlow testing has ~15700 tests
| Framework | No. failures |
|---|---|
| PyTorch 1.10 | 50 |
| PyTorch 1.9 | 710 |
| PyTorch 1.8 | 1301 |
| PyTorch 1.7 | 1567 |
| PyTorch 1.6 | 2342 |
| PyTorch 1.5 | 3315 |
| PyTorch 1.4 | 3949 |
| TensorFlow 2.8 | 118 |
| TensorFlow 2.7 | 122 |
| TensorFlow 2.6 | 122 |
| TensorFlow 2.5 | 128 |
| TensorFlow 2.4 | 167 |
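To put the absolute counts in perspective, they can be converted into failure rates using the approximate test totals quoted above. The helper below is a hypothetical sketch for this issue, not part of the CI tooling:

```python
# Hypothetical helper: convert the failure counts reported above into rates.
# TOTALS are the approximate test counts quoted in this issue.
TOTALS = {"PyTorch": 27100, "TensorFlow": 15700}

FAILURES = {
    ("PyTorch", "1.10"): 50,
    ("PyTorch", "1.9"): 710,
    ("PyTorch", "1.8"): 1301,
    ("PyTorch", "1.7"): 1567,
    ("PyTorch", "1.6"): 2342,
    ("PyTorch", "1.5"): 3315,
    ("PyTorch", "1.4"): 3949,
    ("TensorFlow", "2.8"): 118,
    ("TensorFlow", "2.7"): 122,
    ("TensorFlow", "2.6"): 122,
    ("TensorFlow", "2.5"): 128,
    ("TensorFlow", "2.4"): 167,
}

def failure_rate(framework: str, version: str) -> float:
    """Return the fraction of tests failing for a given framework version."""
    return FAILURES[(framework, version)] / TOTALS[framework]

for (fw, ver), n in FAILURES.items():
    print(f"{fw} {ver}: {n} failures ({failure_rate(fw, ver):.1%})")
```

For example, even the oldest TensorFlow version tested (2.4) fails only about 1% of its suite, while PyTorch 1.4 fails roughly 15% of its suite.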
It looks like the number of failures in TensorFlow testing doesn't increase much across older versions.
So far my thoughts:
- All TensorFlow versions >= 2.4 should still be kept in the list of supported versions
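Whatever floor is chosen, it could be enforced with a simple version gate. Below is a minimal sketch; the `MIN_VERSIONS` floors are illustrative assumptions, not the actual policy or mechanism used in the library:

```python
# Hypothetical sketch of enforcing a minimum supported framework version.
# The MIN_VERSIONS floors below are illustrative, not the actual policy.
MIN_VERSIONS = {"torch": "1.10", "tensorflow": "2.4"}

def parse(v: str) -> tuple:
    """Turn '1.10' into (1, 10) so that 1.10 correctly compares above 1.9."""
    return tuple(int(part) for part in v.split("."))

def is_supported(framework: str, installed: str) -> bool:
    """Return True if the installed version meets the minimum floor."""
    return parse(installed) >= parse(MIN_VERSIONS[framework])

print(is_supported("tensorflow", "2.4"))  # → True
print(is_supported("torch", "1.9"))       # → False
```

Tuple comparison is used instead of string comparison because `"1.10" < "1.9"` lexicographically, which would mis-order releases.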
Questions
- What's your opinion on which versions we should drop support for?
- Would you like to see the number of test failures per model?
- TensorFlow 2.3 needs CUDA 10.1 and requires building a special Docker image. Do you think we should make the effort to get the results for TF 2.3?