[jiterator] where: complex #75216 · pytorch/pytorch


Closed · wants to merge 10 commits from the jiterator/where branch

Conversation

khushi-411 (Contributor)

Follows: #74748
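
For context, a minimal sketch of what "jiterating" an elementwise op means, using the Python frontend torch.cuda.jiterator._create_jit_fn: the kernel is supplied as a C++ source string and JIT-compiled into an elementwise CUDA kernel, with every tensor input instantiated at one common dtype. The PR presumably targets the internal C++ jiterator path in ATen, so this is only an approximation of the goal, and the kernel body below is illustrative:

```python
import torch
from torch.cuda.jiterator import _create_jit_fn

# The jiterator compiles this C++ string into an elementwise CUDA kernel on
# first call; all tensor arguments share the single template dtype T.
code_string = "template <typename T> T scaled_sum(T x, T y, T alpha) { return x + alpha * y; }"
jitted_fn = _create_jit_fn(code_string, alpha=1.0)

a = torch.rand(4, device="cuda")   # float32
b = torch.rand(4, device="cuda")   # float32 -- same dtype as a, the supported case
out = jitted_fn(a, b, alpha=2.0)   # behaves like a + 2.0 * b
```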

facebook-github-bot (Contributor) commented Apr 4, 2022


❌ 4 New Failures

As of commit 18700c0 (more details on the Dr. CI page):

  • 4/4 failures introduced in this PR

🕵️ 4 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 1, 4, linux.4xlarge.nvidia.gpu) (1/4)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-02T12:56:05.1080974Z FAIL [1.137s]: test_dtypes_where_cuda (__main__.TestCommonCUDA)
2022-06-02T12:56:05.1079281Z     return fn(slf, *args, **kwargs)
2022-06-02T12:56:05.1079627Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 979, in only_fn
2022-06-02T12:56:05.1079751Z     return fn(self, *args, **kwargs)
2022-06-02T12:56:05.1079893Z   File "test_ops.py", line 303, in test_dtypes
2022-06-02T12:56:05.1079993Z     self.fail(msg)
2022-06-02T12:56:05.1080363Z AssertionError: The supported dtypes for renorm on device type cuda are incorrect!
2022-06-02T12:56:05.1080631Z The following dtypes did not work in backward but are listed by the OpInfo: {torch.complex128, torch.complex64}.
2022-06-02T12:56:05.1080650Z 
2022-06-02T12:56:05.1080669Z 
2022-06-02T12:56:05.1080801Z ======================================================================
2022-06-02T12:56:05.1080974Z FAIL [1.137s]: test_dtypes_where_cuda (__main__.TestCommonCUDA)
2022-06-02T12:56:05.1081239Z ----------------------------------------------------------------------
2022-06-02T12:56:05.1081371Z Traceback (most recent call last):
2022-06-02T12:56:05.1081721Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py", line 1818, in wrapper
2022-06-02T12:56:05.1081832Z     method(*args, **kwargs)
2022-06-02T12:56:05.1082204Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 377, in instantiated_test
2022-06-02T12:56:05.1082320Z     result = test(self, **param_kwargs)
2022-06-02T12:56:05.1082685Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 786, in test_wrapper
2022-06-02T12:56:05.1082805Z     return test(*args, **kwargs)
2022-06-02T12:56:05.1083153Z   File "/opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_device_type.py", line 821, in dep_fn
2022-06-02T12:56:05.1083269Z     return fn(slf, *args, **kwargs)

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 2, 4, linux.4xlarge.nvidia.gpu) (2/4)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-02T12:14:55.6879567Z RuntimeError: test_decomp failed!
2022-06-02T12:14:55.0211566Z 
2022-06-02T12:14:55.0211733Z FAILED (errors=43, skipped=50, expected failures=131)
2022-06-02T12:14:55.0211752Z 
2022-06-02T12:14:55.0211876Z Generating XML reports...
2022-06-02T12:14:55.0212288Z Generated XML report: test-reports/python-unittest/test_decomp/TEST-TestDecompCUDA-20220602112422.xml
2022-06-02T12:14:55.6872572Z Traceback (most recent call last):
2022-06-02T12:14:55.6872952Z   File "test/run_test.py", line 1077, in <module>
2022-06-02T12:14:55.6875792Z     main()
2022-06-02T12:14:55.6876080Z   File "test/run_test.py", line 1055, in main
2022-06-02T12:14:55.6879245Z     raise RuntimeError(err_message)
2022-06-02T12:14:55.6879567Z RuntimeError: test_decomp failed!
2022-06-02T12:14:56.2113306Z + cleanup
2022-06-02T12:14:56.2113565Z + retcode=1
2022-06-02T12:14:56.2113797Z + set +x
2022-06-02T12:14:56.2164425Z ##[error]Process completed with exit code 1.
2022-06-02T12:14:56.2211549Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-06-02T12:14:56.2211894Z with:
2022-06-02T12:14:56.2212426Z   github-token: ***
2022-06-02T12:14:56.2212669Z env:
2022-06-02T12:14:56.2212888Z   IN_CI: 1
2022-06-02T12:14:56.2213092Z   IS_GHA: 1

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 3, 4, linux.4xlarge.nvidia.gpu) (3/4)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-02T12:16:57.7959786Z RuntimeError: test_meta failed!
2022-06-02T12:16:56.3662767Z FAILED (errors=32, skipped=34, expected failures=96)
2022-06-02T12:16:56.3662790Z 
2022-06-02T12:16:56.3662903Z Generating XML reports...
2022-06-02T12:16:57.0345830Z Generated XML report: test-reports/python-unittest/test_meta/TEST-TestMetaCUDA-20220602112231.xml
2022-06-02T12:16:57.0359125Z Generated XML report: test-reports/python-unittest/test_meta/TEST-TestMetaConverter-20220602112231.xml
2022-06-02T12:16:57.7952632Z Traceback (most recent call last):
2022-06-02T12:16:57.7953123Z   File "test/run_test.py", line 1077, in <module>
2022-06-02T12:16:57.7956079Z     main()
2022-06-02T12:16:57.7956370Z   File "test/run_test.py", line 1055, in main
2022-06-02T12:16:57.7959476Z     raise RuntimeError(err_message)
2022-06-02T12:16:57.7959786Z RuntimeError: test_meta failed!
2022-06-02T12:16:58.3675604Z + cleanup
2022-06-02T12:16:58.3675914Z + retcode=1
2022-06-02T12:16:58.3676396Z + set +x
2022-06-02T12:16:58.3720674Z ##[error]Process completed with exit code 1.
2022-06-02T12:16:58.3767570Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-06-02T12:16:58.3767931Z with:
2022-06-02T12:16:58.3768453Z   github-token: ***
2022-06-02T12:16:58.3768694Z env:
2022-06-02T12:16:58.3768910Z   IN_CI: 1
2022-06-02T12:16:58.3769112Z   IS_GHA: 1

See GitHub Actions build pull / linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 4, 4, linux.4xlarge.nvidia.gpu) (4/4)

Step: "Test" (full log | diagnosis details | 🔁 rerun)

2022-06-02T11:48:47.1073956Z RuntimeError: test_ops_gradients failed!
2022-06-02T11:48:47.0918013Z 
2022-06-02T11:48:47.0918178Z FAILED (errors=93, skipped=3662, expected failures=364)
2022-06-02T11:48:47.0918198Z 
2022-06-02T11:48:47.0918319Z Generating XML reports...
2022-06-02T11:48:47.0918739Z Generated XML report: test-reports/python-unittest/test_ops_gradients/TEST-TestGradientsCUDA-20220602112230.xml
2022-06-02T11:48:47.1066765Z Traceback (most recent call last):
2022-06-02T11:48:47.1067164Z   File "test/run_test.py", line 1077, in <module>
2022-06-02T11:48:47.1070538Z     main()
2022-06-02T11:48:47.1070876Z   File "test/run_test.py", line 1055, in main
2022-06-02T11:48:47.1073796Z     raise RuntimeError(err_message)
2022-06-02T11:48:47.1073956Z RuntimeError: test_ops_gradients failed!
2022-06-02T11:48:47.6529375Z + cleanup
2022-06-02T11:48:47.6529641Z + retcode=1
2022-06-02T11:48:47.6529870Z + set +x
2022-06-02T11:48:47.6575666Z ##[error]Process completed with exit code 1.
2022-06-02T11:48:47.6621963Z ##[group]Run pytorch/pytorch/.github/actions/get-workflow-job-id@master
2022-06-02T11:48:47.6622303Z with:
2022-06-02T11:48:47.6622861Z   github-token: ***
2022-06-02T11:48:47.6623097Z env:
2022-06-02T11:48:47.6623313Z   IN_CI: 1
2022-06-02T11:48:47.6623515Z   IS_GHA: 1

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

khushi-411 marked this pull request as draft on April 4, 2022, 20:04.
khushi-411 (Contributor, Author) commented:

where cannot be jiterated because it takes arguments of different dtypes, which is currently not supported. Hence, closing. Thanks!
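
For illustration of the mixed-dtype signature mentioned above: torch.where consumes a bool condition together with value operands of another dtype (complex, in this PR's case), whereas a jiterated elementwise kernel is instantiated for a single common dtype shared by all inputs. A minimal sketch with arbitrary values:

```python
import torch

cond = torch.tensor([True, False, True], device="cuda")      # dtype: torch.bool
x = torch.tensor([1 + 1j, 2 + 2j, 3 + 3j], device="cuda")    # dtype: torch.complex64
y = torch.zeros(3, dtype=torch.complex64, device="cuda")

# The existing eager CUDA kernel handles the bool/complex mix directly.
out = torch.where(cond, x, y)
print(cond.dtype, x.dtype, out.dtype)   # torch.bool torch.complex64 torch.complex64

# A jiterator-style kernel, by contrast, is instantiated for one template dtype
# covering all inputs, so the bool condition cannot be passed alongside the
# complex operands -- the limitation cited in the comment above.
```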

khushi-411 closed this on July 2, 2022.
khushi-411 deleted the jiterator/where branch on July 2, 2022, 06:22.