DISABLED test_dtensor_op_db_vstack_cpu_float32 (__main__.TestDTensorOpsCPU) #126868
@pytorch-bot

Description

Platforms: linux

This test was disabled because it is failing in CI. See recent examples and the most recent trunk workflow logs.

Over the past 3 hours, it has been determined to be flaky in 5 workflow(s), with 15 failures and 5 successes.

Debugging instructions (after clicking on the recent samples link):
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers, so CI will stay green even while the test fails; this makes the relevant logs harder to find.
To find relevant log snippets:

  1. Click on the workflow logs linked above
  2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
  3. Grep for test_dtensor_op_db_vstack_cpu_float32
  4. Several runs should appear (flaky tests are rerun in CI); study the logs from each. A minimal grep sketch follows this list.
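
For step 3, a small script like the sketch below can pull the relevant snippets out of a raw log once it has been saved locally. This is a minimal sketch, assuming the Test step's output was downloaded as ci.log; the file name and the context window are illustrative choices, not part of the CI tooling.

TEST_NAME = "test_dtensor_op_db_vstack_cpu_float32"

with open("ci.log", errors="replace") as f:
    lines = f.readlines()

for i, line in enumerate(lines):
    if TEST_NAME in line:
        # Flaky tests are rerun in CI, so several hits are expected;
        # print a little context around each one.
        print("".join(lines[max(0, i - 2):i + 5]), end="")
        print("-" * 80)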
Sample error message
Traceback (most recent call last):
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 985, in wrapper
    self._join_threads(self.threads, fn)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 1106, in _join_threads
    cls._check_return_codes(failed_ranks, timeout, fn)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 1143, in _check_return_codes
    raise RuntimeError(error_msg)
RuntimeError: Thread 0 exited with exception:
Traceback (most recent call last):
  File "distributed/_tensor/test_dtensor_ops.py", line 668, in run_dtensor_crossref
    self.assert_ref_dtensor_equal(dtensor_rs, rs)
  File "distributed/_tensor/test_dtensor_ops.py", line 601, in assert_ref_dtensor_equal
    self.assertEqualOnRank(dtensor_r, r)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 1173, in assertEqualOnRank
    self.assertEqual(x, y, msg)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 3639, in assertEqual
    raise error_metas.pop()[0].to_error(
AssertionError: Tensor-likes are not close!

Mismatched elements: 2 / 12 (16.7%)
Greatest absolute difference: 4.185099124908447 at index (1, 1) (up to 1e-05 allowed)
Greatest relative difference: 1.1587920188903809 at index (1, 1) (up to 1.3e-06 allowed)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_device_type.py", line 971, in test_wrapper
    return test(*args, **kwargs)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 1932, in wrapper
    fn(*args, **kwargs)
  File "distributed/_tensor/test_dtensor_ops.py", line 577, in test_dtensor_op_db
    self.check_dtensor_func(test, op)
  File "distributed/_tensor/test_dtensor_ops.py", line 683, in check_dtensor_func
    test_func()
  File "distributed/_tensor/test_dtensor_ops.py", line 570, in test
    self.run_dtensor_crossref(op.op, args, kwargs)
  File "distributed/_tensor/test_dtensor_ops.py", line 675, in run_dtensor_crossref
    raise RuntimeError(
RuntimeError: failed to run: torch.vstack, with (*[(tensor([-8.4784, -1.7658, -4.3228, -2.4005], requires_grad=True), tensor([-7.9506,  3.6116, -8.0676, -0.5735], requires_grad=True), tensor([ 3.1285, -3.0337,  5.1067,  1.1351], requires_grad=True))], **{})

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 1069, in run_test_with_threaded_pg
    getattr(self, test_name)()
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_distributed.py", line 987, in wrapper
    fn()
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 2756, in wrapper
    method(*args, **kwargs)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_device_type.py", line 419, in instantiated_test
    result = test(self, **param_kwargs)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_utils.py", line 1361, in wrapper
    fn(*args, **kwargs)
  File "/opt/conda/envs/py_3.8/lib/python3.8/site-packages/torch/testing/_internal/common_device_type.py", line 977, in test_wrapper
    raise Exception(  # noqa: TRY002
Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(4,), device="cpu", dtype=torch.float32], Tensor[size=(4,), device="cpu", dtype=torch.float32], Tensor[size=(4,), device="cpu", dtype=torch.float32]], args=(), kwargs={}, broadcasts_input=False, name='')

To execute this test, run the following from the base repo dir:
     python test/distributed/_tensor/test_dtensor_ops.py -k TestDTensorOpsCPU.test_dtensor_op_db_vstack_cpu_float32

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
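
For context, the two "up to ... allowed" bounds in the message are PyTorch's default float32 comparison tolerances: atol=1e-5 and rtol=1.3e-6. The sketch below replays the failing check in isolation with torch.testing.assert_close. The two scalar values are inferred from the reported diffs (an absolute difference of 4.1851 over a relative difference of 1.1588 implies a reference value of about 3.6116), so treat them as an illustration rather than the exact tensors from the run.

import torch

# Default float32 tolerances (atol=1e-5, rtol=1.3e-6) match the bounds
# quoted in the failure message above.
expected = torch.tensor(3.6116)  # reference value implied by the reported diffs
actual = torch.tensor(-0.5735)   # mismatched value (an inference, not from the logs)
torch.testing.assert_close(actual, expected)
# Raises AssertionError: the absolute difference (4.1851) exceeds atol=1e-5
# and the relative difference (1.1588) exceeds rtol=1.3e-6.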

Test file path: distributed/_tensor/test_dtensor_ops.py
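
The op under test is an ordinary torch.vstack over three 1-D tensors; per the traceback, run_dtensor_crossref runs it once on plain tensors and once through DTensor, then compares the two results. Below is a minimal sketch of the reference (non-distributed) side using the sample inputs printed in the failure; the DTensor side is omitted because the sharding the harness picked is not visible in this log.

import torch

# The three sample inputs printed in the failure above.
a = torch.tensor([-8.4784, -1.7658, -4.3228, -2.4005], requires_grad=True)
b = torch.tensor([-7.9506,  3.6116, -8.0676, -0.5735], requires_grad=True)
c = torch.tensor([ 3.1285, -3.0337,  5.1067,  1.1351], requires_grad=True)

# vstack stacks the rows into a (3, 4) tensor, i.e. 12 elements, which
# matches the "Mismatched elements: 2 / 12" count in the error message.
rs = torch.vstack((a, b, c))
print(rs.shape)  # torch.Size([3, 4])

Note that the reported mismatch at index (1, 1) corresponds to b[1] == 3.6116, the second element of the second input.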

cc @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @osalpekar @jiayisuse @H-Huang @kwen2501 @awgu @penguinwu @fegin @XilunWu @wanchaol @fduwjj @wz337 @tianyu-l @wconstab @yf225 @chauhang @d4l3k @clee2000

Metadata

Assignees

No one assigned

Labels

module: flaky-tests (Problem is a flaky test in CI)
oncall: distributed (Add this issue/PR to distributed oncall triage queue)
skipped (Denotes a (flaky) test currently skipped in CI)
