PyTorch PP requires all parameters to have grad in backward #153484
@liangluofb

🐛 Describe the bug

When running PyTorch PP, the backward step code requires all of a stage's forward parameters (inputs) to have grad. This is too strict; it should perhaps be relaxed so that only at least one parameter needs to require grad.

def forward(self, param1, param2):
    # Currently, PP requires *both* param1 and param2 to require grad;
    # it should be enough that at least one of them does.
    ...
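
For concreteness, a minimal sketch of a stage module that would hit this, assuming a hypothetical StageBlock with one differentiable input (hidden) and one auxiliary input (mask) that legitimately never requires grad:

import torch
import torch.nn as nn

# Hypothetical stage submodule for illustration: `hidden` is on the
# differentiable path and needs gradients sent back to the previous
# stage, while `mask` is an auxiliary input that never requires grad.
class StageBlock(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, hidden, mask):
        # Only `hidden` contributes gradients; `mask` is grad-free.
        return self.proj(hidden) * mask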

Error message:

  RuntimeError: [7] for chunk 0 has gradients None and is expecting to send gradients to stage 6,
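
For comparison, plain eager autograd already tolerates the mixed case: the gradient of a leaf input with requires_grad=False is simply None rather than an error. A minimal sketch (variable names are illustrative):

import torch

hidden = torch.randn(4, requires_grad=True)
mask = torch.randn(4)  # requires_grad=False by default

loss = (hidden * mask).sum()
loss.backward()

print(hidden.grad)  # populated
print(mask.grad)    # None -- accepted silently, not an error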

Versions

PyTorch trunk

cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
