ExponentialLR unexpectedly calls step() when init argument last_epoch is larger than -1
#102261
Labels
actionable
module: LrScheduler
module: optimizer
Related to torch.optim
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
Currently, the init function of torch.optim.lr_scheduler._LRScheduler calls self.step() once. This causes a mismatch between the learning rate used by the optimizer and the closed_form_lr of ExponentialLR when the init argument last_epoch is larger than -1. As the result shows, optim2 has lr 0.000998001, but the closed form lr of sched2 is 0.000999. This behavior causes confusion and inconsistency when one resumes training from a checkpoint.
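The reproduction script is not included above; the sketch below is a reconstruction that matches the reported numbers, assuming gamma=0.999, an initial lr of 1e-3, last_epoch=0 on resume, and that the optimizer state is restored before the scheduler is rebuilt. The optim2/sched2 names mirror the report.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import ExponentialLR

model = torch.nn.Linear(2, 2)

# "Checkpointed" run: one scheduler step decays the lr from 1e-3 to 0.000999.
optim1 = SGD(model.parameters(), lr=1e-3)
sched1 = ExponentialLR(optim1, gamma=0.999)
sched1.step()

# "Resumed" run: restore the optimizer state (lr=0.000999, initial_lr=1e-3)
# and rebuild the scheduler with last_epoch > -1.
optim2 = SGD(model.parameters(), lr=1e-3)
optim2.load_state_dict(optim1.state_dict())
sched2 = ExponentialLR(optim2, gamma=0.999, last_epoch=0)

# _LRScheduler.__init__ calls self.step() once, which decays the restored lr again.
print(optim2.param_groups[0]["lr"])   # 0.000998001 (= 1e-3 * 0.999 ** 2)
print(sched2._get_closed_form_lr())   # [0.000999]  (= 1e-3 * 0.999 ** 1)
```

The extra step() issued from __init__ decays the already-restored lr once more, which is where the gap between 0.000998001 and the closed-form 0.000999 comes from.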
Versions
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar