Modify the Trainer class to handle simultaneous execution of Ray Tune and Weights & Biases #10823
Merged
sgugger merged 2 commits into huggingface:master (Mar 22, 2021)
Conversation
Modify the _hp_search_setup method on the Trainer class to handle the wandb argument passed by Ray Tune to model config.
sgugger (Collaborator) approved these changes on Mar 22, 2021
Thanks for fixing! Waiting for @richardliaw or @amogkam approval before merging.
Iwontbecreative pushed a commit to Iwontbecreative/transformers that referenced this pull request on Jul 15, 2021
Modify the Trainer class to handle simultaneous execution of Ray Tune and Weights & Biases (huggingface#10823)

* Modify the _hp_search_setup method on the Trainer class to handle the wandb argument passed by Ray Tune to model config.
* Reformat single quotes as double quotes.
What does this PR do?
The proper way to integrate Ray Tune and Weights & Biases is to pass a `wandb` parameter to `tune.run`. However, this parameter is handled as a dictionary inside the `config` argument, and there is no distinction between `wandb` parameters and standard model optimization parameters, as the pattern from their docs shows.
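A sketch of that pattern, assuming the Ray 1.x wandb integration API (`train_fn` and the `wandb` values here are placeholders):

```python
from ray import tune
from ray.tune.integration.wandb import wandb_mixin


@wandb_mixin
def train_fn(config):
    # Placeholder training function; the mixin makes wandb.log(...) available.
    ...


tune.run(
    train_fn,
    config={
        # An ordinary hyperparameter to tune...
        "lr": tune.loguniform(1e-5, 1e-3),
        # ...and the wandb settings, passed inside the very same config dict.
        "wandb": {"project": "my-project", "api_key_file": "/path/to/key"},
    },
)
```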
This is not a problem for Ray Tune. However, it is a problem for the `transformers` integration, because it treats `wandb` as a model parameter; configuring wandb this way therefore raises an error claiming that `wandb` is not a training argument.
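For instance, assuming `trainer` is a regular `transformers.Trainer` built beforehand (model and dataset setup elided), a search like this raises that error:

```python
from ray import tune


def hp_space(trial):
    return {
        "learning_rate": tune.loguniform(1e-5, 1e-3),
        # The Trainer treats this as a training parameter and errors out,
        # since TrainingArguments has no `wandb` field.
        "wandb": {"project": "my-project"},
    }


# `trainer` is an already-constructed transformers.Trainer instance.
trainer.hyperparameter_search(hp_space=hp_space, backend="ray", n_trials=4)
```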
One way to work around this is to define a Trainer subclass that drops the `wandb` entry before the standard setup runs, and use it in place of the stock Trainer.
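A minimal sketch of such a subclass (the class name `RayWandbTrainer` is made up; with the Ray backend, the `trial` handed to `_hp_search_setup` is the plain config dict):

```python
from transformers import Trainer


class RayWandbTrainer(Trainer):
    def _hp_search_setup(self, trial):
        # With the Ray backend, `trial` is the config dict built from hp_space,
        # so the wandb entry can be dropped before the base class maps every
        # remaining key onto TrainingArguments.
        if isinstance(trial, dict):
            trial.pop("wandb", None)
        super()._hp_search_setup(trial)
```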
However, this looks like a hack: discarding `wandb` arguments from the model config in `_hp_search_setup` should be standard Trainer behavior. That's why I'm submitting a PR that directly modifies `_hp_search_setup` in the Trainer class to ignore `wandb` arguments when Ray is chosen as the backend, as sketched below.
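In rough outline (a simplified sketch of the proposed behavior, not the verbatim diff):

```python
from transformers.trainer_utils import HPSearchBackend


# Method on the Trainer class (shown standalone for brevity).
def _hp_search_setup(self, trial):
    """Simplified sketch of Trainer._hp_search_setup with the proposed change."""
    self._trial = trial
    if self.hp_search_backend is None or trial is None:
        return
    # Optuna hands over a trial object to sample from; Ray hands over the config dict itself.
    params = self.hp_space(trial) if self.hp_search_backend == HPSearchBackend.OPTUNA else trial
    if self.hp_search_backend == HPSearchBackend.RAY:
        # The `wandb` entry configures Weights & Biases for Ray Tune; it is not
        # a TrainingArguments field, so it is ignored here.
        params.pop("wandb", None)
    for key, value in params.items():
        if not hasattr(self.args, key):
            raise AttributeError(
                f"Trying to set {key} in the hyperparameter search but there is "
                "no corresponding field in `TrainingArguments`."
            )
        setattr(self.args, key, value)
```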
Before submitting

- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
I'm tagging @richardliaw and @amogkam as they're directly involved in Ray Tune.