DOC Increase execution speed of plot_sgd_comparison #21610
Conversation
…n to increase execution speed
@sveneschlbeck it'd be useful to post the output of the example (text, and image if it produces any) before and after your change, and the time it takes to run on your machine (you can do …). There are also failing lint tests in your PR; you should enable …
@adrinjalali I ran the test using the …
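For reference, a minimal sketch of one way to time the example end to end locally (not part of the PR; the path assumes a scikit-learn source checkout, and a non-interactive matplotlib backend is forced so plt.show() does not block):

```python
# Time the example script locally (illustrative sketch only).
import runpy
import time

import matplotlib

# Use a non-interactive backend so plt.show() inside the example does not block.
matplotlib.use("Agg")

start = time.perf_counter()
runpy.run_path(
    "examples/linear_model/plot_sgd_comparison.py",  # assumed location in the repo
    run_name="__main__",
)
elapsed = time.perf_counter() - start
print(f"plot_sgd_comparison.py took {elapsed:.1f} s")
```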
…nto speed_increased_example_modlinear
Thank you for the PR @sveneschlbeck!

I think that setting warm_start=True changes the narrative of the example: the model trained on a previous round will influence the model in the next.
@thomasjpfan Agreed, we also had a similar discussion at another example and concluded that it may change the example's character. Also, some examples are never run multiple times in a row, making the param toothless. Will remove it. Thanks for the input!
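For context, a small sketch (not the example's actual code) of what warm_start=True changes: successive fit calls reuse the previous coefficients instead of restarting, so the rounds are no longer independent.

```python
# Illustrative sketch: warm_start=True makes repeated fit() calls continue
# from the previous solution rather than starting from scratch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, random_state=0)

cold = SGDClassifier(max_iter=5, tol=None, random_state=0)
warm = SGDClassifier(max_iter=5, tol=None, random_state=0, warm_start=True)

for _ in range(3):
    cold.fit(X, y)  # each call restarts the optimization from scratch
    warm.fit(X, y)  # each call continues from the previous coefficients

print("cold coef norm:", np.linalg.norm(cold.coef_))
print("warm coef norm:", np.linalg.norm(warm.coef_))
```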
Please apply …
@adrinjalali Will do.
The error on CircleCI should be fixed by syncing with …
LGTM
#21598 @sply88 @adrinjalali @cakiki

Adapted a few things in the module examples/linear_model/plot_sgd_comparison to make it a bit faster:
- lowered the max_iter parameter to a point where execution is still fast but the parameter doesn't kill the optimization process
- added warm_start to a bunch of classifiers
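Roughly, the max_iter part of the change looks like the following sketch (the classifiers and values here are illustrative, not the PR's actual diff):

```python
# Illustrative sketch of lowering max_iter in the example's classifier list;
# see the PR diff for the real values.
from sklearn.linear_model import Perceptron, SGDClassifier

# Before: a large max_iter dominates the example's runtime.
slow_classifiers = [
    ("SGD", SGDClassifier(max_iter=1000)),
    ("Perceptron", Perceptron(max_iter=1000)),
]

# After: a smaller max_iter keeps the example fast while still letting
# the optimizers make enough progress for the comparison plot.
fast_classifiers = [
    ("SGD", SGDClassifier(max_iter=100)),
    ("Perceptron", Perceptron(max_iter=100)),
]
```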