[MRG] Better convergence warnings for lbfgs solver in LogisticRegression #11767
When the `lbfgs` solver in `LogisticRegression` fails to converge, the resulting `ConvergenceWarning` is not very informative. This PR increases its verbosity so we get a better estimate of how bad the convergence is. This is particularly relevant if `lbfgs` is to become the default solver (#11476).

The tricky part is that, as far as I understood, `scipy.optimize.fmin_l_bfgs_b` only returns the evaluated gradient at the minimum, while the convergence criterion is the max |projected gradient|. Still, IMO providing at least some information about the final gradient is better than nothing.

Example
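A minimal sketch along these lines (assumed data and `max_iter`, not necessarily the exact script behind the outputs below) reproduces the warning by giving `lbfgs` too small an iteration budget:

```python
# Sketch: force lbfgs to stop before convergence and capture the warning it emits.
import warnings

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=100, random_state=0)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # max_iter=5 is far too small here, so the solver stops early.
    LogisticRegression(solver="lbfgs", max_iter=5).fit(X, y)

# The wording of this message is what differs between master and this PR.
for w in caught:
    print(f"{w.category.__name__}: {w.message}")
```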
Output on master
Output with this PR
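For reference, here is a sketch of what `scipy.optimize.fmin_l_bfgs_b` reports back when it stops early (a toy Rosenbrock objective here, not scikit-learn's logistic loss). The returned dict exposes the gradient at the final iterate and a status flag, but not the projected gradient used for the `pgtol` stopping criterion, which is why the more verbose warning can at best report something like the max absolute gradient:

```python
# Sketch of the fmin_l_bfgs_b return value when the iteration budget is exhausted.
import numpy as np
from scipy.optimize import fmin_l_bfgs_b, rosen, rosen_der


def objective(w):
    # Rosenbrock function and its gradient: hard enough that a small
    # iteration budget stops the solver before convergence.
    return rosen(w), rosen_der(w)


w0 = np.zeros(10)
w_opt, f_min, info = fmin_l_bfgs_b(objective, w0, maxiter=5)

print("warnflag:", info["warnflag"])   # 1 -> hit the iteration/evaluation limit
print("iterations:", info["nit"])
# Max |gradient| at the last iterate: the kind of quantity a more verbose
# ConvergenceWarning can report, since the projected gradient itself is
# not returned to the caller.
print("max |grad|:", np.abs(info["grad"]).max())
```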