Raise ValueError instead of RuntimeWarning for LinearModelLoss #27332
Changes from all commits
5601fb5
09e27d2
428c736
0721f6b
9585d68
@@ -291,13 +291,29 @@
         if not self.base_loss.is_multiclass:
             grad = np.empty_like(coef, dtype=weights.dtype)
-            grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights
+            with np.errstate(all="raise"):
+                try:
+                    grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights
+                except FloatingPointError:
+                    raise ValueError(
+                        "Overflow detected. Try scaling the target variable or"
+                        " features, or using a different solver"
+                    ) from None
             if self.fit_intercept:
                 grad[-1] = grad_pointwise.sum()
             return grad
         else:
             grad = np.empty((n_classes, n_dof), dtype=weights.dtype, order="F")
             # grad_pointwise.shape = (n_samples, n_classes)
-            grad[:, :n_features] = grad_pointwise.T @ X + l2_reg_strength * weights
+            with np.errstate(all="raise"):
+                try:
+                    grad[:, :n_features] = (
+                        grad_pointwise.T @ X + l2_reg_strength * weights
+                    )
+                except FloatingPointError:
+                    raise ValueError(
+                        "Overflow detected. Try scaling the target variable or"
+                        " features, or using a different solver"
+                    ) from None
             if self.fit_intercept:
                 grad[:, -1] = grad_pointwise.sum(axis=0)
             if coef.ndim == 1:
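For context (not part of the PR), here is a minimal runnable sketch of the pattern the diff relies on: under `np.errstate(all="raise")`, NumPy turns a floating-point overflow in the matmul into a `FloatingPointError` instead of a `RuntimeWarning`, which the patch then re-raises as a `ValueError`. The array values below are made up purely to force an overflow.

```python
import numpy as np

# Made-up values large enough that the product overflows float64 (max ~1.8e308).
X = np.full((3, 2), 1e200)
grad_pointwise = np.full(3, 1e200)

# By default NumPy only emits "RuntimeWarning: overflow encountered in matmul"
# and silently returns inf entries.
print(X.T @ grad_pointwise)  # [inf inf]

# Under errstate(all="raise") the same overflow becomes a FloatingPointError,
# which is then converted into an informative ValueError, as in the PR.
with np.errstate(all="raise"):
    try:
        grad = X.T @ grad_pointwise
    except FloatingPointError:
        raise ValueError(
            "Overflow detected. Try scaling the target variable or"
            " features, or using a different solver"
        ) from None
```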
Review thread on the `with np.errstate(all="raise"):` lines:

- Reviewer: "If this is an overflow, I think that using …"
- Author: "I tried using …"
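The exact suggestion above is truncated in this extract; my assumption (not the reviewer's wording) is that it concerns narrowing the error state to overflow only. For reference, a minimal sketch of how `np.errstate(all="raise")` differs from the per-category flag `over="raise"`:

```python
import numpy as np

x = np.array([1e300])

# all="raise" escalates every floating-point error category.
with np.errstate(all="raise"):
    try:
        x * x  # overflow -> FloatingPointError
    except FloatingPointError as exc:
        print("all='raise':", exc)

# over="raise" escalates only overflow; other categories keep their defaults.
with np.errstate(over="raise"):
    try:
        x * x  # overflow still raises
    except FloatingPointError as exc:
        print("over='raise':", exc)
    print(np.array([1.0]) / 0.0)  # divide-by-zero only warns, prints [inf]
```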
Author, replying to the suggestion to use `from exc`:

When I use `exc` as you have in this suggestion, testing on the dataset from … I get the following: … So it still includes the original exception that I am "wrapping" with a ValueError and message. Do you reckon it's better this way?
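For reference (not part of the PR), a minimal sketch of the behaviour being discussed, using hypothetical helper names: `raise ... from exc` keeps the original `FloatingPointError` chained to the new `ValueError` (it appears in the traceback and in `__cause__`), while `raise ... from None` suppresses it. The overflowing inputs are made up.

```python
import numpy as np


def gradient_chained(X, g):
    # "from exc": the original FloatingPointError stays attached as __cause__
    # and is printed as "The above exception was the direct cause of ...".
    with np.errstate(all="raise"):
        try:
            return X.T @ g
        except FloatingPointError as exc:
            raise ValueError("Overflow detected. Try scaling ...") from exc


def gradient_suppressed(X, g):
    # "from None": only the ValueError is reported; the original error is hidden.
    with np.errstate(all="raise"):
        try:
            return X.T @ g
        except FloatingPointError:
            raise ValueError("Overflow detected. Try scaling ...") from None


X = np.full((3, 2), 1e200)
g = np.full(3, 1e200)

for fn in (gradient_chained, gradient_suppressed):
    try:
        fn(X, g)
    except ValueError as err:
        print(fn.__name__, "->", repr(err.__cause__))
# gradient_chained   -> the chained FloatingPointError
# gradient_suppressed -> None
```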