[MRG] Changed examples so they produce the same values on OS X #11289
Conversation
lgtm if it passes ;)
Almost there, fails only on one build:
So that's a numpy version thing? Should we add a fixture? Seems a bit overkill... this doesn't happen in other places?
Doc tests pass even if I change those 0.00001s to scientific notation. I'll do that.
Py3.6 failing:
That seems like a reasonable level of precision for adding ellipsis, though :)
Personally, I don't like making examples complex (e.g., using something like
The point of the cross-validation example was to show off the scoring methods, right? I think this was a simplification because it doesn't use probability=True, which makes the SVM use Platt scaling, which is a pretty weird thing to do, imho, in particular inside the cross-validation. The problem was that with the default tolerance there are no digits that are equal, I think.
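For context, the kind of snippet under discussion looks roughly like the following - a minimal sketch (not the exact doctest from the docs) of scoring an SVM with cross_val_score, without probability=True, so no Platt scaling is involved:

```python
# Sketch of a cross-validation scoring example of the kind discussed above.
# The dataset and parameter choices here are illustrative assumptions.
from sklearn import datasets, svm
from sklearn.model_selection import cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = svm.SVC(kernel='linear', C=1, random_state=0)

# One score per fold; the low-order digits of these values are what
# differed across platforms in the failing doctests.
scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
print(scores)
```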
"All checks have passed" - so elusive, but apparently still attainable. |
sklearn/svm/classes.py (Outdated)

@@ -352,7 +351,7 @@ class LinearSVR(LinearModel, RegressorMixin):
    sklearn.linear_model.SGDRegressor
        SGDRegressor can optimize the same cost function as LinearSVR
-       by adjusting the penalty and loss parameters. In addition it requires
+       by adjusting the penalty and loss parameters. In addition it .requires
small mistake
@georgipeev Could you take some time to confirm? I would be surprised if this is the case. Thanks in advance.
@amueller's recollection is correct - at the default tolerance many values in these examples only matched 1-2 digits after the decimal point, and one value matched none when running against whichever RNG library gets used under OS X.
Thanks. I think I'll vote +0 here, since I still doubt whether it's good to make examples complex. E.g., I think
CI failure seems unrelated. How should we proceed?
@georgipeev Master branch is failing now. Please merge master branch in after #11318 is merged.
…enerator functions (scikit-learn#11276)
Recommend use of `filename.joblib` instead of `filename.pkl` for models persisted via the joblib library, to reduce confusion when it comes time to load a model: it is then clear whether a file was saved using the `pickle` or `joblib` library.
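The recommended persistence pattern can be sketched as follows - the filename is an illustrative assumption; the point is only that the `.joblib` suffix signals which library wrote the file:

```python
# Sketch of joblib-based model persistence with a self-documenting suffix.
# The filename 'model.joblib' is an assumption for illustration.
import joblib
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
joblib.dump(model, 'model.joblib')       # clearly saved with joblib, not pickle
restored = joblib.load('model.joblib')   # no ambiguity about how to load it
```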
…_takes_y() to work with @deprecated in Python 2 (scikit-learn#11277)
e704a24 to 6492d8a
Is anything else required before this PR can be merged?
Something weird happened with your history, but I think it'll go away if we squash and merge. Looks good otherwise.
LGTM, merging.
I tested this issue before the above merge with a conda env that included mkl in an attempt to get the same result on macOS, but the test still failed. While the above merge fixes the tests by removing precision, it doesn't align the actual results. The liblinear code is directly compiled against the system blas as in the output below. This likely explains the system specific discrepancy.
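One way to check which BLAS/LAPACK a given environment's NumPy was built against - the kind of platform difference described above - is NumPy's built-in config report:

```python
# Print the BLAS/LAPACK build configuration of the local NumPy install.
# The output differs by platform (e.g. MKL vs. Accelerate vs. OpenBLAS),
# which is the root of the cross-platform numeric discrepancies discussed here.
import numpy as np

np.show_config()
```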
@TheAtomicOption yes - the results can't be aligned without increasing the precision a lot, and we found that wasn't worth it for a doctest.
Reference Issues/PRs
Fixes #10213.
What does this implement/fix? Explain your changes.
Doctests were failing because of different math libraries used for random number generation. Decreasing the tolerance fixed the problem. Also switched the scoring method for cross validation and the corresponding results.
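The mechanism behind the fix can be illustrated with doctest's ELLIPSIS flag: the expected output pins only the stable leading digits, and "..." absorbs whatever low-order digits the platform's math library produces. A minimal sketch, using doctest's output checker directly:

```python
# Minimal illustration of ellipsis-based doctest matching: only the
# leading digits are pinned, so platform-dependent trailing digits pass.
import doctest

checker = doctest.OutputChecker()
got = repr(1.0 / 3.0) + "\n"   # actual output, e.g. '0.3333333333333333'
want = "0.333...\n"            # platform-independent expected output
print(checker.check_output(want, got, doctest.ELLIPSIS))  # True
```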