Merge branch 'scikit-learn/master' into 'jakirkham/use_ptrs__enet_coordinate_descent' · scikit-learn/scikit-learn@3b5acf9 · GitHub
Commit 3b5acf9

Merge branch 'scikit-learn/master' into 'jakirkham/use_ptrs__enet_coordinate_descent'

2 parents 1b5ef15 + fc521a0
123 files changed: +9,687 additions, −1,790 deletions

Note: large commits hide some content by default; only a subset of the 123 changed files is shown below.

doc/developers/contributing.rst

Lines changed: 8 additions & 0 deletions

@@ -140,6 +140,14 @@ feedback:
   your **Python, scikit-learn, numpy, and scipy versions**. This information
   can be found by running the following code snippet::
 
+    >>> import sklearn
+    >>> sklearn.show_versions()  # doctest: +SKIP
+
+  .. note::
+
+      This utility function is only available in scikit-learn v0.20+.
+      For previous versions, one has to explicitly run::
+
     import platform; print(platform.platform())
     import sys; print("Python", sys.version)
     import numpy; print("NumPy", numpy.__version__)
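For bug reports that must work on both old and new installs, the note's fallback can be combined with the new helper; a minimal sketch (the try/except guard is illustrative, not part of the commit)::

    try:
        import sklearn
        sklearn.show_versions()  # available from scikit-learn 0.20 onwards
    except AttributeError:
        # Older releases lack show_versions(); print the details manually.
        import platform; print(platform.platform())
        import sys; print("Python", sys.version)
        import numpy; print("NumPy", numpy.__version__)
        import scipy; print("SciPy", scipy.__version__)
        print("Scikit-Learn", sklearn.__version__)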

doc/developers/tips.rst

Lines changed: 3 additions & 7 deletions

@@ -121,15 +121,11 @@ Issue: Self-contained example for bug
 Issue: Software versions
 ::
 
-    To help diagnose your issue, could you please paste the output of:
+    To help diagnose your issue, please paste the output of:
     ```py
-    import platform; print(platform.platform())
-    import sys; print("Python", sys.version)
-    import numpy; print("NumPy", numpy.__version__)
-    import scipy; print("SciPy", scipy.__version__)
-    import sklearn; print("Scikit-Learn", sklearn.__version__)
+    import sklearn; sklearn.show_versions()
     ```
-    ? Thanks.
+    Thanks.
 
 Issue: Code blocks
 ::

doc/modules/classes.rst

Lines changed: 1 addition & 0 deletions

@@ -48,6 +48,7 @@ Functions
   config_context
   get_config
   set_config
+  show_versions
 
 .. _calibration_ref:
doc/modules/compose.rst

Lines changed: 4 additions & 4 deletions

@@ -342,7 +342,7 @@ and ``value`` is an estimator object::
     >>> estimators = [('linear_pca', PCA()), ('kernel_pca', KernelPCA())]
     >>> combined = FeatureUnion(estimators)
     >>> combined # doctest: +NORMALIZE_WHITESPACE, +ELLIPSIS
-    FeatureUnion(n_jobs=1,
+    FeatureUnion(n_jobs=None,
                  transformer_list=[('linear_pca', PCA(copy=True,...)),
                                    ('kernel_pca', KernelPCA(alpha=1.0,...))],
                  transformer_weights=None)
@@ -357,7 +357,7 @@ and ignored by setting to ``None``::
 
     >>> combined.set_params(kernel_pca=None)
     ... # doctest: +NORMALIZE_WHITESPACE, +ELLIPSIS
-    FeatureUnion(n_jobs=1,
+    FeatureUnion(n_jobs=None,
                  transformer_list=[('linear_pca', PCA(copy=True,...)),
                                    ('kernel_pca', None)],
                  transformer_weights=None)
@@ -423,7 +423,7 @@ By default, the remaining rating columns are ignored (``remainder='drop'``)::
     ...     remainder='drop')
 
     >>> column_trans.fit(X) # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
-    ColumnTransformer(n_jobs=1, remainder='drop', sparse_threshold=0.3,
+    ColumnTransformer(n_jobs=None, remainder='drop', sparse_threshold=0.3,
                       transformer_weights=None,
                       transformers=...)
 
@@ -496,7 +496,7 @@ above example would be::
     ...     ('city', CountVectorizer(analyzer=lambda x: [x])),
     ...     ('title', CountVectorizer()))
     >>> column_trans # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS
-    ColumnTransformer(n_jobs=1, remainder='drop', sparse_threshold=0.3,
+    ColumnTransformer(n_jobs=None, remainder='drop', sparse_threshold=0.3,
                       transformer_weights=None,
                       transformers=[('countvectorizer-1', ...)
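These doctest updates track the new joblib-style default, where ``n_jobs=None`` means "not set" (effectively one job unless a joblib context overrides it) rather than the hard-coded ``n_jobs=1``. A minimal sketch of the behavior the new reprs assume (not part of the commit)::

    from sklearn.decomposition import KernelPCA, PCA
    from sklearn.pipeline import FeatureUnion

    combined = FeatureUnion([('linear_pca', PCA()),
                             ('kernel_pca', KernelPCA())])
    # The default is now None, i.e. defer to joblib (a single process
    # unless a joblib parallel backend context says otherwise).
    print(combined.n_jobs)  # -> None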

doc/modules/kernel_approximation.rst

Lines changed: 1 addition & 2 deletions

@@ -64,10 +64,9 @@ a linear algorithm, for example a linear SVM::
     SGDClassifier(alpha=0.0001, average=False, class_weight=None,
            early_stopping=False, epsilon=0.1, eta0=0.0, fit_intercept=True,
            l1_ratio=0.15, learning_rate='optimal', loss='hinge', max_iter=5,
-           n_iter=None, n_iter_no_change=5, n_jobs=1, penalty='l2',
+           n_iter=None, n_iter_no_change=5, n_jobs=None, penalty='l2',
            power_t=0.5, random_state=None, shuffle=True, tol=None,
            validation_fraction=0.1, verbose=0, warm_start=False)
-
     >>> clf.score(X_features, y)
     1.0
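For context, the surrounding doc example maps data through an approximate RBF feature map before fitting the linear classifier whose repr changes above; a self-contained sketch reconstructed under that assumption (the toy data and parameters here are illustrative)::

    from sklearn.kernel_approximation import RBFSampler
    from sklearn.linear_model import SGDClassifier

    X = [[0, 0], [1, 1], [1, 0], [0, 1]]
    y = [0, 0, 1, 1]
    rbf_feature = RBFSampler(gamma=1, random_state=1)   # approximate RBF map
    X_features = rbf_feature.fit_transform(X)
    clf = SGDClassifier(max_iter=5).fit(X_features, y)  # linear SVM (hinge loss)
    print(clf.score(X_features, y))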

doc/modules/linear_model.rst

Lines changed: 3 additions & 1 deletion

@@ -45,7 +45,9 @@ and will store the coefficients :math:`w` of the linear model in its
     >>> from sklearn import linear_model
     >>> reg = linear_model.LinearRegression()
     >>> reg.fit ([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
-    LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
+    ... # doctest: +NORMALIZE_WHITESPACE
+    LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None,
+                     normalize=False)
     >>> reg.coef_
     array([0.5, 0.5])
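The fitted ``coef_`` above is the least-squares solution, so the same numbers fall out of a plain NumPy computation. A quick cross-check (illustrative, not from the commit)::

    import numpy as np

    X = np.array([[0, 0], [1, 1], [2, 2]], dtype=float)
    y = np.array([0, 1, 2], dtype=float)
    # Append a column of ones for the intercept, then solve min ||Xb w - y||^2.
    Xb = np.hstack([X, np.ones((3, 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    print(w[:2])  # ~[0.5, 0.5], matching reg.coef_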

doc/modules/model_evaluation.rst

Lines changed: 1 addition & 1 deletion

@@ -105,7 +105,7 @@ Usage examples:
     >>> model = svm.SVC()
     >>> cross_val_score(model, X, y, cv=5, scoring='wrong_choice')
     Traceback (most recent call last):
-    ValueError: 'wrong_choice' is not a valid scoring value. Valid options are ['accuracy', 'adjusted_mutual_info_score', 'adjusted_rand_score', 'average_precision', 'balanced_accuracy', 'brier_score_loss', 'completeness_score', 'explained_variance', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'fowlkes_mallows_score', 'homogeneity_score', 'mutual_info_score', 'neg_log_loss', 'neg_mean_absolute_error', 'neg_mean_squared_error', 'neg_mean_squared_log_error', 'neg_median_absolute_error', 'normalized_mutual_info_score', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc', 'v_measure_score']
+    ValueError: 'wrong_choice' is not a valid scoring value. Use sorted(sklearn.metrics.SCORERS.keys()) to get valid options.
 
 .. note::
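The shortened error message delegates the listing to the scorer registry itself; the snippet it recommends can be run directly::

    from sklearn import metrics

    # Every string accepted by the `scoring` parameter, alphabetically.
    print(sorted(metrics.SCORERS.keys()))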

doc/modules/sgd.rst

Lines changed: 1 addition & 1 deletion

@@ -64,7 +64,7 @@ for the training samples::
     SGDClassifier(alpha=0.0001, average=False, class_weight=None,
            early_stopping=False, epsilon=0.1, eta0=0.0, fit_intercept=True,
            l1_ratio=0.15, learning_rate='optimal', loss='hinge', max_iter=5,
-           n_iter=None, n_iter_no_change=5, n_jobs=1, penalty='l2',
+           n_iter=None, n_iter_no_change=5, n_jobs=None, penalty='l2',
            power_t=0.5, random_state=None, shuffle=True, tol=None,
            validation_fraction=0.1, verbose=0, warm_start=False)

doc/tutorial/statistical_inference/model_selection.rst

Lines changed: 3 additions & 3 deletions

@@ -269,9 +269,9 @@ parameter automatically by cross-validation::
     >>> y_diabetes = diabetes.target
     >>> lasso.fit(X_diabetes, y_diabetes)
     LassoCV(alphas=None, copy_X=True, cv=3, eps=0.001, fit_intercept=True,
-        max_iter=1000, n_alphas=100, n_jobs=1, normalize=False, positive=False,
-        precompute='auto', random_state=None, selection='cyclic', tol=0.0001,
-        verbose=False)
+        max_iter=1000, n_alphas=100, n_jobs=None, normalize=False,
+        positive=False, precompute='auto', random_state=None,
+        selection='cyclic', tol=0.0001, verbose=False)
     >>> # The estimator chose automatically its lambda:
     >>> lasso.alpha_ # doctest: +ELLIPSIS
     0.01229...
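A self-contained version of the tutorial step whose repr is reflowed above; reconstructed from the visible context, so the exact setup is an assumption::

    from sklearn import datasets
    from sklearn.linear_model import LassoCV

    diabetes = datasets.load_diabetes()
    lasso = LassoCV(cv=3)
    lasso.fit(diabetes.data, diabetes.target)
    print(lasso.alpha_)  # regularization strength chosen by cross-validation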

doc/tutorial/statistical_inference/supervised_learning.rst

Lines changed: 12 additions & 6 deletions

@@ -95,7 +95,7 @@ Scikit-learn documentation for more information about this type of classifier.)
     >>> knn = KNeighborsClassifier()
     >>> knn.fit(iris_X_train, iris_y_train) # doctest: +NORMALIZE_WHITESPACE
     KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
-           metric_params=None, n_jobs=1, n_neighbors=5, p=2,
+           metric_params=None, n_jobs=None, n_neighbors=5, p=2,
            weights='uniform')
     >>> knn.predict(iris_X_test)
     array([1, 2, 1, 0, 0, 0, 2, 1, 2, 0])
@@ -176,13 +176,16 @@ Linear models: :math:`y = X\beta + \epsilon`
     >>> from sklearn import linear_model
     >>> regr = linear_model.LinearRegression()
     >>> regr.fit(diabetes_X_train, diabetes_y_train)
-    LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
+    ... # doctest: +NORMALIZE_WHITESPACE
+    LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None,
+                     normalize=False)
     >>> print(regr.coef_)
     [   0.30349955 -237.63931533  510.53060544  327.73698041 -814.13170937
       492.81458798  102.84845219  184.60648906  743.51961675   76.09517222]
 
     >>> # The mean square error
-    >>> np.mean((regr.predict(diabetes_X_test)-diabetes_y_test)**2)# doctest: +ELLIPSIS
+    >>> np.mean((regr.predict(diabetes_X_test)-diabetes_y_test)**2)
+    ... # doctest: +ELLIPSIS
     2004.56760268...
 
     >>> # Explained variance score: 1 is perfect prediction
@@ -257,8 +260,11 @@ diabetes dataset rather than our synthetic data::
     >>> from __future__ import print_function
     >>> print([regr.set_params(alpha=alpha
     ...            ).fit(diabetes_X_train, diabetes_y_train,
-    ...            ).score(diabetes_X_test, diabetes_y_test) for alpha in alphas]) # doctest: +ELLIPSIS
-    [0.5851110683883..., 0.5852073015444..., 0.5854677540698..., 0.5855512036503..., 0.5830717085554..., 0.57058999437...]
+    ...            ).score(diabetes_X_test, diabetes_y_test)
+    ...        for alpha in alphas])
+    ... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
+    [0.5851110683883..., 0.5852073015444..., 0.5854677540698...,
+     0.5855512036503..., 0.5830717085554..., 0.57058999437...]
 
 
 .. note::
@@ -372,7 +378,7 @@ function or **logistic** function:
     >>> logistic.fit(iris_X_train, iris_y_train)
     LogisticRegression(C=100000.0, class_weight=None, dual=False,
            fit_intercept=True, intercept_scaling=1, max_iter=100,
-           multi_class='ovr', n_jobs=1, penalty='l2', random_state=None,
+           multi_class='ovr', n_jobs=None, penalty='l2', random_state=None,
            solver='liblinear', tol=0.0001, verbose=0, warm_start=False)
 
 This is known as :class:`LogisticRegression`.
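The reflowed comprehension in the middle hunk sweeps a ridge penalty over several strengths and scores each fit on held-out data. A runnable sketch of that step (the alpha grid and the 20-sample hold-out split follow the tutorial's visible pattern but are assumptions here)::

    import numpy as np
    from sklearn import datasets, linear_model

    diabetes = datasets.load_diabetes()
    diabetes_X_train = diabetes.data[:-20]
    diabetes_X_test = diabetes.data[-20:]
    diabetes_y_train = diabetes.target[:-20]
    diabetes_y_test = diabetes.target[-20:]

    regr = linear_model.Ridge()
    alphas = np.logspace(-4, -1, 6)
    # set_params and fit both return the estimator, so the calls chain.
    print([regr.set_params(alpha=alpha)
               .fit(diabetes_X_train, diabetes_y_train)
               .score(diabetes_X_test, diabetes_y_test)
           for alpha in alphas])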

0 commit comments
