@@ -140,19 +140,19 @@ parameters to maximize the cross-validation score. This object takes an
 estimator during the construction and exposes an estimator API::

     >>> from sklearn.grid_search import GridSearchCV
-    >>> gammas = np.logspace(-6, -1, 10)
-    >>> clf = GridSearchCV(estimator=svc, param_grid=dict(gamma=gammas),
+    >>> Cs = np.logspace(-6, -1, 10)
+    >>> clf = GridSearchCV(estimator=svc, param_grid=dict(C=Cs),
     ...                    n_jobs=-1)
     >>> clf.fit(X_digits[:1000], y_digits[:1000])        # doctest: +ELLIPSIS
     GridSearchCV(cv=None,...
     >>> clf.best_score_                                  # doctest: +ELLIPSIS
-    0.924...
-    >>> clf.best_estimator_.gamma == 1e-6
-    True
+    0.925...
+    >>> clf.best_estimator_.C                            # doctest: +ELLIPSIS
+    0.0077...

     >>> # Prediction performance on test set is not as good as on train set
-    >>> clf.score(X_digits[1000:], y_digits[1000:])
-    0.94228356336260977
+    >>> clf.score(X_digits[1000:], y_digits[1000:])      # doctest: +ELLIPSIS
+    0.943...


 By default, the :class:`GridSearchCV` uses a 3-fold cross-validation. However,
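The grid-search-with-cross-validation logic shown in the hunk above can be sketched in plain Python. This is a minimal illustration of the concept, not scikit-learn's implementation; `evaluate`, `k_fold_indices`, and the toy scoring lambda are hypothetical stand-ins for fitting and scoring a real estimator on one train/test split:

```python
# Conceptual sketch of grid search: try every candidate parameter,
# score each by cross-validation, keep the best. Not sklearn internals.

def k_fold_indices(n_samples, k=3):
    """Split range(n_samples) into k contiguous (train, test) folds
    (k=3 matches GridSearchCV's default at the time of this doc)."""
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        stop = n_samples if i == k - 1 else start + fold_size
        test = list(range(start, stop))
        train = [j for j in range(n_samples) if j < start or j >= stop]
        folds.append((train, test))
    return folds

def grid_search(candidates, evaluate, n_samples, k=3):
    """Return (best_param, best_mean_score) over the candidate grid."""
    best_param, best_score = None, float("-inf")
    for param in candidates:
        scores = [evaluate(param, train, test)
                  for train, test in k_fold_indices(n_samples, k)]
        mean_score = sum(scores) / len(scores)
        if mean_score > best_score:
            best_param, best_score = param, mean_score
    return best_param, best_score

# Toy scoring function whose quality peaks at param == 0.01.
best, score = grid_search(
    candidates=[1e-6, 1e-4, 1e-2, 1.0],
    evaluate=lambda p, train, test: 1.0 - abs(p - 0.01),
    n_samples=9)
print(best)  # 0.01
```

In the real API, the candidate grid is passed as `param_grid` and the fit/score loop happens inside `GridSearchCV.fit`, parallelized over folds and candidates via `n_jobs`.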
@@ -165,7 +165,7 @@ a stratified 3-fold.

     >>> cross_validation.cross_val_score(clf, X_digits, y_digits)
     ...                                                  # doctest: +ELLIPSIS
-    array([ 0.935...,  0.958...,  0.937...])
+    array([ 0.938...,  0.963...,  0.944...])

 Two cross-validation loops are performed in parallel: one by the
 :class:`GridSearchCV` estimator to set ``gamma`` and the other one by
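The two nested loops described in that last context line can be sketched in plain Python: the outer loop plays the role of ``cross_val_score`` (estimating generalization), and on each outer training set an inner loop plays the role of ``GridSearchCV`` (picking the parameter). This is a hypothetical illustration; `evaluate` again stands in for fit-then-score on given sample indices:

```python
# Nested cross-validation: inner loop selects a parameter,
# outer loop measures how well that selection generalizes.

def k_folds(samples, k=3):
    """Yield (train, test) index lists for k roughly equal folds."""
    size = len(samples) // k
    for i in range(k):
        stop = len(samples) if i == k - 1 else (i + 1) * size
        test = samples[i * size:stop]
        train = [s for s in samples if s not in test]
        yield train, test

def inner_grid_search(candidates, evaluate, samples, k=3):
    """Inner loop (GridSearchCV's role): best parameter by mean CV score."""
    def mean_cv(p):
        scores = [evaluate(p, tr, te) for tr, te in k_folds(samples, k)]
        return sum(scores) / len(scores)
    return max(candidates, key=mean_cv)

def nested_cv(candidates, evaluate, samples, k=3):
    """Outer loop (cross_val_score's role): one score per outer fold."""
    outer_scores = []
    for train, test in k_folds(samples, k):
        best = inner_grid_search(candidates, evaluate, train, k)
        outer_scores.append(evaluate(best, train, test))
    return outer_scores

scores = nested_cv(
    candidates=[1e-6, 1e-2, 1.0],
    evaluate=lambda p, train, test: 1.0 - abs(p - 0.01),
    samples=list(range(9)))
print(scores)  # [1.0, 1.0, 1.0]
```

Because the parameter is re-selected inside each outer fold, the outer scores are unbiased by the selection step, which is exactly why `cross_val_score(clf, ...)` with a `GridSearchCV` estimator is a sound way to report performance.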