fix doctests · scikit-learn/scikit-learn@e75a6ff · GitHub
Commit e75a6ff

fix doctests
1 parent 6776eb6 commit e75a6ff
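
Aside (not part of the commit, my reading of the diff below): these doctests compare the printed output of each ">>> clf.fit(...)" example verbatim against the estimator's repr, so when SVC gains a compact_decision_function constructor parameter on this branch, the repr changes and every doctest that prints a fitted SVC has to be updated. A minimal sketch of checking such a file locally with the standard-library doctest module; the path, option flags, and this way of running the examples are assumptions, not necessarily what the project's doc build does:

    import doctest

    # Run the interactive examples embedded in the tutorial file, sharing one
    # namespace across the whole file.  Each expected-output block must match
    # the printed repr of the fitted estimator, with whitespace collapsed
    # wherever the +NORMALIZE_WHITESPACE directive is requested.
    results = doctest.testfile(
        "doc/tutorial/basic/tutorial.rst",   # path assumed from this diff
        module_relative=False,               # interpret the path relative to the cwd
        optionflags=doctest.NORMALIZE_WHITESPACE | doctest.ELLIPSIS,
    )
    print("attempted:", results.attempted, "failed:", results.failed)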

2 files changed: 28 additions, 21 deletions


doc/tutorial/basic/tutorial.rst

Lines changed: 24 additions & 18 deletions
@@ -176,9 +176,10 @@ which produces a new array that contains all but
 the last entry of ``digits.data``::
 
   >>> clf.fit(digits.data[:-1], digits.target[:-1]) # doctest: +NORMALIZE_WHITESPACE
-  SVC(C=100.0, cache_size=200, class_weight=None, coef0=0.0, degree=3,
-    gamma=0.001, kernel='rbf', max_iter=-1, probability=False,
-    random_state=None, shrinking=True, tol=0.001, verbose=False)
+  SVC(C=100.0, cache_size=200, class_weight=None, coef0=0.0,
+    compact_decision_function=None, degree=3, gamma=0.001, kernel='rbf',
+    max_iter=-1, probability=False, random_state=None, shrinking=True,
+    tol=0.001, verbose=False)
 
 Now you can predict new values, in particular, we can ask to the
 classifier what is the digit of our last image in the ``digits`` dataset,
@@ -214,9 +215,10 @@ persistence model, namely `pickle <http://docs.python.org/library/pickle.html>`_
   >>> iris = datasets.load_iris()
   >>> X, y = iris.data, iris.target
   >>> clf.fit(X, y) # doctest: +NORMALIZE_WHITESPACE
-  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
-    kernel='rbf', max_iter=-1, probability=False, random_state=None,
-    shrinking=True, tol=0.001, verbose=False)
+  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
+    compact_decision_function=None, degree=3, gamma=0.0, kernel='rbf',
+    max_iter=-1, probability=False, random_state=None, shrinking=True,
+    tol=0.001, verbose=False)
 
   >>> import pickle
   >>> s = pickle.dumps(clf)
@@ -288,17 +290,19 @@ maintained::
   >>> iris = datasets.load_iris()
   >>> clf = SVC()
   >>> clf.fit(iris.data, iris.target)
-  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
-    kernel='rbf', max_iter=-1, probability=False, random_state=None,
-    shrinking=True, tol=0.001, verbose=False)
+  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
+    compact_decision_function=None, degree=3, gamma=0.0, kernel='rbf',
+    max_iter=-1, probability=False, random_state=None, shrinking=True,
+    tol=0.001, verbose=False)
 
   >>> list(clf.predict(iris.data[:3]))
   [0, 0, 0]
 
   >>> clf.fit(iris.data, iris.target_names[iris.target])
-  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
-    kernel='rbf', max_iter=-1, probability=False, random_state=None,
-    shrinking=True, tol=0.001, verbose=False)
+  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
+    compact_decision_function=None, degree=3, gamma=0.0, kernel='rbf',
+    max_iter=-1, probability=False, random_state=None, shrinking=True,
+    tol=0.001, verbose=False)
 
   >>> list(clf.predict(iris.data[:3])) # doctest: +NORMALIZE_WHITESPACE
   ['setosa', 'setosa', 'setosa']
@@ -325,16 +329,18 @@ more than once will overwrite what was learned by any previous ``fit()``::
 
   >>> clf = SVC()
   >>> clf.set_params(kernel='linear').fit(X, y)
-  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
-    kernel='linear', max_iter=-1, probability=False, random_state=None,
-    shrinking=True, tol=0.001, verbose=False)
+  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
+    compact_decision_function=None, degree=3, gamma=0.0, kernel='linear',
+    max_iter=-1, probability=False, random_state=None, shrinking=True,
+    tol=0.001, verbose=False)
   >>> clf.predict(X_test)
   array([1, 0, 1, 1, 0])
 
   >>> clf.set_params(kernel='rbf').fit(X, y)
-  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
-    kernel='rbf', max_iter=-1, probability=False, random_state=None,
-    shrinking=True, tol=0.001, verbose=False)
+  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
+    compact_decision_function=None, degree=3, gamma=0.0, kernel='rbf',
+    max_iter=-1, probability=False, random_state=None, shrinking=True,
+    tol=0.001, verbose=False)
   >>> clf.predict(X_test)
   array([0, 0, 0, 1, 0])
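
Aside (not part of the commit): in every updated block compact_decision_function lands between coef0 and degree because scikit-learn estimators print their constructor parameters in alphabetical order, so one new keyword re-flows the whole expected repr. A small sketch of that behaviour; whether the parameter exists at all depends on this branch, so the final comment is only an assumption:

    from sklearn.svm import SVC

    clf = SVC(kernel='linear')
    # get_params() returns the constructor parameters; sorting its keys gives
    # the same alphabetical order the estimator's repr (and hence each
    # expected-output block above) uses.
    print(sorted(clf.get_params()))
    print(clf)  # this printed repr is what the doctests compare against
    # On this branch only (assumption): 'compact_decision_function' would
    # appear between 'coef0' and 'degree' in the sorted list.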

doc/tutorial/statistical_inference/supervised_learning.rst

Lines changed: 4 additions & 3 deletions
@@ -453,9 +453,10 @@ classification --:class:`SVC` (Support Vector Classification).
   >>> from sklearn import svm
   >>> svc = svm.SVC(kernel='linear')
   >>> svc.fit(iris_X_train, iris_y_train) # doctest: +NORMALIZE_WHITESPACE
-  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
-    kernel='linear', max_iter=-1, probability=False, random_state=None,
-    shrinking=True, tol=0.001, verbose=False)
+  SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
+    compact_decision_function=None, degree=3, gamma=0.0, kernel='linear',
+    max_iter=-1, probability=False, random_state=None, shrinking=True,
+    tol=0.001, verbose=False)
 
 
 .. warning:: **Normalizing data**
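
One more note on the # doctest: +NORMALIZE_WHITESPACE directives used above: they let the expected SVC(...) output be wrapped across different line breaks than the actual repr, because all runs of whitespace are collapsed before comparison. A self-contained illustration with the standard-library doctest module; the sample text is invented for the demo, not taken from the commit:

    import doctest

    # The expected output below is written on one line while the print call
    # emits two; +NORMALIZE_WHITESPACE makes doctest treat any run of spaces,
    # newlines, and indentation as a single space when comparing them.
    sample = '''
    >>> print("SVC(C=1.0, cache_size=200,\\n    tol=0.001, verbose=False)")  # doctest: +NORMALIZE_WHITESPACE
    SVC(C=1.0, cache_size=200, tol=0.001, verbose=False)
    '''

    parser = doctest.DocTestParser()
    test = parser.get_doctest(sample, {}, "normalize-whitespace-demo", None, 0)
    runner = doctest.DocTestRunner()
    runner.run(test)
    print(runner.summarize())  # expect: TestResults(failed=0, attempted=1)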
