@@ -964,7 +964,8 @@ The following example shows how to fit the majority rule classifier::
     >>> iris = datasets.load_iris()
     >>> X, y = iris.data[:, 1:3], iris.target

-    >>> clf1 = LogisticRegression(random_state=1)
+    >>> clf1 = LogisticRegression(solver='lbfgs', multi_class='multinomial',
+    ...                           random_state=1)
     >>> clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
     >>> clf3 = GaussianNB()

@@ -973,10 +974,10 @@ The following example shows how to fit the majority rule classifier::
     >>> for clf, label in zip([clf1, clf2, clf3, eclf], ['Logistic Regression', 'Random Forest', 'naive Bayes', 'Ensemble']):
     ...     scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
     ...     print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
-    Accuracy: 0.90 (+/- 0.05) [Logistic Regression]
+    Accuracy: 0.95 (+/- 0.04) [Logistic Regression]
     Accuracy: 0.94 (+/- 0.04) [Random Forest]
     Accuracy: 0.91 (+/- 0.04) [naive Bayes]
-    Accuracy: 0.95 (+/- 0.05) [Ensemble]
+    Accuracy: 0.95 (+/- 0.04) [Ensemble]


 Weighted Average Probabilities (Soft Voting)
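Note: the ``eclf`` estimator used in the cross-validation loop above is defined between the two hunks and therefore does not appear in the diff. A minimal, self-contained sketch of the majority-rule example as it reads after this change is shown below; the imports and the ``VotingClassifier`` line (with ``voting='hard'`` and the estimator names ``lr``, ``rf``, ``gnb``) are assumptions inferred from the surrounding section, not part of the diff::

    # Sketch of the full majority-rule example after this change.
    # Only the clf1/clf2/clf3 lines and the cross-validation loop come from
    # the hunks above; imports and the VotingClassifier call are assumed.
    from sklearn import datasets
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier

    iris = datasets.load_iris()
    X, y = iris.data[:, 1:3], iris.target

    clf1 = LogisticRegression(solver='lbfgs', multi_class='multinomial',
                              random_state=1)
    clf2 = RandomForestClassifier(n_estimators=50, random_state=1)
    clf3 = GaussianNB()
    eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
                            voting='hard')  # assumed; 'hard' = majority rule

    for clf, label in zip([clf1, clf2, clf3, eclf],
                          ['Logistic Regression', 'Random Forest',
                           'naive Bayes', 'Ensemble']):
        scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
        print("Accuracy: %0.2f (+/- %0.2f) [%s]"
              % (scores.mean(), scores.std(), label))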
@@ -1049,7 +1050,8 @@ The `VotingClassifier` can also be used together with `GridSearch` in order
 to tune the hyperparameters of the individual estimators::

     >>> from sklearn.model_selection import GridSearchCV
-    >>> clf1 = LogisticRegression(random_state=1)
+    >>> clf1 = LogisticRegression(solver='lbfgs', multi_class='multinomial',
+    ...                           random_state=1)
     >>> clf2 = RandomForestClassifier(random_state=1)
     >>> clf3 = GaussianNB()
     >>> eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='soft')
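The hunk ends at the ``eclf`` definition, so the grid search itself is not shown. A hedged sketch of how it would typically continue, using scikit-learn's ``<estimator name>__<parameter>`` convention with the names declared above (the specific parameter grid values are illustrative assumptions, not taken from the diff)::

    # Illustrative continuation (grid values are assumptions, not from the diff).
    # Parameters of the nested estimators are addressed as '<name>__<parameter>'.
    params = {'lr__C': [1.0, 100.0], 'rf__n_estimators': [20, 200]}
    grid = GridSearchCV(estimator=eclf, param_grid=params, cv=5)
    grid = grid.fit(iris.data, iris.target)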