docstring fixes · seckcoder/scikit-learn@8367109 · GitHub
Commit 8367109

jaquesgrobler authored and amueller committed
docstring fixes
1 parent e50fad6 commit 8367109

File tree

1 file changed: +5 −5 lines changed

examples/svm/plot_svm_scale_c.py

Lines changed: 5 additions & 5 deletions
@@ -27,7 +27,7 @@
 increase.
 
 When using, for example, :ref:`cross validation <cross_validation>`, to
-set amount of regularization with `C`, there will be a different
+set the amount of regularization with `C`, there will be a different
 amount of samples between every problem that we are using for model
 selection, as well as for the final problem that we want to use for
 training.
@@ -38,16 +38,16 @@
 account for the different training samples?`
 
 The figures below are used to illustrate the effect of scaling our
-`C` to compensate for the change in the amount of samples, in the
+`C` to compensate for the change in the number of samples, in the
 case of using an `L1` penalty, as well as the `L2` penalty.
 
 L1-penalty case
 -----------------
 In the `L1` case, theory says that prediction consistency
 (i.e. that under given hypothesis, the estimator
 learned predicts as well as an model knowing the true distribution)
-is not possible because of the biasof the `L1`. It does say, however,
-that model consistancy, in terms of finding the right set of non-zero
+is not possible because of the bias of the `L1`. It does say, however,
+that model consistency, in terms of finding the right set of non-zero
 parameters as well as their signs, can be achieved by scaling
 `C1`.
 
@@ -64,7 +64,7 @@
 fractions of a generated data-set.
 
 In the `L1` penalty case, the results are best when scaling our `C` with
-the amount of samples, `n`, which can be seen in the first figure.
+the number of samples, `n`, which can be seen in the first figure.
 
 For the `L2` penalty case, the best result comes from the case where `C`
 is not scaled.
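The docstring's claim for the `L1` case — that results are best when `C` is scaled with the number of samples `n` — can be sketched as follows. This is a minimal illustration, not the example script itself: the synthetic dataset, the base `C` value, and the training fractions are all assumptions chosen for demonstration.

```python
# Hedged sketch: scaling C with the number of training samples n when
# fitting an L1-penalized linear SVM on subsets of different sizes.
# The dataset and base_C below are illustrative, not from the commit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

base_C = 0.1
for frac in (0.5, 1.0):
    n = int(len(X) * frac)
    # L1 penalty requires dual=False in LinearSVC; per the docstring's
    # claim, C is scaled proportionally to the subset size n.
    clf = LinearSVC(C=base_C * n, penalty="l1", dual=False, max_iter=5000)
    clf.fit(X[:n], y[:n])
    n_nonzero = np.count_nonzero(clf.coef_)
    print(f"n={n}: C={base_C * n:.1f}, non-zero coefficients={n_nonzero}")
```

The L1 penalty drives some coefficients exactly to zero, so the count of non-zero coefficients shows which feature set each scaled-`C` fit selects; for the `L2` penalty, the docstring notes the unscaled `C` works best instead.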

0 commit comments
