8000 DOC Fix typo in Gaussian Process docs (#19039) · thomasjpfan/scikit-learn@9780abd · GitHub
DOC Fix typo in Gaussian Process docs (scikit-learn#19039)
1 parent 54375d2 commit 9780abd

1 file changed: +4 −4 lines


doc/modules/gaussian_process.rst

Lines changed: 4 additions & 4 deletions
@@ -156,9 +156,9 @@ required for fitting and predicting: while fitting KRR is fast in principle,
 the grid-search for hyperparameter optimization scales exponentially with the
 number of hyperparameters ("curse of dimensionality"). The gradient-based
 optimization of the parameters in GPR does not suffer from this exponential
-scaling and is thus considerable faster on this example with 3-dimensional
+scaling and is thus considerably faster on this example with 3-dimensional
 hyperparameter space. The time for predicting is similar; however, generating
-the variance of the predictive distribution of GPR takes considerable longer
+the variance of the predictive distribution of GPR takes considerably longer
 than just predicting the mean.
 
 GPR on Mauna Loa CO2 data
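The passage above notes that computing the predictive variance of a GPR is more expensive than predicting the mean alone. A minimal sketch (not part of the commit) of the two calls it contrasts, using scikit-learn's `GaussianProcessRegressor` on toy data:

```python
# Sketch, not part of the commit: predicting the mean alone vs. also
# requesting the predictive standard deviation from a fitted GPR.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.RandomState(0)
X = rng.uniform(0, 5, (40, 1))
y = np.sin(X).ravel() + 0.1 * rng.randn(40)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)

X_test = np.linspace(0, 5, 100).reshape(-1, 1)
y_mean = gpr.predict(X_test)                            # mean only
y_mean2, y_std = gpr.predict(X_test, return_std=True)   # mean + std: extra work
```

The mean returned by both calls is identical; only the second additionally propagates the covariance needed for `y_std`.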
@@ -294,7 +294,7 @@ with different choices of the hyperparameters. The first figure shows the
 predicted probability of GPC with arbitrarily chosen hyperparameters and with
 the hyperparameters corresponding to the maximum log-marginal-likelihood (LML).
 
-While the hyperparameters chosen by optimizing LML have a considerable larger
+While the hyperparameters chosen by optimizing LML have a considerably larger
 LML, they perform slightly worse according to the log-loss on test data. The
 figure shows that this is because they exhibit a steep change of the class
 probabilities at the class boundaries (which is good) but have predicted
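The comparison above is between the LML at arbitrarily chosen hyperparameters and at the optimized ones. A short sketch (not part of the commit; the toy data is an assumption) of how both quantities can be evaluated with scikit-learn's `GaussianProcessClassifier.log_marginal_likelihood`:

```python
# Sketch, not part of the commit: comparing the log-marginal-likelihood (LML)
# at optimized vs. arbitrarily chosen hyperparameters of a GPC.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.RandomState(0)
X = rng.randn(30, 1)
y = (X.ravel() > 0).astype(int)

# Fitting maximizes the LML over the kernel hyperparameters by default.
gpc = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)

lml_opt = gpc.log_marginal_likelihood(gpc.kernel_.theta)  # at the optimum
lml_fixed = gpc.log_marginal_likelihood(np.log([0.1]))    # arbitrary length-scale
```

Note that `theta` is the log-transformed hyperparameter vector, hence the `np.log` for the hand-picked length-scale.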
@@ -384,7 +384,7 @@ equivalent call to ``__call__``: ``np.diag(k(X, X)) == k.diag(X)``
 
 Kernels are parameterized by a vector :math:`\theta` of hyperparameters. These
 hyperparameters can for instance control length-scales or periodicity of a
-kernel (see below). All kernels support computing analytic gradients 
+kernel (see below). All kernels support computing analytic gradients
 of the kernel's auto-covariance with respect to :math:`log(\theta)` via setting
 ``eval_gradient=True`` in the ``__call__`` method.
 That is, a ``(len(X), len(X), len(theta))`` array is returned where the entry
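The kernel API described in this hunk can be exercised directly. A minimal sketch (not part of the commit) showing the ``diag`` equivalence and the ``(len(X), len(X), len(theta))`` gradient array for an RBF kernel, which has a single hyperparameter:

```python
# Sketch, not part of the commit: the kernel API described in the docs above.
import numpy as np
from sklearn.gaussian_process.kernels import RBF

X = np.random.RandomState(0).rand(5, 2)
k = RBF(length_scale=1.0)

# diag(X) equals the diagonal of the full kernel matrix k(X, X).
same = np.allclose(np.diag(k(X, X)), k.diag(X))

# eval_gradient=True additionally returns the analytic gradient of the
# auto-covariance with respect to log(theta).
K, K_gradient = k(X, eval_gradient=True)
```

For this kernel ``len(theta) == 1``, so ``K_gradient`` has shape ``(5, 5, 1)``.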

0 commit comments