@@ -156,9 +156,9 @@ required for fitting and predicting: while fitting KRR is fast in principle,
the grid-search for hyperparameter optimization scales exponentially with the
number of hyperparameters ("curse of dimensionality"). The gradient-based
optimization of the parameters in GPR does not suffer from this exponential
- scaling and is thus considerable faster on this example with 3-dimensional
+ scaling and is thus considerably faster on this example with 3-dimensional
hyperparameter space. The time for predicting is similar; however, generating
- the variance of the predictive distribution of GPR takes considerable longer
+ the variance of the predictive distribution of GPR takes considerably longer
than just predicting the mean.

GPR on Mauna Loa CO2 data
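As an illustrative sketch of the comparison in the hunk above (the sine data, the small grid, and the ``ExpSineSquared``/``WhiteKernel`` choices are assumptions for this sketch, not the setup of the shipped benchmark), a grid-searched ``KernelRidge`` can be contrasted with a gradient-optimized ``GaussianProcessRegressor`` as follows::

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import GridSearchCV
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

    rng = np.random.RandomState(0)
    X = 15 * rng.rand(100, 1)
    y = np.sin(X).ravel() + 0.1 * rng.randn(100)   # noisy periodic toy data

    # KRR: hyperparameters chosen by exhaustive grid search; the number of
    # grid points grows exponentially with the number of hyperparameters.
    param_grid = {"alpha": [1e0, 1e-1, 1e-2],
                  "kernel": [ExpSineSquared(ls, per)
                             for ls in [0.1, 1.0, 10.0]
                             for per in [3.0, 6.0, 9.0]]}
    krr = GridSearchCV(KernelRidge(kernel=ExpSineSquared()), param_grid=param_grid)
    krr.fit(X, y)

    # GPR: hyperparameters tuned by gradient ascent on the log-marginal-
    # likelihood, which does not scale exponentially with their number.
    gpr = GaussianProcessRegressor(kernel=ExpSineSquared() + WhiteKernel(1e-1))
    gpr.fit(X, y)

    X_plot = np.linspace(0, 20, 200)[:, np.newaxis]
    y_krr = krr.predict(X_plot)                          # mean prediction only
    y_gpr, y_std = gpr.predict(X_plot, return_std=True)  # the std costs extra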
@@ -294,7 +294,7 @@ with different choices of the hyperparameters. The first figure shows the
predicted probability of GPC with arbitrarily chosen hyperparameters and with
the hyperparameters corresponding to the maximum log-marginal-likelihood (LML).

- While the hyperparameters chosen by optimizing LML have a considerable larger
+ While the hyperparameters chosen by optimizing LML have a considerably larger
LML, they perform slightly worse according to the log-loss on test data. The
figure shows that this is because they exhibit a steep change of the class
probabilities at the class boundaries (which is good) but have predicted
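An illustrative sketch of this comparison (the 1-D threshold data and the RBF kernel below are assumptions for the sketch, not the exact setup behind the figures): one ``GaussianProcessClassifier`` keeps arbitrarily fixed hyperparameters via ``optimizer=None``, the other maximizes the LML, and both are evaluated with the log-loss on held-out data::

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF
    from sklearn.metrics import log_loss

    rng = np.random.RandomState(0)
    X = rng.uniform(0, 5, 100)[:, np.newaxis]
    y = np.array(X[:, 0] > 2.5, dtype=int)
    X_train, y_train, X_test, y_test = X[:50], y[:50], X[50:], y[50:]

    # Arbitrarily chosen, fixed hyperparameters (optimizer=None skips the
    # gradient ascent on the log-marginal-likelihood).
    gpc_fix = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                        optimizer=None).fit(X_train, y_train)
    # Hyperparameters tuned by maximizing the log-marginal-likelihood (default).
    gpc_opt = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    gpc_opt.fit(X_train, y_train)

    for name, gpc in [("fixed", gpc_fix), ("LML-optimized", gpc_opt)]:
        lml = gpc.log_marginal_likelihood(gpc.kernel_.theta)
        loss = log_loss(y_test, gpc.predict_proba(X_test)[:, 1])
        print("%s: LML=%.3f, test log-loss=%.3f" % (name, lml, loss))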
@@ -384,7 +384,7 @@ equivalent call to ``__call__``: ``np.diag(k(X, X)) == k.diag(X)``

Kernels are parameterized by a vector :math:`\theta` of hyperparameters. These
hyperparameters can for instance control length-scales or periodicity of a
- kernel (see below). All kernels support computing analytic gradients
+ kernel (see below). All kernels support computing analytic gradients
of the kernel's auto-covariance with respect to :math:`log(\theta)` via setting
``eval_gradient=True`` in the ``__call__`` method.
That is, a ``(len(X), len(X), len(theta))`` array is returned where the entry
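For concreteness, a small sketch of this kernel API (the product kernel below is just an arbitrary choice for illustration)::

    import numpy as np
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    X = np.random.RandomState(0).rand(5, 2)
    k = ConstantKernel(1.0) * RBF(length_scale=0.5)

    # diag(X) is the cheaper equivalent of np.diag(k(X, X))
    assert np.allclose(np.diag(k(X, X)), k.diag(X))

    # eval_gradient=True additionally returns the analytic gradient of the
    # auto-covariance with respect to log(theta)
    K, K_gradient = k(X, eval_gradient=True)
    print(k.theta)             # log-transformed hyperparameters, here 2 of them
    print(K_gradient.shape)    # (len(X), len(X), len(theta)) == (5, 5, 2)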