DOC better comment on performance bottleneck · scikit-learn/scikit-learn@d8c98a2 · GitHub
Commit d8c98a2

DOC better comment on performance bottleneck
1 parent 34e297e commit d8c98a2

File tree: 1 file changed, +5 −5 lines

sklearn/linear_model/_linear_loss.py

Lines changed: 5 additions & 5 deletions
@@ -456,8 +456,9 @@ def gradient_hessian(
             # Exit early without computing the hessian.
             return grad, hess, hessian_warning
 
-        # TODO: This "sandwich product", X' diag(W) X, can be greatly improved by
-        # a dedicated Cython routine.
+        # TODO: This "sandwich product", X' diag(W) X, is the main computational
+        # bottleneck for solvers. A dedicated Cython routine might improve it
+        # exploiting the symmetry (as opposed to, e.g., BLAS gemm).
         if sparse.issparse(X):
             hess[:n_features, :n_features] = (
                 X.T
@@ -467,9 +468,8 @@ def gradient_hessian(
                 @ X
             ).toarray()
         else:
-            # np.einsum may use less memory but the following is by far faster.
-            # This matrix multiplication (gemm) is most often the most time
-            # consuming step for solvers.
+            # np.einsum may use less memory but the following, using BLAS matrix
+            # multiplication (gemm), is by far faster.
             WX = hess_pointwise[:, None] * X
             hess[:n_features, :n_features] = np.dot(X.T, WX)
         # flattened view on the array
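For readers skimming the diff, here is a minimal, self-contained sketch (not part of the commit; the array names and sizes are illustrative, not taken from _linear_loss.py) of the three formulations the comments contrast: the dense branch's row scaling followed by one BLAS gemm, the np.einsum equivalent, and a symmetry-exploiting BLAS syrk call in the spirit of the TODO. The syrk variant additionally assumes the pointwise Hessian weights are nonnegative, so that sqrt(W) is real.

import numpy as np
from scipy.linalg.blas import dsyrk

rng = np.random.default_rng(0)
n_samples, n_features = 10_000, 50
X = rng.standard_normal((n_samples, n_features))
W = rng.random(n_samples)  # pointwise Hessian weights; assumed nonnegative here

# Dense branch of the diff: scale the rows of X, then one BLAS gemm.
WX = W[:, None] * X
H_gemm = X.T @ WX

# Equivalent einsum formulation: avoids materializing WX but is usually slower.
H_einsum = np.einsum("ij,i,ik->jk", X, W, X)

# Symmetry-exploiting variant hinted at by the TODO: with A = sqrt(W) * X,
# X' diag(W) X = A' A, which BLAS syrk computes while filling only one
# triangle of the symmetric result.
A = np.sqrt(W)[:, None] * X
H_syrk = dsyrk(alpha=1.0, a=A, trans=1)          # upper triangle of A.T @ A
H_syrk = np.triu(H_syrk) + np.triu(H_syrk, 1).T  # symmetrize for comparison

assert np.allclose(H_gemm, H_einsum)
assert np.allclose(H_gemm, H_syrk)

The syrk route needs roughly half the floating-point operations of gemm because it computes only one triangle of the symmetric matrix; a dedicated Cython routine, as the TODO suggests, could exploit the same symmetry without the sqrt(W) restriction.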

0 commit comments
