MAINT Copy latest version of file from upstream by DimitriPapadopoulos · Pull Request #21163 · scikit-learn/scikit-learn
Merged · 1 commit · Sep 27, 2021
68 changes: 30 additions & 38 deletions sklearn/externals/_lobpcg.py
@@ -1,5 +1,5 @@
"""
scikit-learn copy of scipy/sparse/linalg/eigen/lobpcg/lobpcg.py v1.3.0
scikit-learn copy of scipy/sparse/linalg/eigen/lobpcg/lobpcg.py v1.7.1
to be deleted after scipy 1.3.0 becomes a dependency in scikit-lean
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG).
@@ -10,41 +10,28 @@
Toward the Optimal Preconditioned Eigensolver: Locally Optimal
Block Preconditioned Conjugate Gradient Method.
SIAM Journal on Scientific Computing 23, no. 2,
pp. 517-541. http://dx.doi.org/10.1137/S1064827500366124
pp. 517-541. :doi:`10.1137/S1064827500366124`

.. [2] A. V. Knyazev, I. Lashuk, M. E. Argentati, and E. Ovchinnikov (2007),
Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX)
in hypre and PETSc. https://arxiv.org/abs/0705.2626
in hypre and PETSc. :arxiv:`0705.2626`

.. [3] A. V. Knyazev's C and MATLAB implementations:
https://bitbucket.org/joseroman/blopex
https://github.com/lobpcg/blopex
"""

from __future__ import division, print_function, absolute_import
import numpy as np
from scipy.linalg import (inv, eigh, cho_factor, cho_solve, cholesky, orth,
from scipy.linalg import (inv, eigh, cho_factor, cho_solve, cholesky,
LinAlgError)
from scipy.sparse.linalg import aslinearoperator
from numpy import block as bmat

__all__ = ['lobpcg']


def bmat(*args, **kwargs):
import warnings
with warnings.catch_warnings(record=True):
warnings.filterwarnings(
'ignore', '.*the matrix subclass is not the recommended way.*')
return np.bmat(*args, **kwargs)


def _save(ar, fileName):
# Used only when verbosity level > 10.
np.savetxt(fileName, ar)


def _report_nonhermitian(M, name):
"""
Report if `M` is not a hermitian matrix given its type.
Report if `M` is not a Hermitian matrix given its type.
"""
from scipy.linalg import norm
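For context on the import change earlier in this hunk: the file now imports `numpy.block` under the name `bmat` instead of keeping a wrapper around the deprecated `numpy.bmat`, which returns the `np.matrix` subclass and triggered the warning the removed wrapper had to silence. A minimal sketch of the replacement, independent of this file:

```python
import numpy as np

A = np.eye(2)
Z = np.zeros((2, 2))

# np.block assembles the 2x2 block layout into a plain ndarray,
# so no np.matrix subclass (and no deprecation-warning filter) is involved.
M = np.block([[A, Z],
              [Z, A]])
print(type(M), M.shape)  # <class 'numpy.ndarray'> (4, 4)
```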

@@ -118,7 +105,7 @@ def _b_orthonormalize(B, blockVectorV, blockVectorBV=None, retInvR=False):
else:
blockVectorBV = None
except LinAlgError:
# raise ValueError('Cholesky has failed')
#raise ValueError('Cholesky has failed')
blockVectorV = None
blockVectorBV = None
VBV = None
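This hunk touches `_b_orthonormalize`, which makes a block of vectors orthonormal in the B-inner product and falls back to returning `None` values when the Cholesky factorization fails. A simplified sketch of that idea (not the file's exact code; `b_orthonormalize_sketch` is a hypothetical helper):

```python
import numpy as np
from scipy.linalg import cholesky, LinAlgError

def b_orthonormalize_sketch(B, V):
    # Gram matrix in the B-inner product: G = V^H B V
    BV = B @ V
    G = V.T.conj() @ BV
    try:
        R = cholesky(G)       # upper triangular, G = R^H R
    except LinAlgError:
        return None, None     # mirrors upstream: signal failure to the caller
    Rinv = np.linalg.inv(R)
    # (V R^{-1})^H B (V R^{-1}) = I, i.e. the columns are B-orthonormal
    return V @ Rinv, BV @ Rinv
```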
@@ -142,7 +129,7 @@ def _get_indx(_lambda, num, largest):

def lobpcg(A, X,
B=None, M=None, Y=None,
tol=None, maxiter=20,
tol=None, maxiter=None,
largest=True, verbosityLevel=0,
retLambdaHistory=False, retResidualNormsHistory=False):
"""Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)
@@ -172,7 +159,7 @@ def lobpcg(A, X,
Solver tolerance (stopping criterion).
The default is ``tol=n*sqrt(eps)``.
maxiter : int, optional
Maximum number of iterations. The default is ``maxiter=min(n, 20)``.
Maximum number of iterations. The default is ``maxiter = 20``.
largest : bool, optional
When True, solve for the largest eigenvalues, otherwise the smallest.
verbosityLevel : int, optional
@@ -213,8 +200,7 @@ def lobpcg(A, X,
It is not that ``n`` should be large for the LOBPCG to work, but rather the
ratio ``n / m`` should be large. It you call LOBPCG with ``m=1``
and ``n=10``, it works though ``n`` is small. The method is intended
for extremely large ``n / m``, see e.g., reference [28] in
https://arxiv.org/abs/0705.2626
for extremely large ``n / m`` [4]_.

The convergence speed depends basically on two factors:

@@ -234,15 +220,21 @@
Toward the Optimal Preconditioned Eigensolver: Locally Optimal
Block Preconditioned Conjugate Gradient Method.
SIAM Journal on Scientific Computing 23, no. 2,
pp. 517-541. http://dx.doi.org/10.1137/S1064827500366124
pp. 517-541. :doi:`10.1137/S1064827500366124`

.. [2] A. V. Knyazev, I. Lashuk, M. E. Argentati, and E. Ovchinnikov
(2007), Block Locally Optimal Preconditioned Eigenvalue Xolvers
(BLOPEX) in hypre and PETSc. https://arxiv.org/abs/0705.2626
(BLOPEX) in hypre and PETSc. :arxiv:`0705.2626`

.. [3] A. V. Knyazev's C and MATLAB implementations:
https://bitbucket.org/joseroman/blopex

.. [4] S. Yamada, T. Imamura, T. Kano, and M. Machida (2006),
High-performance computing for exact numerical approaches to
quantum many-body problems on the earth simulator. In Proceedings
of the 2006 ACM/IEEE Conference on Supercomputing.
:doi:`10.1145/1188455.1188504`

Examples
--------

@@ -270,7 +262,8 @@ def lobpcg(A, X,
Initial guess for eigenvectors, should have linearly independent
columns. Column dimension = number of requested eigenvalues.

>>> X = np.random.rand(n, 3)
>>> rng = np.random.default_rng()
>>> X = rng.random((n, 3))

Preconditioner in the inverse of A in this example:
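The preconditioner lines are truncated in this hunk view. Pieced together, a self-contained run looks roughly like the sketch below. It calls SciPy's public `lobpcg` (the vendored `sklearn.externals._lobpcg.lobpcg` exposes the same signature), and the tolerance and iteration counts are illustrative only:

```python
import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import LinearOperator, lobpcg

n = 100
vals = np.arange(1, n + 1, dtype=float)
A = spdiags(vals, 0, n, n)        # diagonal test matrix, eigenvalues 1..n

rng = np.random.default_rng(0)
X = rng.random((n, 3))            # initial block: three requested eigenpairs

# Preconditioner: the inverse of A, applied as a LinearOperator.
invA = spdiags([1.0 / vals], 0, n, n)
def precond(x):
    return invA @ x
M = LinearOperator(matvec=precond, matmat=precond, shape=(n, n), dtype=float)

eigenvalues, eigenvectors = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=80)
print(eigenvalues)                # approximately [1., 2., 3.]
```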

@@ -302,7 +295,8 @@ def lobpcg(A, X,
blockVectorX = X
blockVectorY = Y
residualTolerance = tol
maxIterations = maxiter
if maxiter is None:
maxiter = 20

if blockVectorY is not None:
sizeY = blockVectorY.shape[1]
@@ -429,7 +423,7 @@ def lobpcg(A, X,
iterationNumber = -1
restart = True
explicitGramFlag = False
while iterationNumber < maxIterations:
while iterationNumber < maxiter:
iterationNumber += 1
if verbosityLevel > 0:
print('iteration %d' % iterationNumber)
@@ -487,15 +481,13 @@ def lobpcg(A, X,
##
# B-orthogonalize the preconditioned residuals to X.
if B is not None:
activeBlockVectorR = activeBlockVectorR - \
np.matmul(blockVectorX,
np.matmul(blockVectorBX.T.conj(),
activeBlockVectorR))
activeBlockVectorR = activeBlockVectorR - np.matmul(blockVectorX,
np.matmul(blockVectorBX.T.conj(),
activeBlockVectorR))
else:
activeBlockVectorR = activeBlockVectorR - \
np.matmul(blockVectorX,
np.matmul(blockVectorX.T.conj(),
activeBlockVectorR))
activeBlockVectorR = activeBlockVectorR - np.matmul(blockVectorX,
np.matmul(blockVectorX.T.conj(),
activeBlockVectorR))
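The reformatted lines above implement one projection step of the iteration: the preconditioned residual block is made B-orthogonal to the current eigenvector block before it is orthonormalized. A minimal sketch of that operation (the helper name is hypothetical; `X` is assumed B-orthonormal, i.e. X^H B X = I):

```python
import numpy as np

def b_project_out_sketch(R, X, BX):
    # Remove from the residual block R its components along span(X)
    # in the B-inner product:  R <- R - X (BX^H R).
    # When B is None, BX is simply X (the ordinary inner product).
    return R - X @ (BX.T.conj() @ R)
```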

##
# B-orthonormalize the preconditioned residuals.