Hello,
I am trying to use the graphical lasso in torch, since my backend runs on GPU.
I am comparing my implementation with sklearn.covariance.graphical_lasso, but I find that my implementation reaches a lower (better) minimum, and that the two outputs differ noticeably in the support of the precision matrix.
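For reference, the objective my loss function minimizes is the penalized negative log-likelihood below, where the l1 term sums the absolute values of all entries of Theta (diagonal included), since that is what torch.norm(Theta, p=1) computes:

$$\min_{\Theta \succ 0}\; -\log\det\Theta + \operatorname{tr}(S\Theta) + \alpha\,\lVert\Theta\rVert_1$$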
Here is my implementation:
import torch
from sklearn.covariance import graphical_lasso as GL


def loss(S, Theta, alpha=0.1):
    # Penalized negative log-likelihood: -logdet(Theta) + tr(S @ Theta) + alpha * ||Theta||_1
    # (torch.norm(Theta, p=1) sums the absolute values of all entries, diagonal included)
    return -torch.logdet(Theta) + torch.trace(S @ Theta) + alpha * torch.norm(Theta, p=1)


def graphical_lasso_torch(S, alpha=0.1, max_iter=100, tol=1e-4):
    n = S.shape[0]
    # Optimize a factor C and use Theta = C @ C.T so the estimate stays symmetric
    C = torch.eye(n, requires_grad=True)
    optimizer = torch.optim.Rprop([C], lr=0.01, step_sizes=(1e-10, 50))
    for _ in range(max_iter):
        optimizer.zero_grad()
        Theta = C @ C.T
        loss_value = loss(S, Theta, alpha=alpha)
        loss_value.backward()
        optimizer.step()
        if loss_value.item() < tol:  # stop if the loss itself falls below tol
            break
    return Theta.detach()


torch.manual_seed(0)
n = 5
p = 10
S = torch.rand(n, p)
S = (S @ S.T) / 2  # build a symmetric positive definite "covariance"
S += torch.eye(n)
alpha = 1

Theta_torch = graphical_lasso_torch(S, alpha=alpha)
_, Theta_sk = GL(S.numpy(), alpha=alpha)  # graphical_lasso returns (covariance, precision)

print('loss torch:', loss(S, Theta_torch, alpha=alpha).item())
print('loss scikit-learn:', loss(S, torch.tensor(Theta_sk), alpha=alpha).item())
which outputs on my machine:
loss torch: 11.097545623779297
loss scikit-learn: 11.462298393249512
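To illustrate the difference in support, this is roughly how I compare the two sparsity patterns (the 1e-8 threshold is an arbitrary choice of mine; Theta_torch and Theta_sk are the variables from the script above):

support_torch = Theta_torch.abs() > 1e-8
support_sk = torch.tensor(Theta_sk).abs() > 1e-8
print('entries where the supports disagree:', (support_torch != support_sk).sum().item())
print(support_torch)
print(support_sk)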
Results are even stranger with larger values of alpha, for example 7. There is no warning when calling GL, so it seems to have converged, and using many more iterations (max_iter = 3000 with tol = 0) does not help.
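A way to double-check that scikit-learn really converged is to ask it for the cost trace and the number of iterations (assuming a scikit-learn version where graphical_lasso accepts the return_costs and return_n_iter flags):

# returns (covariance, precision, costs, n_iter); costs is a list of (objective, dual gap) pairs
cov_sk, prec_sk, costs, n_iter = GL(S.numpy(), alpha=alpha, return_costs=True, return_n_iter=True)
print('n_iter:', n_iter)
print('final (objective, dual gap):', costs[-1])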
Note that I use Theta = C@C.T to make sure the output matrix is symmetric.
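As a quick sanity check on that parameterization (not part of the script above), C @ C.T should be symmetric and positive semidefinite, which can be verified on the returned Theta_torch:

print(torch.allclose(Theta_torch, Theta_torch.T))       # symmetry
print(torch.linalg.eigvalsh(Theta_torch).min().item())  # smallest eigenvalue, should be >= 0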
Is there something I am missing here?