Closed
Description
In its current version (master), the dict_learning function doesn't use the input parameter code_init at all: whatever is passed in is silently ignored.
Reproduce
import numpy as np
from sklearn.decomposition.dict_learning import dict_learning
from sklearn.utils import check_random_state
from sklearn.utils.testing import assert_array_equal

rng = check_random_state(0)
n_samples, n_features, n_components = 3, 4, 2
X = rng.randn(n_samples, n_features)
U_init = rng.randn(len(X), n_components)

# create garbage initialization for the codes
U_init_nans = np.zeros_like(U_init)
U_init_nans /= 0.

U1, V1, errors1 = dict_learning(X, n_components, .1, random_state=rng,
                                code_init=U_init, max_iter=1)
U2, V2, errors2 = dict_learning(X, n_components, .1, random_state=rng,
                                code_init=U_init_nans, max_iter=1)

for a, b in zip([U1, V1, errors1], [U2, V2, errors2]):
    # these things shouldn't be the same (to say the least),
    # yet every assertion passes because code_init is ignored
    assert_array_equal(a, b)
else:
    raise RuntimeError("This place is really weird!")
Fix
- if code_init is provided, perform a dictionary update before entering the main loop, so that the supplied codes actually influence the learned dictionary
Amongst other things, this issue will be addressed in PR #9036
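The proposed fix can be sketched as follows. This is a minimal illustration, not the actual patch in the PR: the helper name init_dictionary_from_code is hypothetical, and it assumes the initial dictionary is obtained from the user-supplied codes by a least-squares dictionary update (solving code_init @ dictionary ≈ X) before the alternating optimization starts.

```python
import numpy as np

def init_dictionary_from_code(X, code_init):
    # Hypothetical helper: derive the initial dictionary from the
    # user-supplied codes via a least-squares dictionary update, so
    # code_init actually affects the result before the main loop runs.
    # Solves code_init @ dictionary ~= X for dictionary,
    # which has shape (n_components, n_features).
    dictionary, *_ = np.linalg.lstsq(code_init, X, rcond=None)
    return dictionary

# toy data matching the shapes used in the reproduction script
rng = np.random.RandomState(0)
X = rng.randn(3, 4)            # (n_samples, n_features)
code_init = rng.randn(3, 2)    # (n_samples, n_components)

D = init_dictionary_from_code(X, code_init)
print(D.shape)  # (2, 4)
```

With such an initialization step in place, passing two different code_init values (as in the reproduction above) would yield different dictionaries, and the assertions would fail as expected.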