[MRG+1] Make return_std faster in Gaussian Processes #9236
minghui-liu wants to merge 9 commits into scikit-learn:master
Conversation
flake8 failures. Please mark as MRG when fixed, unless you feel this requires more tests.
sklearn/gaussian_process/gpr.py
Outdated
    L_inv = solve_triangular(self.L_.T, np.eye(self.L_.shape[0]))
    K_inv = L_inv.dot(L_inv.T)
    # cache result of K_inv computation
    if not hasattr(self, 'K_inv_'):
If we're going to memoize this, we should not be storing a new public attribute at predict time. I'd be more comfortable if this were a private attribute, i.e. _K_inv.
LGTM, thanks!
    # decomposition L and its inverse L_inv
    L_inv = solve_triangular(self.L_.T,
                             np.eye(self.L_.shape[0]))
    self._K_inv = L_inv.dot(L_inv.T)
self._K_inv needs to be deleted if self.L_ is changed during a second fit
Hi. Thank you. How do I test if self.L_ has changed?
self.L_ is changed only in this line, so you should add, for example, self._K_inv = None just after it, and change the condition to if self._K_inv is None.
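A minimal sketch of the suggested pattern, with an illustrative class name and a simplified fit signature (not scikit-learn's actual code):

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    class GPRCacheSketch:
        # Illustrative only: fit-time reset plus lazy computation
        # at predict time, as suggested in the review.
        def fit(self, K):
            # Lower-triangular Cholesky factor of the training kernel.
            self.L_ = cholesky(K, lower=True)
            # Invalidate the cache whenever self.L_ changes.
            self._K_inv = None
            return self

        def variance_reduction(self, K_trans):
            # Compute K^-1 from L_ once, then reuse it on later calls.
            if self._K_inv is None:
                L_inv = solve_triangular(self.L_.T,
                                         np.eye(self.L_.shape[0]))
                self._K_inv = L_inv.dot(L_inv.T)
            return np.einsum("ij,ij->i",
                             np.dot(K_trans, self._K_inv), K_trans)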
Apologies for the delay. I've made the changes.
Thanks for the update. Could you also add a test? After that it should be ready for merging.
Hi. I have added a new test.
Maybe a better test would be to check that predicting with return_std=True after a second fit gives the same results as a freshly fitted estimator. Needs a whatsnew entry?
@amueller I have updated my test. Could you please have a look and let me know if it is right?
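A hedged sketch of such a test (the test name, data shapes, and the use of a freshly fitted estimator as the reference are illustrative assumptions, not necessarily the test the PR adds):

    # Refitting must invalidate the cached inverse so that
    # return_std stays correct after a second fit.
    import numpy as np
    from numpy.testing import assert_allclose
    from sklearn.gaussian_process import GaussianProcessRegressor

    def test_return_std_consistent_after_refit():
        rng = np.random.RandomState(0)
        X, y = rng.randn(10, 2), rng.randn(10)
        X2, y2 = rng.randn(10, 2), rng.randn(10)

        gpr = GaussianProcessRegressor().fit(X, y)
        gpr.predict(X, return_std=True)   # populates the cache
        gpr.fit(X2, y2)                   # must reset the cache
        _, std = gpr.predict(X2, return_std=True)

        # A freshly fitted estimator provides the reference values.
        gpr_fresh = GaussianProcessRegressor().fit(X2, y2)
        _, std_fresh = gpr_fresh.predict(X2, return_std=True)
        assert_allclose(std, std_fresh)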
sklearn/gaussian_process/gpr.py
Outdated
    y_var = self.kernel_.diag(X)
-   y_var -= np.einsum("ij,ij->i", np.dot(K_trans, K_inv), K_trans)
+   sum1 = np.dot(K_trans, self._K_inv).T
+   y_var -= np.einsum("ki,ik->k", K_trans, sum1)
Is there a reason you changed this? I thought this PR was only about caching self._K_inv. As @TomDLT mentioned in #9234 (comment), the einsum optimization was already done in #8591.
Reverted the change. Thank you @lesteve.
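For context, both the original einsum line and the sum1 variant compute only the diagonal of K_trans.dot(K_inv).dot(K_trans.T) without materializing the full matrix; a small self-contained check, with arbitrary example shapes:

    # Illustrative only: the einsum form equals the diagonal of the
    # full product but avoids the n_test x n_test intermediate.
    import numpy as np

    rng = np.random.RandomState(0)
    K_trans = rng.randn(5, 3)
    A = rng.randn(3, 3)
    K_inv = A.dot(A.T)  # any symmetric positive semi-definite matrix

    full = np.diag(K_trans.dot(K_inv).dot(K_trans.T))
    fast = np.einsum("ij,ij->i", K_trans.dot(K_inv), K_trans)
    assert np.allclose(full, fast)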
Force-pushed from 0169d2a to d698126.
flake8 is failing.
fix white line
Force-pushed from ea05b00 to 2a37685.
Fixed flake8.
@minghui-liu Can you add an entry in doc/whats_new.rst?
Rebased, added the whats_new entry, and merged as 6e01fef.
Reference Issue
Issue #9234
What does this implement/fix? Explain your changes.
Cache the computed K_inv so that repeated calls to predict with return_std=True do not recompute it.
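A rough, illustrative way to observe the speedup (this snippet is not part of the PR; the sizes and iteration count are arbitrary):

    # After this change, repeated predict(..., return_std=True)
    # calls reuse the cached inverse instead of recomputing it.
    import time
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.RandomState(0)
    X, y = rng.randn(200, 3), rng.randn(200)
    gpr = GaussianProcessRegressor().fit(X, y)

    gpr.predict(X, return_std=True)      # first call pays for K_inv
    start = time.time()
    for _ in range(50):
        gpr.predict(X, return_std=True)  # later calls reuse the cache
    print("50 cached calls: %.3fs" % (time.time() - start))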
Any other comments?
No.