ENH Adds Array API support to LinearDiscriminantAnalysis #22554
Conversation
Thank you, @thomasjpfan.
I can get a similar ×2 speed-up ratio on a machine with one NVIDIA Quadro RTX 6000 using the provided notebook.
This PR, with the Array API dispatch:
CPU times: user 12 s, sys: 858 ms, total: 12.8 s
Wall time: 14 s
This PR, without the Array API dispatch:
CPU times: user 1min 20s, sys: 1min 6s, total: 2min 27s
Wall time: 23.3 s
To me, this PR is clear and does not introduce too much complexity.
Do you think we could (if it's worth it) come up with adaptors for the few API mismatches (e.g. `add.at`)?
self.intercept_ = -0.5 * np.sum(coef**2, axis=1) + np.log(self.priors_)
self.coef_ = np.dot(coef, self.scalings_.T)
self.intercept_ -= np.dot(self.xbar_, self.coef_.T)
rank = xp.sum(xp.astype(S > self.tol * S[0], xp.int32))
Side-note: What is the intent of specifying `xp.int32`? Does it make the sum faster?
The Array API is very strict when it comes to bools. `S > self.tol` returns a boolean array which cannot be summed. I suspect this is because there are no type promotion rules for bools.
Yes, that sounds right. There are also no int-float mixed type casting rules for arrays, because erroring out is a valid design choice and something at least TensorFlow does (PyTorch also limits what it wants to allow without being explicit). There could perhaps be a rule for Python `bool` to Python `int`, but there's probably little appetite for array dtype cross-kind casting rules.
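A minimal sketch of the behavior described above, assuming NumPy >= 1.22 with the experimental `numpy.array_api` namespace (the variable values here are illustrative, not taken from the PR):

```python
import numpy.array_api as xp

S = xp.asarray([3.0, 1.0, 1e-12])
mask = S > 1e-6 * S[0]  # boolean array
# xp.sum(mask) raises TypeError in this strict namespace because sum is only
# defined for numeric dtypes, hence the explicit cast before summing:
rank = xp.sum(xp.astype(mask, xp.int32))
print(int(rank))  # 2
```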
Are your timings reversed? It looks like the Array API makes it slower.
You are right, I have just corrected it.
Still familiarizing myself with the Array API standard and the NumPy implementation, but here is a first pass of comments for this PR.
pytest.importorskip("numpy", minversion="1.22", reason="Requires Array API")

X_np = numpy.asarray([[1, 2, 3]])
xp = pytest.importorskip("numpy.array_api")
We can simplify the two `importorskip` calls into a single one, right? NumPy 1.21 did not expose a `numpy.array_api` submodule (I checked).
Suggested change:

    pytest.importorskip("numpy", minversion="1.22", reason="Requires Array API")
    X_np = numpy.asarray([[1, 2, 3]])
    xp = pytest.importorskip("numpy.array_api")

becomes:

    # This test requires NumPy 1.22 or later for its implementation of the
    # Array API specification:
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # ignore experimental warning
        xp = pytest.importorskip("numpy.array_api")
    X_np = numpy.asarray([[1, 2, 3]])
sklearn/utils/_array_api.py
        return getattr(self._namespace, name)

    def astype(self, x, dtype, *, copy=True, casting="unsafe"):
        # support casting for NumPy
Suggested change:

    # support casting for NumPy

becomes:

    # Extend Array API to support `casting` for NumPy containers
Is there any issue to track the support of custom casting in the spec?
I cannot find a discussion on `astype` & `casting`. Maybe @rgommers has a link?
I suspect it's because other libraries do not really implement casting for `astype`. For example, `cupy.ndarray.astype` does not support casting.
For this specific PR, we are using `casting` in `check_array` (which was added in #14872):

scikit-learn/sklearn/utils/validation.py, line 840 in cd5385e:

    array = array.astype(dtype, casting="unsafe", copy=False)
I think we do not need to set `casting` here since the default is "unsafe". For reference, the casting behavior of nans and infs is not specified in the Array API spec.

For example:
import numpy as np

np_float_arr = np.asarray([1, 2, np.nan], dtype=np.float32)
print(np_float_arr.astype(np.int32))
# On an x86 machine:
# [ 1 2 -2147483648]
# But on an M1 mac:
# [1, 2, 0]

# CuPy casts to zeros:
import cupy

cp_float_arr = cupy.asarray([1, 2, cupy.nan], dtype=cupy.float32)
print(cp_float_arr.astype(cupy.int32))
# [1, 2, 0]
> For reference, the casting behavior of nans and infs is not specified in the Array API spec

That seems like something to specify - even if just to say it's undefined behavior. Which it is, I think, as evidenced by the inconsistent NumPy results here.
> I suspect it's because other libraries do not really implement casting for `astype`.

That is typically the reason. The PR that added `astype` lists all supported keywords across libraries, and only NumPy and Dask have `casting`.
A question is whether the concept of casting modes is useful enough to include. I'm not sure to be honest (but I didn't think about it very hard yet). The default in `numpy.ndarray.astype` is "unsafe" anyway, which is probably the only reasonable choice - because code like `astype(x_f64, float32)` shouldn't raise as it is very explicit.
Out of curiosity I did a quick survey of our current use of the `casting` kwarg.
git grep "casting="
sklearn/preprocessing/_polynomial.py: casting="no",
sklearn/tree/_tree.pyx: return n_classes.astype(expected_dtype, casting="same_kind")
sklearn/tree/_tree.pyx: return value_ndarray.astype(expected_dtype, casting='equiv')
sklearn/tree/_tree.pyx: return node_ndarray.astype(expected_dtype, casting="same_kind")
sklearn/tree/tests/test_tree.py: return node_ndarray.astype(new_dtype, casting="same_kind")
sklearn/tree/tests/test_tree.py: return node_ndarray.astype(new_dtype, casting="same_kind")
sklearn/tree/tests/test_tree.py: new_n_classes = n_classes.astype(new_dtype, casting="same_kind")
sklearn/utils/validation.py: # inf (numpy#14412). We cannot use casting='safe' because
sklearn/utils/validation.py: array = array.astype(dtype, casting="unsafe", copy=False)
We can ignore the tree files because they are written in Cython and will never benefit from Array API compat.
So we just have sklearn/preprocessing/_polynomial.py with casting="no" and sklearn/utils/validation.py with casting="unsafe".
So it's probably indeed not worth exposing the `casting` argument in our `xp` wrapper; we can always use "unsafe".
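A minimal sketch of what the wrapper behavior discussed here could look like (the class name loosely follows the PR's `_NumPyApiWrapper`, but this is an illustration, not the actual implementation):

```python
import numpy

class _NumPyApiWrapper:
    """Expose an Array-API-style namespace on top of plain NumPy."""

    def __getattr__(self, name):
        # Fall through to the regular NumPy namespace for everything else.
        return getattr(numpy, name)

    def astype(self, x, dtype, *, copy=True):
        # NumPy's default casting is already "unsafe", so no `casting`
        # keyword is exposed, as suggested in the discussion above.
        return x.astype(dtype, copy=copy)
```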
You report a ~2x speed-up instead of the 14x speed-up in @thomasjpfan's notebook though. I am not sure if this is expected or not.
It could be because of different hardware. I ran my benchmarks using an NVIDIA 3090 and a 5950X (16-core, 32-thread) CPU. It was also in a workstation environment where I can supply the GPU with 400 watts of power.
Exactly, I made a mistake by comparing based on "total". Updated.
This doesn't look too complicated, better than what I imagined.
I do think we need to test against something which is not numpy, and figure out the performance implications.
sklearn/utils/extmath.py
    X = xp.exp(X)
else:
    np.exp(X, out=X)
One of the things that worries me is the performance implications of not having in-place operations.
Yeah, there is not a great way around this without extending our own version of the Array API further and special-casing NumPy. The reasoning for not having `out=` is in https://github.com/scikit-learn/scikit-learn/pull/22554/files#r825086171
sklearn/discriminant_analysis.py
- X = np.sqrt(fac) * (Xc / std)
+ X = math.sqrt(fac) * (Xc / std)
Are they strictly equivalent? Shouldn't this be `xp.sqrt` instead?
`xp.sqrt` does not work on Python scalars such as `fac`. We would need to call `xp.asarray(fac)` before calling `xp.sqrt`.
Oh I see. It makes sense, but it also means a developer would need to know when to use `xp.` and when `math.`, which I guess can be confusing. Should the Array API actually handle this?
> but it also means a developer would need to know when to use `xp.` and when `math.`, which I guess can be confusing. Should the Array API actually handle this?

From my understanding, this was by design for the Array API. Only NumPy has the concept of "array scalars", while all other array libraries use a 0-d array. (`np.sqrt(python_scalar)` returns a NumPy scalar, while `math.sqrt(python_scalar)` is a Python scalar.)

In our case, the Array API forces us to think about using `math` for Python scalars. From a developer point of view, I agree it is one more thing to think about, but I think it's better to be more strict about these types.
A side benefit is that the Python scalar version is faster:
%%timeit
_ = np.log(431.456)
# 315 ns ± 5.46 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%%timeit
_ = math.log(431.456)
# 68.5 ns ± 0.519 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
which can make a difference for code that runs in loops.
Looking at this again, I think it's better to do `xp.asarray()` on the scalar, so we can use `xp.sqrt` on it.
lolol, why? I was convinced by your last comment.
For me, the pros and cons of `math` vs `xp.asarray` on Python scalars are balanced. The argument for using `xp.asarray` is that it forces us to be in "array land" and not need to think about any Python scalar + Array interactions. Although, the Array API spec does state that `python_scalar * array` is the same as `xp.asarray(python_scalar, dtype=dtype_of_array) * array`.
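A minimal sketch contrasting the two options discussed above (assuming NumPy's experimental `numpy.array_api` namespace; `fac`, `Xc`, and `std` follow the PR snippet but the values here are made up):

```python
import math
import numpy.array_api as xp

fac = 0.5                                  # a Python scalar
Xc = xp.asarray([[1.0, 2.0], [3.0, 4.0]])
std = xp.asarray([1.0, 2.0])

# Option 1: keep the scalar in "Python land" and use math.sqrt
X1 = math.sqrt(fac) * (Xc / std)

# Option 2: move the scalar into "array land" first and use xp.sqrt
X2 = xp.sqrt(xp.asarray(fac)) * (Xc / std)
```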
def test_lda_array_api(X, y):
    """Check that the array_api Array gives the same results as ndarrays."""
    pytest.importorskip("numpy", minversion="1.22", reason="Requires Array API")
    xp = pytest.importorskip("numpy.array_api")
We kind of still have numpy array_api-specific code. I think we need something here which is not numpy's Array API implementation to test against.
We will be able to do this with pytorch in CPU mode once pytorch's compliance has improved.
Progress is tracked here:
The other mature enough candidate is CuPy, but this one requires maintaining a GPU CI runner. I would rather start with numpy only on our CI in the short term.
But we could improve this test to make it work with CuPy with a non-default parametrization:
@pytest.mark.parametrize("array_api_namespace", ["numpy.array_api", "cupy.array_api"])
@pytest.mark.parametrize("X, y", [(X, y), (X, y3)])
def test_lda_array_api(X, y, array_api_namespace):
    """Check that the array_api Array gives the same results as ndarrays."""
    xp = pytest.importorskip(array_api_namespace)
    ...

and this way it would be easy to run those compliance tests manually on a CUDA-enabled host.
I think it would make sense for the Array API effort to include a reference implementation. It can use numpy under the hood, but it should be a minimal implementation, and it'll be different from numpy's own implementation since numpy has certain considerations which make its implementation not minimal. Then all libraries could test against that implementation instead of some random other library's implementation.
There is an Array API compliance test suite here: https://github.com/data-apis/array-api-tests which tests that an Array API implementation follows the spec. For a subset of operators, the test suite also tests for correctness.

I see `numpy.array_api` as the minimal implementation backed by NumPy. The idea around testing with another library's implementation is that the numerical operations can return different results depending on hardware. For us to trust that our algorithms are correct using CuPy's or PyTorch's Array API implementation, we would still need to test it ourselves.
sum_prob = np.sum(X, axis=1).reshape((-1, 1))

if is_array_api:
    # array_api does not have `out=`
Is there any plan or discussion for allowing this? Maybe as an optional API extension?
No there isn't. The reason is twofold:

1. `out=` doesn't make sense for all libraries - for example, JAX and TensorFlow have immutable data structures.
2. Even for libraries that do have mutable arrays, `out=` is not a very nice API pattern. It lets users do manual optimizations that a compiler may be able to do better. There was also another argument and an alternative design presented in "Preliminary discussion: Standardise support for reuse of buffers" (data-apis/consortium-feedback#5).

And maybe (3), IIRC NumPy and PyTorch semantics for `out=` aren't identical.
Thanks for the feedback. I guess we will have to keep on using those `if is_array_api` conditions to protect our use of numpy's `out=` arguments for now.

I don't necessarily see that as a blocker for the adoption of the Array API in scikit-learn, but it does make the code look uglier... I don't really see a potential long-term fix for this.
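A small sketch of the kind of branching referred to here (illustrative only; the function name is made up, while the `is_array_api` flag follows the PR's snippets):

```python
import numpy as np

def _exp_inplace_when_possible(X, xp, is_array_api):
    # Array API namespaces have no `out=`, so we accept the extra allocation;
    # with plain NumPy we keep the in-place fast path.
    if is_array_api:
        return xp.exp(X)
    np.exp(X, out=X)
    return X
```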
> It lets users do manual optimizations that a compiler may be able to do better.

@rgommers I'm quite confused. Does Python actually do such optimizations?
Here's what I get on my machine:

In [19]: def f2():
    ...:     a = np.random.rand(10000, 100000)
    ...:     a = np.exp(a)
    ...:     return a
    ...:

In [20]: def f1():
    ...:     a = np.random.rand(10000, 100000)
    ...:     np.exp(a, out=a)
    ...:     return a
    ...:

In [23]: %timeit f1()
15.6 s ± 2.53 s per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [24]: %timeit f2()
[1] 210906 killed ipython
so the difference is quite significant (one of them gets killed 😁 )
> Does Python actually do such optimizations?

No it doesn't - but Python doesn't have a compiler? I meant a JIT or AOT compiler like JAX's JIT or PyTorch's TorchScript. It is not strange to say that in principle `X = xp.exp(X)` can be rewritten to an in-place update (i.e., `exp(X, out=X)`) by a compiler transformation if and only if the memory backing `X` isn't used elsewhere, right?

This code looks fishy by the way for `copy=False`; the docs say nothing about in-place updating of the input array, which I'd consider a bug in library code if it were public. And to avoid this footgun, it defaults to `copy=True`, which is always slow?
> (one of them gets killed 😁)

Ouch, that doesn't look like the expected result.
> No it doesn't - but Python doesn't have a compiler? I meant a JIT or AOT compiler like JAX's JIT or PyTorch's TorchScript. It is not strange to say that in principle `X = xp.exp(X)` can be rewritten to an in-place update (i.e., `exp(X, out=X)`) by a compiler transformation if and only if the memory backing `X` isn't used elsewhere, right?

Ok, now that makes sense. But here's the issue. One benefit of using the Array API is that developers can learn that instead of numpy's API and develop their estimators. It would also make it really easy to support the Array API even if they don't explicitly do so. But that raises the issue that in order to write efficient numpy code, one needs to use numpy, and for all other backends use the Array API, like here. I think in an ideal world, we wouldn't want to have these separate branches for the two APIs, would we?
> I think in an ideal world, we wouldn't want to have these separate branches for the two APIs, would we?

I think we indeed want to have separate branches in as few places as possible. A lot of the places that are being identified are functions that can be added to either the Array API standard (e.g., `take`, `moveaxis`) or NumPy (e.g., `unique_*`). There are a few things that will remain though, because there's an inherent tension between portability and performance. `out=` and `order=` are the two that come to mind here. And some forms of (advanced) indexing.

We reviewed the use of `out=` and `order=`, and they're used less commonly than one would expect based on the amount of discussion around them. The amount of branching that will remain in the end is quite limited, and seems like a reasonable price to pay.
Cool, I'm happy then.
This PR now has a private helper (`_asarray_with_order`) to explicitly deal with the case where we want to enforce a specific order for numpy (e.g. when used in conjunction with Cython code) vs an Array API passthrough.
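A minimal sketch of what such a helper can look like, assuming NumPy-backed namespaces are special-cased (the actual `_asarray_with_order` in the PR may differ):

```python
import numpy

def _asarray_with_order(array, dtype=None, order=None, xp=None):
    # Sketch only: the real helper in sklearn/utils/_array_api.py may differ.
    if xp is None:
        xp = numpy
    if xp.__name__ in {"numpy", "numpy.array_api"}:
        # NumPy-backed namespaces understand memory layout, so enforce
        # `order` through the main numpy namespace and re-wrap the result.
        array = numpy.asarray(array, dtype=dtype, order=order)
        return xp.asarray(array)
    # Other Array API namespaces have no notion of C/F order: pass through.
    return xp.asarray(array, dtype=dtype)
```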
Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org>
I added a note in the user guide about CuPy and NumPy here: those are the only two libraries that fully adopted the Array API specification. This mostly has to do with data-dependent output shapes. In scikit-learn's case, we use …
I can confirm the speedup that has been reported, using the notebook provided by @thomasjpfan, a local desktop with a 1070 Ti GPU (and a low-end CPU), and the latest CuPy docker image (I reduced the size of the data from 500000 to 250000 to fit in the 8 GB VRAM of the GPU), and it passes the tests for LDA with …
Thank you for this work, @thomasjpfan!

I have mainly two remarks:

- Should we mention somewhere in `LinearDiscriminantAnalysis` that the Array API is supported when `solver="svd"`?
- What is the behavior of `LinearDiscriminantAnalysis` if `solver != "svd"` and it is given a tensor whose namespace is not NumPy but supports the Array API? Should it fall back to `solver="svd"` in this case?

Also, here are some minor suggestions.
import scipy.linalg
from scipy import linalg
Now that `xp.linalg` can be used, should we use the fully qualified name for `scipy.linalg` to be explicit and remove any potential ambiguity?
For this PR, I only used the full name where we had to select between `xp.linalg` and `scipy.linalg`, which I find more explicit.

As for the other parts of the file, I would prefer to fully qualify `linalg` because I often need to look at the top of the file to see if it's `numpy.linalg` or `scipy.linalg`. I think this discussion and decision can be done independently of this PR.
Co-authored-by: Julien Jerphanion <git@jjerphan.xyz>
Yes, that makes sense. I added it in …
I do not think we should fall back to `solver="svd"` in that case.
LGTM! Thank you, @thomasjpfan.
if is_array_api:
    svd = xp.linalg.svd
else:
    svd = scipy.linalg.svd
I'm new to the Array API and such, so there is a lot I don't know. But I'm interested to learn because I'd like to contribute to efforts like this.

When reading this PR to learn more, I was expecting to mostly see `np.` replaced with `xp.` and very little or no `if` statements that depend on what kind of array is being processed.

Why not use `svd = xp.linalg.svd` for numpy and "array API" inputs? Maybe because for `a = np.array(...)`, `a` doesn't have an `__array_namespace__`? Which made me wonder why it doesn't have that attribute. Now it feels like "one question leading to another and to another, ..." - so my ultimate question: what is a good place to talk about these things? (It feels like this PR is somehow the wrong place, but I don't know a better one.)
> [...] what is a good place to talk about these things? (It feels like this PR is somehow the wrong place but I don't know a better one)

+1

The official organisation leading the Array API standard is data-apis. data-apis/array-api is the official repository versioning the static website of the standard, but apart from its issue tracker, I do not think there is a dedicated forum.

I think @scientific-python might have been a suitable organisation to host discussions (and the standard?), but it postdates the standard (IIRC) and probably the direction of both initiatives is different. (Under @scientific-python, the Array API likely would have been a SPEC IMO.)
I think https://github.com/data-apis/array-api/issues is a perfectly reasonable place to bring up these kinds of usability or "why was this choice made" discussions - and there are other such conversations there already. There are folks from many libraries participating there, so it's probably preferred over the issue tracker or mailing list of a single project. Currently issues are fine; maybe at some point the volume of such questions will grow enough that using a separate Discourse is helpful, but that is not the case today.

> I think @scientific-python might have been a suitable organisation to host discussions (and the standard?), but it postdates the standard (IIRC) and probably the direction of both initiatives is different. (Under @scientific-python, the Array API likely would have been a SPEC IMO.)

They're complementary. I don't think the direction is different, just the scopes. The Data API Consortium is squarely focused on array and dataframe APIs and interoperability features/protocols around those. Scientific Python has a "best practices for Python projects" scope in general - and not even limited to technical topics, also best practices around communities.

The whole API standard wouldn't have been a single SPEC; the scope and level of detail in that standard is way too much for a single SPEC. See a SPEC more like a PEP or a NEP, and the Array API standard as a whole open source project.
> Why not use `svd = xp.linalg.svd` for numpy and "array API" inputs?

I think the point here is that in this case, `scipy.linalg.svd` is preferred over `numpy.linalg.svd` (it may be faster), and `xp.linalg.svd` would select the `numpy` function.
I'll also note that indeed `numpy.ndarray` does not yet have an `__array_namespace__` method, as you noticed. The reason for that is that the main `numpy` namespace is not (yet) standard-compliant. Getting there takes time, due to backwards compatibility reasons. The new `numpy.array_api` namespace could be made compliant straight away. Over time, `numpy` aims to evolve its main namespace towards full compliance. The NumPy docs have a table with differences.
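A tiny illustration of the dispatch point this implies (a sketch of the kind of namespace-lookup helper the PR introduces; the real one in sklearn/utils/_array_api.py wraps the namespaces and handles more cases):

```python
import numpy as np

def get_namespace(*arrays):
    # Arrays from Array-API-compliant libraries expose __array_namespace__;
    # plain numpy.ndarray currently does not, so we fall back to NumPy.
    for x in arrays:
        if hasattr(x, "__array_namespace__"):
            return x.__array_namespace__(), True
    return np, False
```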
For this specific PR, is the scipy SVD really better than the numpy SVD? Do they use different LAPACK drivers by default? Based on the documentation, both seem to use `gesdd` by default:

- https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html
- https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.svd.html

If my analysis is wrong, and the choice of the LAPACK driver happens to be important (e.g. to trade off speed for numerical stability), shall we suggest an extension to the Array API spec to allow for driver hints to favor one SVD algorithm or the other and fall back to an implementation-specific default otherwise?
It's the same driver routine underneath indeed. So I think it should be the same - the differences are:

1. NumPy may have been built without a proper LAPACK dependency (using the f2c'd `lapack_lite` sources instead). This may still be fairly common when folks build from source.
2. NumPy uses the CLAPACK interface, while SciPy uses the Fortran interface (this shouldn't matter all that much).
I suspect that now fewer and fewer people compile numpy from source and instead use either wheels, conda packages or linux distro packages.
But let's be safe and keep the scipy exception in this PR for now.
@betatim would you like more time to review this PR? Otherwise I think we could merge it.
I have no opinion on merging or not. It will take me a while to learn enough to have an opinion, and I can read the code here or when it is in `main` :D So go ahead and merge.

> 1. NumPy may have been built without a proper LAPACK dependency (using the f2c'd lapack_lite sources instead).

Yes, I've seen this often.
Thank you for this work, @thomasjpfan!
- n_classes = len(self.classes_)
+ n_classes = self.classes_.shape[0]
It might be nice to allow `len` in the array spec to avoid needing this kind of change. Raised as an issue (data-apis/array-api#481).
This was on purpose and was discussed fairly extensively. The new code is much clearer than the old one; `len` on a >1-D array is an anti-pattern IMHO.
Reference Issues/PRs

Towards #22352

What does this implement/fix? Explain your changes.

This PR adds Array API support to `LinearDiscriminantAnalysis`. There is around a 14x runtime improvement when using the Array API with CuPy on GPU.

The overall design principle is to use the Array API specification as much as possible. In the short term, there will be an awkward transition as we need to support both NumPy and the Array API. In the long term, the most maintainable position for the code base is to only use the Array API specification.

I extended the Array API spec in `_ArrayAPIWrapper` where there is a feature we must have. In `_NumPyApiWrapper`, I added functions to the NumPy namespace to adopt the functions in the Array API spec.

Any other comments?

There is still the question of how to communicate the feature. For this PR, I only implemented it for `solver="svd"`.
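A hedged sketch of the intended usage (illustrative only; the data are made up, the PR's tests use NumPy's experimental `numpy.array_api` namespace, and depending on the scikit-learn version an explicit opt-in such as a config flag may be required):

```python
import numpy.array_api as xp
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Tiny made-up dataset; this PR only wires up the Array API code path for
# solver="svd".
X = xp.asarray([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.2, 0.9]])
y = xp.asarray([0, 0, 1, 1])

lda = LinearDiscriminantAnalysis(solver="svd")
lda.fit(X, y)          # fitted attributes stay in the input's namespace
pred = lda.predict(X)  # predictions are returned in the same namespace
```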