ENH: Future of npymath library · Issue #20880 · numpy/numpy · GitHub

Open
matthew-brett opened this issue Jan 23, 2022 · 71 comments

@matthew-brett
Contributor

Proposed new feature or change:

This is an issue to discuss the future of the npymath library in Numpy.

Numpy ships with a static library in core/lib named npymath.lib (Windows) or libnpymath.a (Unices).

It provides a suitable platform-specific implementation of various math routines.

Scipy links to npymath during its build process, in various places.

We (Scipy) have run into problems linking against Npymath, because of the combination of Numpy's use of the latest Visual Studio toolchain, and Scipy's use of the Mingw-w64 toolchain - discussed in MacPython/numpy-wheels#145 .

The issue that arises is that it is in general more difficult to link static libraries across toolchains, than it is to link against dynamic libraries (e.g. DLLs) or to recompile the sources. This issue is to discuss whether we should think of other solutions. Possible options are:

  • Leave things as they are.
  • Ship the C header and sources for Npymath, so external libraries and packages can recompile them.
  • Create a dynamic library from Npymath, and ship that, for linkage against external libraries.
  • Your option here.

This issue is to house the discussion of different options.

@rgommers
Member

Note that whether or not this is a priority is kinda coupled with the vc141 vs. newer MS compiler toolchain discussion in MacPython/numpy-wheels#145. For vc141 it seems we can work with libnpymath.a in a Mingw-w64 based SciPy build, whereas with vc142 things just seem broken and likely hard/fragile to repair.

We could do something proactively, but the urgency is a lot lower as long as there's no need to upgrade from vc141.

@carlkl
Member
carlkl commented Jan 25, 2022

my suggestion (Windows only) is as follows:

  • leave npymath.lib and npyrandom.lib as they are for numpy.
  • Numpy could create two additional shared binaries npymath.dll and npyrandom.dll from these static libraries.
  • during the Numpy installation, place these two DLLs in a common location where Scipy or other third-party packages can locate them both during the build process and during import.

This approach has the following benefits:

  • no need to change the build process of Numpy, as it is only an extension of the existing build process.
  • packages which depend on these libraries can still use the static libraries, but only if they use the same VS toolset as NumPy does.
  • packages which depend on these libraries but use a different toolset or a different compiler can link against the shared libraries instead.

For a test I have successfully built npymath.dll with VS link.exe. It only needs a .def file with all symbol names to be exported. A test to build scipy with this change will follow.
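For reference, a module-definition file of the kind described here could look roughly like this (a hedged fragment; module name and the symbols shown are just examples, and the real file would list every exported npy_* symbol):

```
; npymath.def - illustrative fragment only
LIBRARY npymath
EXPORTS
    npy_log2
    npy_cabs
    npy_half_to_float
```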

There is a disadvantage as well:

One has to consider the situation where someone installs a NumPy version without these additional DLLs and then a SciPy that needs them. Mitigations could be:

  • require a higher minimum NumPy version for SciPy builds on Windows to avoid this situation.
  • copy these DLLs to the Scipy wheel as well and load them on import.

@rgommers
Member

Ship the C header and sources for Npymath, so external libraries and packages can recompile them.

Not a terrible idea in principle, but it would probably be as much work as shipping a DLL and permanently increase build times for libraries using that C code.

Create a dynamic library from Npymath, and ship that, for linkage against external libraries.

This is probably best. I agree with @carlkl's suggestion mostly, with two tweaks:

  1. It should be done on all platforms and not just on Windows
  2. The install location isn't arbitrary, we should install to libdir (sysconfig.get_config_var('LIBDIR'))

build Scipy for windows with a different minimal Numpy version to avoid this situation.

This would be the better option - if you need a newer NumPy, then bump the minimum version.

@mattip
Member
mattip commented Apr 14, 2022

I came across this again when building NumPy from source when a wheel was not available. It would be nice to solve this for 1.23, since moving forward more people will hit this when they compile NumPy and SciPy from source with a modern Visual Studio.

One complication is that on windows, sysconfig.get_config_var('LIBDIR') does not exist. There is a DLL directory under sysconfig.get_config_var('base') (which is the same as sysconfig.get_path('base')) which would be appropriate. The import lib npymath.lib needed for linking currently lives in site-packages\numpy\core\lib, so it can probably stay there.

@mattip
Member
mattip commented Apr 15, 2022

I am becoming convinced that the DLLs directory in windows is not a good target. It is not added to the dll search path (see os.add_dll_directory); rather, it is added to sys.path. So any project linking to npymath would need to call `os.add_dll_directory()`, and there is no canonical way to discover what that directory is. In fact on vanilla python from python.org, no directory is added to the dll search path, so there is no reason to prefer the DLLs directory over any random directory in site-packages. Are there any best-practices around this subject?

@rgommers
Member

I came across this again when building NumPy from source when a wheel was not available.

How exactly? This issue was opened when we discovered it with the mingw-w64 compilers. What's your exact config here, and how does this turn up when building numpy only rather than linking libnpymath in another project?

One complication is that on windows, sysconfig.get_config_var('LIBDIR') does not exist.

I'm not sure what sysconfig.get_config_var('LIBDIR') is supposed to do, so ignore that part of my suggestion. What I meant is: the lib directory where the libpython for the Python version you are using is located. You have to link that too, so it should be discoverable somehow.

@mattip
Member
mattip commented Apr 15, 2022

How exactly?

See this to build Scipy on PyPy using meson: it builds NumPy from source on the CI machine.

You have to link that too

Linking is the easy part. The harder problem is to find the DLL at runtime. Like LD_LIBRARY_PATH on Linux, windows has an internal list of directories it will search. For the NumPy/SciPy OpenBLAS dll, we call os.add_dll_directory in __config__.py. If we ship npmath.dll, consumers would need to also add something like this to their code.

if sys.platform == 'win32' and os.path.isdir(extra_dll_dir):
    if sys.version_info >= (3, 8):
        os.add_dll_directory(extra_dll_dir)
    else:
        os.environ.setdefault('PATH', '')
        os.environ['PATH'] += os.pathsep + extra_dll_dir

@matthew-brett
Contributor Author

I would like to return to the idea of shipping the C sources. Perhaps that would just be as simple as copying the generated .c and .h sources in the npymath directory into a directory, such as <np-include>/npymath.

I think that would be easy to maintain - in practice, surely easier than working out a general mechanism for Numpy to export a DLL directory? Or do we need that for some other reason?

@carlkl
Member
carlkl commented Apr 16, 2022

I would propose to copy npymath.dll from numpy into scipy's .libs folder during the scipy build process. And to import npymath.dll from there during scipy import.

@carlkl
Member
carlkl commented Apr 16, 2022

Another proposal:

As npymath.dll is quite small (I tried this out in January and got a 56kB npymath.dll), it could be a good idea to simply load this DLL during numpy import. Once the DLL is loaded into process space it will be available for all packages depending on it. Symbols of this DLL do not interfere with the statically linked symbols used by numpy, which means numpy will just ignore this DLL.

@rgommers
Member

I think that would be easy to maintain - in practice, surely easier than working out a general mechanism for Numpy to export a DLL directory? Or do we need that for some other reason?

Looking at the code to build npymath in https://github.com/numpy/numpy/blob/main/numpy/core/setup.py, it's not exactly trivial with compiler flags, depending on math libs, etc. (although consumers may already have similar code in their own build setup, not sure). Also, note that we have to give npyrandom the same treatment.

I think I like the DLL plan a little better.

As npymath.dll is quite small (I tried this out in January and got a 56kB npymath.dll) it could be a good idea to simply load this DLL during numpy import.

Of the three ways of using a DLL, this is probably the nicest one - assuming we won't run into trouble with changes in the library & versioning. libnpyrandom.a is ~25% larger than libnpymath.a, so that's still okay.

@matthew-brett
Contributor Author

I agree - that's an excellent idea - preloading the DLL. Would this be the right approach for npyrandom? Or is that more likely to be compiled into projects that don't import Numpy?

@rgommers
Member

I can't find many users, but SciPy uses both, and Erotemic/xdev@77a5712#diff-ffb8093dfe715993aed3918bfa9b1fb2e59c13d4a5699aab5088df611f960209R230 is an example of use in a random project I hadn't seen before (still uses numpy). So I'd say use cases are similar.

@mattip
Member
mattip commented Apr 20, 2022

The problem with injecting npymath.dll via import numpy is that, well, it requires import numpy. I hope all projects using npymath import numpy. Building as a shared library seems like a reasonable first step.

I do wonder how far we can go to make npymath more of a header-file-only library, along the lines of eigen. The code in npy_math_internal.h.src (https://github.com/numpy/numpy/tree/main/numpy/core/src/npymath) could be shipped as a header file if we always use the NPY_INLINE_MATH macro. Complex number support has improved in MSVC, so we may be able to alias native routines instead of the hand-written ones in npy_math_complex.c.src. And it seems like there is work that can be done to define more HAVE_ macros like HAVE_ASINH. This can be a follow-on project, after we change the build to create and ship the dll.

@carlkl
Member
carlkl commented Apr 20, 2022

Complex number support has improved in MSVC so we may be able to alias native routines instead of the hand written ones in npy_math_complex.c.src.

Most trigonometric symbols (also the complex ones) are backed by api-ms-win-crt-math-l1-1-0.dll. See the attached: npymath_api-ms-win-crt-math_imports.txt

@seberg
Member
seberg commented Jun 17, 2022

Also ping @serge-sans-paille since I am worried this affects gh-21487.

One thing I am unclear about is whether using a dynamically linked library here might be problematic for some users because it prevents inlining? A lot of the functions are headers though, so maybe Matti's thought about just making it header-only makes sense?

Since npymath is effectively statically linked (or header-only), I now wonder whether it might make sense to split it out and include it as a submodule in NumPy. That would also allow downstream to receive bug-fixes by compiling against an old version of NumPy but a new version of npymath.

@carlkl
Member
carlkl commented Jun 17, 2022

As described in #20880 (comment), supplying static as well as dynamic libraries for npymath and npyrandom could be a solution.

@h-vetinari
Contributor
h-vetinari commented Jul 27, 2022

To make the link bidirectional: conda-forge is currently discussing moving to vc142 on windows (leaning towards implementation soon), perhaps some people here are interested or have an opinion: conda-forge/conda-forge.github.io#1732

@carlkl
Member
carlkl commented Jul 28, 2022

As it is trivial to build DLLs from the npymath.lib and npyrandom.lib libraries, I propose using DLLs rather than the VC lib files for the scipy build process on windows with mingw-w64. It is even possible to build one single DLL for the sake of simplicity, which includes the object code from both libraries.

This procedure was necessary for the windows 32-bit scipy-meson test build anyway.

The advantage of this procedure: it works regardless of the VC version.
Disadvantage: these DLLs need to be integrated into the scipy wheel.

@rgommers
Member
rgommers commented Aug 5, 2022

I'm finding myself toying with the idea of just shipping the sources again. It's raining issues with libnpymath.a: with cross-compiling SciPy 1.9.0, and also with cibuildwheel builds for SciPy on Windows (the 3 releases we shipped with vc142).

Looking at the code to build npymath in https://github.com/numpy/numpy/blob/main/numpy/core/setup.py, it's not exactly trivial with compiler flags, depending on math libs, etc. (although consumers may already have similar code in their own build setup, not sure)

In the end it's not that bad; a lot of that is weird handling of the math library. For SciPy we can probably just ignore all that. It would also help with solving the problem now - we look for the installed source files, and if they're missing then we use a vendored copy. With a DLL, that doesn't work.

@carlkl
Member
carlkl commented Aug 5, 2022

Unfortunately one then also needs the numpy.random sources, as numpy.random also depends on npymath symbols.
All in all I like the DLL idea best, because it is the easiest way to escape the compiler mix hell.

Another idea would be to build numpy with mingw-w64 as well and use that to build scipy. A numpy build of this kind should follow the specifications of CPython (long double == double) and should use the windows umbrella libraries for the math symbols. Later on, a scipy wheel built this way should work together with a standard PyPI MSVC-built numpy without any problems IMHO. In the end it should be sufficient to build exclusively against the minimum numpy version for a given scipy version.

@lysnikolaou
Member

In the last few weeks I've been working on #24085 that removes the complex structs and uses native complex types instead. In terms of new APIs, this PR adds six new functions that can be used to set the real or imaginary part of a complex number (npy_csetreal/npy_csetimag for all three complex types) that aim to replace directly setting the struct members as in c.real = 0.0. Keep in mind that this is only a problem for setting the real/imag part. Getting it is okay, since npy_creal/npy_cimag were already there before and can work correctly with both sets of complex types.

I'm now done with all of the changes required on the numpy side and I'm trying to port SciPy to the new version. I'm mostly done with the port, which works okay with #24085, but does not work with older numpy versions and I have a couple of questions:

  1. Should we provide a mechanism, in order for downstream packages to be able to do this cross-version? Or should we let them handle that on their own with something like this somewhere in the SciPy codebase?
#ifdef NPY_VERSION < 0x02000000
#define SETRE(x, r) x->real = r 
#define SETIM(x, i) x->imag = i
#else
#define SETRE(x, r) npy_csetreal(x, r)
#define SETIM(x, i) npy_csetimag(x, i)
#endif
  2. If we decide we wanna provide a mechanism on the numpy side, how would we go about it?

@mattip
Member
mattip commented Jul 26, 2023

If the only thing needed is those 8 lines, I think we can copy that code into npy_math.h as public API (well, maybe NPY_CSETREAL and NPY_CSETIMAG or so would be alternative names, and probably the ifdef should be if). @seberg would it be better to use NPY_VERSION or NPY_FEATURE_VERSION? The difference is build-time vs. runtime.

@seberg
Member
seberg commented Jul 26, 2023

Should we provide a mechanism, in order for downstream packages to be able to do this cross-version?

We semi do that: if you don't still need to build with NumPy 1.x, you could in principle set numpy>=2 as your build requirement. But overall, you probably want to keep being able to build with 1.x (also for local no-build-isolation builds).

What we could do is put a numpy2_compat.h header somewhere (in NumPy or its own little project) that includes this and other #ifdefs - or maybe a #ifndef NPY_SETCIMAG - and that is designed to be vendored by downstream.
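A vendorable compatibility shim of the sort proposed here could be sketched like this (hedged: every name and the struct layout below are mocked for illustration; the mock struct stands in for the pre-2.0 complex layout, and NPY2_COMPAT_VERSION stands in for the version macro the real headers would provide):

```c
/* Sketch of a numpy2_compat.h-style shim, designed to be vendored
 * downstream. All names are illustrative mocks, not real NumPy API. */
typedef struct { double real, imag; } cdouble_mock; /* mock 1.x layout */

#define NPY2_COMPAT_VERSION 0x01000000  /* pretend we built against 1.x */

#if NPY2_COMPAT_VERSION < 0x02000000
/* old layout: set the struct fields directly */
#define compat_csetreal(z, r) ((z)->real = (r))
#define compat_csetimag(z, i) ((z)->imag = (i))
#else
/* 2.x would provide real setter functions instead */
#define compat_csetreal(z, r) npy_csetreal(z, r)
#define compat_csetimag(z, i) npy_csetimag(z, i)
#endif

/* downstream code can then use a single spelling everywhere */
static double compat_demo(void)
{
    cdouble_mock z;
    compat_csetreal(&z, 1.5);
    compat_csetimag(&z, -2.0);
    return z.real + z.imag;
}
```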

@WarrenWeckesser
Member
WarrenWeckesser commented Sep 11, 2023

Edited...

FYI, here's a data point from a downstream project. ufunclab was using five functions from npymath: npy_half_to_float (convert float16 to float32), npy_clear_floatstatus_barrier, and the complex absolute value functions npy_cabsf, npy_cabs and npy_cabsl (magnitude of npy_cfloat, npy_cdouble and npy_clongdouble, resp.).

I can't simply replace the complex npy_cabsx functions with the C++ abs function, because it is not guaranteed that the NumPy types npy_cfloat, npy_cdouble and npy_clongdouble are just different names for the C++ types complex<float>, complex<double> and complex<long double>. Even after the recent work on complex types in NumPy, it looks like there are still cases where the NumPy types are plain structs with fields real and imag. (Of course, I can jump through some coding hoops to make it work, and I might do that to work around some other build issues, but it would be nice if I didn't have to.)

Edit: I did some nasty casting and eliminated the use of npy_cabsf, npy_cabs and npy_cabsl in ufunclab. So now there are just two functions that I use, npy_half_to_float and npy_clear_floatstatus_barrier.

Edit 2: In numtypes, I use npy_half_isnan.

@ngoldbaum pointed out the issue of the half float type above.

@lysnikolaou
Member
lysnikolaou commented Nov 23, 2023

In the last few weeks I've put some time into moving npymath out of numpy and into its own stand-alone repo. I've gotten to a point where I have the following setup:

  1. One stand-alone repo with an independently buildable libnpymath and, hopefully soon, its own test suite.
  2. numpy adds libnpymath as a git submodule and builds against the static library built there.
  3. scipy does the same (for now).

This setup allows us to both work on libnpymath independently from numpy, which might or might not be useful, and, more importantly, have the capability to build and link against libnpymath depending on every project's own configuration. The libnpymath repo, after a suggestion from @rgommers, uses the Meson library() function, which allows different projects that use it to build a static or a shared library according to their own needs.

In my view, the plan ahead should be the following:

  1. Move all npymath tests to the new stand-alone repo and start maintaining that "independently".
  2. Figure out whether the halffloat API should stay in npymath or go to some other part of numpy.
  3. Figure out whether it makes sense to have C++ there. There are two places where C++ is used: halffloat and ieee754. ieee754.cpp is not used at all (and as far as I can tell never was), only the C-templated version is, so this can go. The discussion about halffloat will probably resolve the other part of the question, if we decide to move it outside of npymath and inside of numpy. If not, we'll need to figure this out.
  4. Add libnpymath to numpy and scipy as git submodules and link against the library built from there.

Do people think this all makes sense? Anything else you'd like to see as part of this effort?

@rgommers
Member

Thanks for the update @lysnikolaou. We briefly touched on this in the community meeting too, and folks seemed to think that indeed there is no issue in making npymath C-only again. If the C++ wrapper is needed, it can live outside of npymath.

A few other thoughts:

  • We should keep tests in numpy, since they are pretty fast and numpy CI is certainly going to be more comprehensive than that of a standalone repo.
  • In the standalone repo, I think we should just have basic Windows/Linux/macOS coverage with a reasonable set of compilers.
  • For the adoption/rollout plan, I suggest targeting scipy first. Reason: that will surface any symbol naming conflicts, when a new SciPy with standalone libnpymath is built and then tested against an old numpy which contains and loads a different libnpymath. I had symbol conflicts before, and my conclusion at that time was that the standalone library would likely have to rename its symbols.

Figure out whether the halffloat API should stay in npymath or go to some other part of numpy.

I don't know about this, but if it has to be moved, 2.0 would be a good time to do it.

@ngoldbaum
Member

Does anyone know why halffloat was put in npymath in the first place? Just that it's some new possibly generally useful C functions that aren't standardized?

Is there a reason why it can't live in the NumPy C API? I don't see a problem with it, and to me it makes sense that all the C API needed to work with numpy dtypes is in the standard library or provided by numpy. We also recently exposed some functions for working with datetimes, for what that's worth as an argument here.

@ev-br
Contributor
ev-br commented Dec 7, 2023

Would be great to update the docs on using/extending np.random from compiled code, which requires linking to npyrandom and npymath:

https://numpy.org/doc/stable/reference/random/extending.html

@rgommers
Member
rgommers commented Dec 8, 2023

Would be great to update the docs on using/extending np.random from compiled code, which requires linking to npyrandom and npymath:

The design of exposing the random number distribution functions only via a static library is a bit unfortunate. In scipy/scipy#19638 (comment) I posted a summary of what SciPy needs from libnpyrandom, which is next to nothing, and essentially all pure C, nothing numpy-specific.

What's exposed in libnpyrandom is much more than what SciPy uses, of course. There are some other things used, like NPY_NAN, npy_bool, and lots of npy_intp (now == Py_ssize_t), but nothing that actually must be numpy-specific, I think (to check, search random/src/distributions/; the only file using npy_ and NPY_ symbols is distributions.c).

The Numba example already relies on using the actual C sources rather than the static library we ship: https://numpy.org/doc/stable/reference/random/examples/numba_cffi.html.

I'd suggest that we change distributions.c to be pure C99 without a dependency on NumPy. And then we simply have 5 C source files and one header (distributions.h) that anyone is free to reuse in their own project if they want to use them (and that then also removes the need for libnpymath). The pattern to use a BitGenerator doesn't change (first example in https://numpy.org/doc/stable/reference/random/extending.html#cython, since that doesn't require the static library).
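The "pure C99, no NumPy dependency" shape suggested here might look roughly like this (hedged sketch: the struct mirrors the spirit of numpy's bitgen_t but is a simplified mock, and the splitmix64 generator is only there to drive the example):

```c
#include <stdint.h>

/* Simplified mock of a bitgen-style interface: a distribution function
 * needs nothing from NumPy, only a state pointer and a next_uint64 hook. */
typedef struct {
    void *state;
    uint64_t (*next_uint64)(void *state);
} bitgen_mock_t;

/* Pure C99 distribution: uniform double in [0, 1) from 53 random bits. */
static double standard_uniform(bitgen_mock_t *bg)
{
    return (bg->next_uint64(bg->state) >> 11) * (1.0 / 9007199254740992.0);
}

/* Toy splitmix64 generator used to drive the example. */
static uint64_t splitmix64_next(void *state)
{
    uint64_t *s = state;
    uint64_t z = (*s += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

/* smoke check: all draws land in [0, 1) */
static int draws_in_range(int n)
{
    uint64_t s = 12345;
    bitgen_mock_t bg = { &s, splitmix64_next };
    for (int i = 0; i < n; i++) {
        double u = standard_uniform(&bg);
        if (!(u >= 0.0 && u < 1.0)) {
            return 0;
        }
    }
    return 1;
}
```

The point of the shape is that any BitGenerator can be plugged in through the function pointer, while the distribution code itself stays plain C99.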

If we do that we don't have to stop shipping the static library and distributions.h in 2.0, but we can at least recommend a better approach. So everyone who runs into a hairy issue the next time we use a different compiler version for wheel building can then deal with it by switching to using the sources instead.

Cc @rkern @bashtage for thoughts

@rkern
Member
rkern commented Dec 8, 2023

That all sounds reasonable, but I never really had a firm first-hand grasp on the problems these static libraries were meant to solve.

@rgommers
Member

xref gh-25390, which attempts to perform the libnpyrandom surgery.

@lysnikolaou
Member
lysnikolaou commented Dec 19, 2023

So a status update here:

I've opened two PRs that try to do a quite big overhaul in the hopes of improving the situation:

I created the libnpymath repo under my own username for the time being and it probably needs to be reviewed, since I changed quite a few things (including a rewrite of the C++ parts to C 🎉 )

So the current state is this:

  • npyrandom: Maintain inside numpy, but without any dependencies on numpy or npymath
  • npymath: Meson subproject / git submodule inside numpy, stand-alone repo available that can be built independently, pure C
  • scipy: No dependencies on the static libraries shipped by numpy, vendor all of npymath as a meson subproject / git submodule, vendor a very minimal version of npyrandom directly

That means that we can effectively stop shipping both of the static libraries and SciPy will work okay. Should we?

Also, if people think all this looks okay, should we move the libnpymath repo to the numpy org after someone reviews it?

@rgommers
Member

Thanks for the update @lysnikolaou, this is starting to shape up nicely.

I created the libnpymath repo under my own username for the time being and it probably needs to be reviewed, since I changed quite a few things (including a rewrite of the C++ parts to C 🎉 )

Would it make sense to separate out that C++ to C rewrite and get that merged as a separate PR to the main numpy repo first? That would make review easier, because it's then split into smaller chunks, and also because the primary reviewers for C++/C and build & packaging topics tend to not be the same.

That means that we can effectively stop shipping both of the static libraries and SciPy will work okay. Should we?

We should finish the SciPy changes first, and get those in before branching 1.13.x in 1-2 months. And then announce the plan to stop shipping static libraries - which may be on the late side for 2.0, not sure about that one yet. If it isn't feasible for 2.0, then it'll be by 2.2.0 probably, if there aren't too many objections.

Also, if people think all this looks okay, should we move the libnpymath repo to the numpy org after someone reviews it?

+1 to that. It also seems like a reasonable name to continue using.

Did the symbol naming issue to avoid npy_* name clashes get resolved yet, and if so, how?

@lysnikolaou
Member

Would it make sense to separate out that C++ to C rewrite and get that merged as a separate PR to the main numpy repo first?

Sure! Actually, I opened a PR for that on my fork as well, because I figured it might be needed, so that can be reverted, reviewed and reapplied as well, if adding it on numpy first seems like more effort than needed.

We should finish the SciPy changes first, and get those in before branching 1.13.x in 1-2 months.

The SciPy PR could be merged as-is as far as I'm concerned (after the move of the libnpymath repo), but people need to review it and see whether there's anything wrong with it, since I might be missing something.

Did the symbol naming issue to avoid npy_* name clashes get resolved yet, and if so, how?

Yes, all the internal symbols were renamed to npymath_* and all the external symbols that are exported on numpy are still npy_*, but that's no problem since those do not get defined on the numpy side anymore. The only real issue was getting calls to npymath-exported functions to work, since they're defined as accepting npymath_* types but are passed npy_* types. Because of the nature of the typedefs, most of this worked out of the box, with the exception of complex types under C++, which are defined as opaque types in npymath when it detects that it is built for use inside numpy.

@rgommers
Member

Yes, all the internal symbols were renamed to npymath_* and all the external symbols that are exported on numpy are still npy_*, but that's no problem since those do not get defined on the numpy side anymore.

I checked the symbol names in the subproject on your SciPy PR with nm build/subprojects/libnpymath/src/libnpymath.a and they're still npy_* indeed. I'm not sure I understand the end of your sentence here - this should still be problematic. If we build a SciPy wheel against NumPy 2.0 and then at runtime use NumPy 1.25, we may get conflicts, right?

@carlkl
Member
carlkl commented Jan 13, 2024

the npymath objects are linked statically, so I can't see a problem here. Any problem with symbol names should show up during the linking step.

@lysnikolaou
Member
lysnikolaou commented Jan 16, 2024

Indeed, I have tested this and it all plays well. I built a wheel with NumPy 2.0 and then ran the whole suite with the wheel alongside both NumPy 2.0 and 1.26. All seems to be fine.

My concern is more about the readability of the whole thing. libnpymath basically has all npymath_* symbols, and then all of a sudden some symbols are named npy_*. I'm wondering whether there's a better solution: maybe rename all symbols to npymath_* and, instead of installing libnpymath's version of npy_math.h (which is what the current solution does), #include it in a different header that's part of the NumPy source and which provides the compatibility layer with things like:

#include <npymath/npymath.h>

#define npy_log2 npymath_log2

And then use the npymath_* symbols directly in SciPy.

Obviously, these can also be post-release fixes/improvements if we're in a hurry to make it before NumPy 2.

@mattip
Member
mattip commented Sep 14, 2024

@lysnikolaou where does this work stand? The dust has settled around NumPy 2.0; can we move forward with the separate repo for npymath?

@lysnikolaou
Member

Thanks for the ping, @mattip!

Unfortunately, I haven't had time to work on this for a while. Last I did, there were some build complications around how complex numbers interact with each other. I'd documented that here.

I'll try to find some time to get back to this!
