BUG: np.random.multinomial(<float>, ...) raises a TypeError in numpy 1.26 #25061
Comments
Weird, on main I added an …. I would suspect it to be a Cython 3 change in how it converts to integer types which are not `int`. The test case:
```cython
# cython: language_level=3str
# distutils: define_macros=NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION
"""
Teasing out factors for numpy #25061, where passing a float to Cython code
[like np.random.RandomState.multinomial()] might raise
    TypeError: 'float' object cannot be interpreted as an integer
depending on whether this code is compiled with Cython==0.29 + numpy==1.25.2
vs. Cython==3.0 + numpy==1.26.1, which declare npy_intp differently.
"""
import numpy as np
cimport numpy as np


def func_int(int length):
    """Accepts an int or a float."""
    return length


def func_Py_ssize_t(Py_ssize_t length):
    """Accepts an int; never a float."""
    return length


def func_npy_intp(np.npy_intp length):
    """Accepts an int; float acceptance depends on the Cython and numpy
    versions since they define npy_intp differently."""
    return length


# Test these in Python:
"""
import testcase as tc
tc.func_int(1.0)         # should be OK
tc.func_Py_ssize_t(1.0)  # should raise TypeError
tc.func_npy_intp(1.0)    # depends on Cython 0.29 vs. 3.0
"""
```
Thanks, so only the `npy_intp` case behaves differently. Admittedly the definitions for `intp` seem outright wrong, not that it matters on most platforms... Unless somehow …
I think the story is:
==> So if `__init__.cython-30.pxd` reverted the definition of `npy_intp`, the float argument would be accepted again. Whether that causes other problems, I can't say. A narrower fix would change `multinomial` to declare `n` as an `int`.
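For concreteness, a hypothetical sketch of that narrower fix (not the actual numpy source; it relies on the observation from the test case above that a plain C `int` parameter accepts a float under both tested Cython versions):

```cython
# Hypothetical signature change (presumably in mtrand.pyx / _generator.pyx):
# take n as a C int instead of np.npy_intp, so a Python float
# coerces the way func_int() above demonstrates.
def multinomial(self, int n, object pvals, size=None):
    ...
```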
That doesn't add up. NumPy has been the primary source for the definitions for a long time, unless …
Do ask the Cython folks for expertise on this and let me know where I was wrong. Here's a bit of info from a cython-users thread:
> I'm unsure about when …
Maybe the difference is the language level default?
I like a testable hypothesis! I wrote a unit test to probe the limits. Results: …
Another hypothesis test: @thalassemia verified that editing the `npy_intp` definition in `__init__.cython-30.pxd` changes the behavior accordingly.
The more thorough fix for this is gh-25094, which fixes the underlying reason why the behavior changed, rather than a targeted fix for `multinomial` alone.
Describe the issue:
Up through numpy 1.25.2, `random.multinomial()` accepted a float as the first argument `n`, whereas numpy 1.26 does not. (In our code, that value comes from a computation involving `scipy.constants.Avogadro`.)

In numpy 1.25.2 (tweaking an example from the docs):
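Roughly like this, adapting the example from the docs (the exact numbers are placeholders, not the original report's values):

```python
>>> import numpy as np
>>> np.random.multinomial(20.0, [1/6.] * 6)  # n is a float
array([4, 2, 3, 4, 3, 4])                    # random draw; the float is silently accepted
```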
In numpy 1.26.0 and 1.26.1:
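The same call now fails with the TypeError quoted in the title:

```python
>>> import numpy as np
>>> np.random.multinomial(20.0, [1/6.] * 6)
Traceback (most recent call last):
  ...
TypeError: 'float' object cannot be interpreted as an integer
```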
Q. Is this an intentional API change?
I don't see it in the release notes.
Tracing where the type of `n` is declared:

- The docs declare `n: int`.
- The Cython source declares it as `np.npy_intp`.
- The type stubs declare it as `_ArrayLikeInt_co`.
- NumPy's `__init__.cython-30.pxd` defines `npy_intp` as `Py_ssize_t`.
- Cython 0.29's bundled declarations apparently define `npy_intp` as `int`.

So I guess what changed was shifting API definitions from Cython's UFuncs.pyx to NumPy's `__init__.cython-30.pxd`.
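That matches plain-Python behavior: conversion to `Py_ssize_t` goes through `__index__` semantics, which floats don't implement, while (per the test case above) conversion to a C `int` tolerates anything `int()` accepts. A quick illustration:

```python
import operator

assert int(1.0) == 1     # int() happily truncates a float

try:
    operator.index(1.0)  # __index__-based conversion, as used for Py_ssize_t
except TypeError as err:
    print(err)           # 'float' object cannot be interpreted as an integer
```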
Environment: macOS 13.6, Intel CPU, Python 3.11.6.
Also: Ubuntu Linux, Intel CPU, Python 3.11.6.
Reproduce the code example:
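A minimal reproducer consistent with the title (the specific numbers are assumed):

```python
import numpy as np

# Returns draw counts on numpy <= 1.25.2; raises TypeError on numpy 1.26.x.
print(np.random.multinomial(20.0, [1/6.] * 6))
```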
Error message: `TypeError: 'float' object cannot be interpreted as an integer`
Runtime information:
```python
import sys, numpy; print(numpy.__version__); print(sys.version)
print(numpy.show_runtime())
```
Context for the issue:
I'll adjust our calling code to convert these values to `int`, so fixing this is not a priority. I'd like to: …
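For reference, a sketch of that calling-code adjustment; the variable name and the `1e-20` scale factor are made up, only the `int` conversion is the point:

```python
import numpy as np
import scipy.constants

# Hypothetical upstream computation that yields a float count:
n_molecules = 1e-20 * scipy.constants.Avogadro  # ~6022.14, a float

# Workaround: convert to a Python int before calling multinomial().
counts = np.random.multinomial(round(n_molecules), [1/6.] * 6)
print(counts)
```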