Description
I encountered these issues while expanding documentation for one of my PRs, and I confirmed them on the most recent version of master at the time of writing, for Python 2.7.11 on both 32-bit Windows 7 and 32-bit Ubuntu 14.04.
```python
>>> import numpy as np
>>> np.__version__
'1.12.0.dev0+a1ab9cf'
>>> for dt in (np.bool_, np.int8, np.int16, np.int32, np.int64,
...            np.uint8, np.uint16, np.uint32, np.uint64):
...     dtype = np.dtype(dt).type
...     num = np.random.randint(1, dtype=dt)
...     print num, type(num), dtype
...
0 <type 'int'> <type 'numpy.bool_'>
0 <type 'int'> <type 'numpy.int8'>
0 <type 'int'> <type 'numpy.int16'>
0 <type 'int'> <type 'numpy.int32'>
0 <type 'long'> <type 'numpy.int64'>
0 <type 'int'> <type 'numpy.uint8'>
0 <type 'int'> <type 'numpy.uint16'>
0 <type 'long'> <type 'numpy.uint32'>
0 <type 'long'> <type 'numpy.uint64'>
```
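For contrast, here is a small sketch (Python 3 syntax) of the asymmetry described above: when `size` is given, the returned array honors the requested `dtype` and its elements are `numpy` scalars, while the scalar path ignores `dtype` for the returned type. The printed types for the scalar case depend on the NumPy version, so they are deliberately not shown.

```python
import numpy as np

# Scalar path: size=None. On the affected versions, a native Python
# int is returned regardless of the dtype argument.
scalar = np.random.randint(1, dtype=np.uint8)
print(type(scalar))  # version-dependent

# Array path: size given. The array has the requested dtype, and
# indexing it yields numpy scalars of that dtype.
arr = np.random.randint(1, size=3, dtype=np.uint8)
print(arr.dtype)     # uint8
print(type(arr[0]))  # <class 'numpy.uint8'>
```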
"Enforcing" the correct `dtype` is relatively straightforward. However, there are backwards-compatibility concerns, as some tests (e.g. `test_diophantine_fuzz` in `test_mem_overlap.py`) appear to expect Python integers to be returned. Enforcing `dtype` causes `numpy` integers to be returned instead, leading to scalar-overflow warnings and therefore test failures.
My question is then: in the future, should `randint` return solely `numpy` integers, as we currently do when `size` is not `None`? Or should `randint` compromise and return Python integers when native Python types are specified, and `numpy` integers otherwise? And what about platform-dependent `numpy` integer types like `np.intp`, `np.intc`, or `np.int` (an alias of `int` though, right?)?