BUG: np.loadtxt returns F_CONTIGUOUS ndarray if row size is too big #26900
Describe the issue:
If the row size in a text file is too big, np.loadtxt will return a np.ndarray with F_CONTIGUOUS set.
Reproduce the code example:
Error message:
Python and NumPy Versions:
NumPy: 2.1.0.dev0+git20240708.735a477
Python: 3.12.4 (main, Jul 9 2024, 15:36:49) [GCC 11.4.0]
Runtime Environment:
[{'numpy_version': '2.1.0.dev0+git20240708.735a477',
'python': '3.12.4 (main, Jul 9 2024, 15:36:49) [GCC 11.4.0]',
'uname': uname_result(system='Linux', node='hn00', release='5.15.0-105-generic', version='#115-Ubuntu SMP Mon Apr 15 09:52:04 UTC 2024', machine='x86_64')},
{'simd_extensions': {'baseline': [], 'found': [], 'not_found': []}},
{'filepath': '/opt/intel/oneapi/mkl/2022.1.0/lib/intel64/libmkl_rt.so.2',
'internal_api': 'mkl',
'num_threads': 48,
'prefix': 'libmkl_rt',
'threading_layer': 'intel',
'user_api': 'blas',
'version': '2022.1-Product'},
{'filepath': '/opt/intel/oneapi/compiler/2022.1.0/linux/compiler/lib/intel64_lin/libiomp5.so',
'internal_api': 'openmp',
'num_threads': 96,
'prefix': 'libiomp',
'user_api': 'openmp',
'version': None}]
Context for the issue:
np.loadtxt creates the PyArrayObject in read_rows, which reads a specific number of rows of the text file based on the allocated buffer size. If the row size is too big, read_rows might decide to read only one line (var min_rows below), and then PyArray_SimpleNewFromDescr would set both F_CONTIGUOUS and C_CONTIGUOUS. This results in behavior that is inconsistent with what loadtxt does on small files.
Code for context:
numpy/numpy/_core/src/multiarray/textreading/rows.c, lines 267 to 300 at 6b2a3e0
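As a minimal illustration (in Python rather than the C internals) of why a one-row allocation picks up both flags: NumPy marks any array with at most one row or one column as both C- and F-contiguous, so an array created for min_rows == 1 starts out F_CONTIGUOUS, while a larger first allocation does not. The shapes below are only stand-ins for the allocations described above.

```python
import numpy as np

# Analogue of the min_rows == 1 allocation: a single wide row is flagged
# as both C- and F-contiguous.
one_row = np.empty((1, 100_000))
print(one_row.flags["C_CONTIGUOUS"], one_row.flags["F_CONTIGUOUS"])      # True True

# Analogue of a normal first chunk covering many rows: only C-contiguous.
many_rows = np.empty((128, 100_000))
print(many_rows.flags["C_CONTIGUOUS"], many_rows.flags["F_CONTIGUOUS"])  # True False
```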