Improve zarr chunks docs by max-sixty · Pull Request #9140 · pydata/xarray · GitHub

Improve zarr chunks docs #9140


Merged · 4 commits merged on Jun 22, 2024
Changes from 1 commit
No commit message
max-sixty committed Jun 22, 2024
commit 7cdecffa202b3726f7f507453f82525f87c8e184
7 changes: 4 additions & 3 deletions xarray/backends/api.py
@@ -428,13 +428,14 @@ def open_dataset(
     chunks : int, dict, 'auto' or None, default: None
         If provided, used to load the data into dask arrays.
 
-        - ``chunks='auto'`` will use dask ``auto`` chunking taking into account the
+        - ``chunks="auto"`` will use dask ``auto`` chunking taking into account the
           engine preferred chunks.
         - ``chunks=None`` skips using dask, which is generally faster for
           small arrays.
         - ``chunks=-1`` loads the data with dask using a single chunk for all arrays.
-        - ``chunks={}`` loads the data with dask using engine preferred chunks if
-          exposed by the backend, otherwise with a single chunk for all arrays.
+        - ``chunks={}`` loads the data with dask using the engine's preferred chunk
+          size, generally identical to the format's chunk size. If not available, a
+          single chunk for all arrays.
 
         See dask chunking for more details.
     cache : bool, optional
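For readers skimming the diff, here is a minimal sketch of how the ``chunks`` values documented above are passed to ``open_dataset``. The store path and the ``engine="zarr"`` argument are illustrative assumptions, not part of this PR.

import xarray as xr

# Hypothetical zarr store path, used only for illustration.
store = "example_store.zarr"

# chunks=None (the default): skip dask; variables load as in-memory numpy arrays.
ds_eager = xr.open_dataset(store, engine="zarr", chunks=None)

# chunks={}: dask arrays using the engine's preferred chunks, which for zarr
# generally match the on-disk chunk sizes.
ds_native = xr.open_dataset(store, engine="zarr", chunks={})

# chunks="auto": dask "auto" chunking, taking the engine's preferred chunks
# into account.
ds_auto = xr.open_dataset(store, engine="zarr", chunks="auto")

# chunks=-1: a single dask chunk per array.
ds_single = xr.open_dataset(store, engine="zarr", chunks=-1)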