Replies: 1 comment
Many of these are limited by their data types, so in practice it is more accurate to say they are limited by available hardware resources, mostly memory and durable storage. Some noteworthy limits are:
Note that in our managed cloud we have lower limits configured to prevent bad practices:
Note that we strongly recommend against using many collections; in fact, in almost all cases we recommend using exactly one collection. Please carefully read through our multi-tenancy documentation here: https://qdrant.tech/documentation/guides/multiple-partitions/ So yes, you should definitely be able to have more than 100 billion points in a collection. Could you clarify your desired use case a bit more? Then we may be able to give a more targeted recommendation.
Hi Qdrant team,
I’m looking for clarification on general storage limits in Qdrant.
The documentation provides helpful guidance on RAM/disk usage and capacity planning, but I haven’t been able to find a definitive statement on whether Qdrant enforces any hard limits on:
Maximum number of points in a collection
Maximum size of a collection (total bytes on disk)
Maximum payload (metadata) size per point
Maximum vector size/dimension (for dense/sparse vectors)
Any other internal caps that would affect very large-scale deployments (hundreds of millions to billions of points)
From my understanding, these may only be constrained by available hardware, but I’d like to confirm whether there are any baked-in limits (e.g., max u32/u64 ranges, segment limits, file size caps, mmap limits, etc.) that users should be aware of when planning large deployments.
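As context for the "constrained by available hardware" assumption: Qdrant's capacity-planning docs give a rule of thumb of roughly `num_vectors * dim * 4 bytes * 1.5` for keeping dense float32 vectors in RAM. A quick back-of-envelope sketch (the function name and the 1.5 overhead factor are taken from that rule of thumb, not from any hard limit):

```python
# Back-of-envelope RAM estimate for dense float32 vectors,
# using the rough rule: memory ~= num_vectors * dim * 4 bytes * 1.5.
def estimate_ram_gib(num_vectors: int, dim: int, overhead: float = 1.5) -> float:
    """Approximate GiB of RAM to hold the vectors in memory."""
    return num_vectors * dim * 4 * overhead / 1024**3

# e.g. one billion 768-dimensional vectors -> roughly 4.3 TiB of RAM,
# which is why billion-scale deployments lean on disk/mmap storage.
print(f"{estimate_ram_gib(1_000_000_000, 768):.0f} GiB")
```

This suggests that for billion-point collections the practical ceiling is indeed memory and storage, well before any u32/u64 range could matter.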
Could you clarify what the true upper bounds are, if any?
Thanks!