[3.2] [RocksDB] Slow response when using LIMIT #2948
Comments
I honestly do not have any idea how to fix this. I am not saying that the skipping code cannot be optimized, I am just unsure how to solve the problem in general given the algorithmic complexity of the operation.
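To illustrate the complexity concern: without an index position to seek to, an offset can only be honoured by stepping over every skipped entry one by one, so the cost of the skip grows linearly with the offset. A toy JavaScript model of that pattern, purely as an illustration and not ArangoDB's actual skipping code:

// Toy iterator over an array, standing in for a storage-engine cursor.
function makeIterator(values) {
  var i = 0;
  return {
    hasNext: function () { return i < values.length; },
    next: function () { return values[i++]; }
  };
}

function skipAndTake(iterator, offset, limit) {
  // The offset can only be honoured by stepping over entries one at a time,
  // so the skipping work is proportional to the offset itself.
  for (var s = 0; s < offset && iterator.hasNext(); s++) {
    iterator.next(); // discarded work
  }
  var results = [];
  for (var t = 0; t < limit && iterator.hasNext(); t++) {
    results.push(iterator.next());
  }
  return results;
}

// skipAndTake(makeIterator(docs), 999970, 10) touches 999,980 entries to return 10.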
Any progress on this issue? "LIMIT 5000" is slow enough...
@pocketwalker: can you elaborate on "LIMIT 5000 is slow enough"?
@jsteemann Thanks. In ArangoDB, I created a big graph with collections (nodes & edges) and I run my queries on it. I also created primary and hash indexes on all collections.
@pocketwalker: the key thing in a query that uses …
@jsteemann could you take a look at this issue? I put the query and also the explain() output into it.
This issue should be closed; @jsteemann explained it perfectly. @pocketwalker: …
Since the referenced issue has been marked as solved, closing this as solved too.
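For anyone hitting the same pattern: a common workaround (a sketch of the usual keyset-pagination advice, not necessarily the exact recommendation from the referenced issue) is to paginate on an indexed, sortable attribute instead of using a large offset, so the engine can seek directly to the boundary value. Assuming a skiplist or persistent index on uid exists (the reproduction dataset below only defines the attribute; the index itself is an assumption):

FOR d IN onem
  FILTER d.uid > 999970
  SORT d.uid
  LIMIT 10
  RETURN d

The offset variant has to step over the first 999,970 documents on every request, while the indexed range filter only performs the boundary lookup plus reads the 10 documents that are actually returned.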
My environment running ArangoDB
I'm using the latest ArangoDB of the respective release series:
Mode:
Storage-Engine:
On this operating system:
I'm issuing AQL via:
I've run db._explain("<my aql query>") and it didn't shed more light on this.
The AQL query in question is:
FOR d IN onem LIMIT 999970, 10 RETURN d
The issue can be reproduced using this dataset:
FOR uid IN 1..1000000 INSERT {uid} IN onem
These are the steps to reproduce:
What's wrong: The larger the offset in LIMIT, the slower the response.
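A minimal arangosh sketch to reproduce and time the behaviour described above (collection name and data taken from the reproduction queries; the timing is plain JavaScript wall-clock measurement, not an ArangoDB-specific profiler):

db._create("onem");
db._query("FOR uid IN 1..1000000 INSERT {uid} IN onem");

// Run the same query with a small and a large offset and compare elapsed time.
["LIMIT 0, 10", "LIMIT 999970, 10"].forEach(function (limitClause) {
  var start = Date.now();
  db._query("FOR d IN onem " + limitClause + " RETURN d").toArray();
  print(limitClause + ": " + (Date.now() - start) + " ms");
});

Per the report, the second run is expected to take noticeably longer even though both queries return only 10 documents.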