Update llama-bench README · ggml-org/llama.cpp@083f56b · GitHub
Update llama-bench README
The SYCL backend introduced a workaround that allows llama-bench to run without specifying the `--mmp 0` flag.
1 parent 384dcb0 commit 083f56b
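For context, a hedged sketch of the invocations affected by this change (the model path is a placeholder; `--mmp` is the mmap toggle named in the removed README note):

```shell
# Previously, the llama-bench README advised SYCL users to disable
# memory mapping to avoid hangs (model path is a placeholder):
./llama-bench -m models/7B/model-q4_0.gguf --mmp 0

# With the workaround in the SYCL backend, the flag is no longer required:
./llama-bench -m models/7B/model-q4_0.gguf
```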

File tree

1 file changed: 0 additions, 4 deletions

tools/llama-bench/README.md

```diff
@@ -80,10 +80,6 @@ Using the `-d <n>` option, each test can be run at a specified context depth, pr
 
 For a description of the other options, see the [main example](../main/README.md).
 
-Note:
-
-- When using SYCL backend, there would be hang issue in some cases. Please set `--mmp 0`.
-
 ## Examples
 
 ### Text generation with different models
```

0 commit comments