Misc. bug: The following tests FAILED: 23 - test-arg-parser (Subprocess aborted) main #13371

Description

@vt-alt

Name and Version

  • llama-cli --version
    ggml_cuda_init: failed to initialize CUDA: OS call failed or operation not supported on this OS
    load_backend: loaded CUDA backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cuda.so
    load_backend: loaded CPU backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cpu-haswell.so
    version: 5306 (alt1)
    built with x86_64-alt-linux-gcc (GCC) 14.2.1 20241028 (ALT Sisyphus 14.2.1-alt1) for x86_64-alt-linux

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

No response

Command line

Some tests fail:

+ ctest --test-dir x86_64-alt-linux --output-on-failure --force-new-ctest-process --parallel 20 -j1 -E test-eval-callback
Test project /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux
...
04:24:54       Start 23: test-arg-parser
04:24:54 23/28 Test #23: test-arg-parser ...................Subprocess aborted***Exception:   0.13 sec
04:24:54 ggml_cuda_init: failed to initialize CUDA: OS call failed or operation not supported on this OS
04:24:54 load_backend: loaded CUDA backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cuda.so
04:24:54 load_backend: loaded CPU backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cpu-haswell.so
04:24:54 [the two load_backend lines above repeat 16 more times]
04:24:54 error while handling argument "-m": expected value for argument
04:24:54
04:24:54 usage:
04:24:54 -m,    --model FNAME                    model path (default: `models/$filename` with filename from `--hf-file`
04:24:54                                         or `--model-url` if set, otherwise models/7B/ggml-model-f16.gguf)
04:24:54                                         (env: LLAMA_ARG_MODEL)
04:24:54
04:24:54
04:24:54 to show complete usage, run with -h
04:24:54 load_backend: loaded CUDA backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cuda.so
04:24:54 load_backend: loaded CPU backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cpu-haswell.so
04:24:54 error while handling argument "-ngl": stoi
04:24:54
04:24:54 usage:
04:24:54 -ngl,  --gpu-layers, --n-gpu-layers N   number of layers to store in VRAM
04:24:54                                         (env: LLAMA_ARG_N_GPU_LAYERS)
04:24:54
04:24:54
04:24:54 to show complete usage, run with -h
04:24:54 load_backend: loaded CUDA backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cuda.so
04:24:54 load_backend: loaded CPU backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cpu-haswell.so
04:24:54 error while handling argument "-sm": invalid value
04:24:54
04:24:54 usage:
04:24:54 -sm,   --split-mode {none,layer,row}    how to split the model across multiple GPUs, one of:
04:24:54                                         - none: use one GPU only
04:24:54                                         - layer (default): split layers and KV across GPUs
04:24:54                                         - row: split rows across GPUs
04:24:54                                         (env: LLAMA_ARG_SPLIT_MODE)
04:24:54
04:24:54
04:24:54 to show complete usage, run with -h
04:24:54 load_backend: loaded CUDA backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cuda.so
04:24:54 load_backend: loaded CPU backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cpu-haswell.so
04:24:54 error: invalid argument: --draft
04:24:54 [the two load_backend lines repeat 6 more times]
04:24:54 error while handling environment variable "LLAMA_ARG_THREADS": stoi
04:24:54
04:24:54
04:24:54 load_backend: loaded CUDA backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cuda.so
04:24:54 load_backend: loaded CPU backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cpu-haswell.so
04:24:54 load_backend: loaded CUDA backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cuda.so
04:24:54 load_backend: loaded CPU backend from /usr/src/RPM/BUILD/llama.cpp-5306/x86_64-alt-linux/bin/libggml-cpu-haswell.so
04:24:54 warn: LLAMA_ARG_MODEL environment variable is set, but will be overwritten by command line argument -m
04:24:54 terminate called after throwing an instance of 'std::runtime_error'
04:24:54   what():  error: cannot make GET request: Could not resolve hostname
04:24:54
...
04:24:54 96% tests passed, 1 tests failed out of 28
04:24:54 The following tests FAILED:
04:24:54         23 - test-arg-parser (Subprocess aborted)              main
04:24:54 Errors while running CTest
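
For context on the "Subprocess aborted" status: the tail of the log shows a std::runtime_error escaping the test binary, and in C++ an exception that propagates out of main() calls std::terminate(), which raises SIGABRT. A minimal sketch of that mechanism (illustrative only, not llama.cpp code):

    // illustrative only, not llama.cpp code: an exception that escapes main()
    // reaches std::terminate(), and the process dies with SIGABRT, which
    // ctest reports as "Subprocess aborted"
    #include <stdexcept>

    int main() {
        throw std::runtime_error("cannot make GET request: Could not resolve hostname");
    }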

Problem description & steps to reproduce

I just built b5306 and ran ctest; the same workflow worked without this error on b4855.
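
Judging from the log tail, the aborting case appears to set LLAMA_ARG_MODEL and then trigger a model download, which issues a GET request; in a network-isolated build chroot the hostname cannot resolve, the resulting std::runtime_error is never caught, and the binary aborts. Until that is addressed upstream, excluding the test the same way test-eval-callback is already excluded (e.g. -E 'test-eval-callback|test-arg-parser') should work around it. Alternatively, a hypothetical guard inside the test (sketch only; run_download_case is a stand-in name, not the real test's API) could downgrade the network failure to a skipped case:

    #include <cstdio>
    #include <stdexcept>

    // stand-in for the test case that performs the GET request and throws
    // when the hostname cannot be resolved
    static void run_download_case() {
        throw std::runtime_error("cannot make GET request: Could not resolve hostname");
    }

    int main() {
        try {
            run_download_case();
        } catch (const std::runtime_error & e) {
            // offline build environment: report and skip instead of aborting
            fprintf(stderr, "skipping network-dependent case: %s\n", e.what());
        }
        return 0;
    }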

First Bad Commit

No response

Relevant log output

No response
