Name and Version
$ llama-cli --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 ROCm devices:
Device 0: AMD Radeon RX 7900 XT, gfx1100 (0x1100), VMM: no, Wave Size: 32
Device 1: AMD Radeon Graphics, gfx1100 (0x1100), VMM: no, Wave Size: 32
version: 5191 (295354ea)
built with cc (GCC) 14.2.1 20250207 for x86_64-pc-linux-gnu
Operating systems
Linux
GGML backends
CPU, HIP
Hardware
- AMD Ryzen 9800X3D
- AMD Radeon 7900XT
Models
$ b3sum exaone3.5\:2.4b-instruct-iq4_XS.gguf
10d5355ddf36435abca2b9c74428130a90b23ccc411c8f3d9232e783d5c06cc5
Problem description & steps to reproduce
Run the following command:
llama-cli -m exaone3.5\:2.4b-instruct-iq4_XS.gguf -fa -ctk q8_0 -ctv q8_0
Observe the failed assertion:
llama.cpp/src/llama-context.cpp:202: GGML_ASSERT(hparams.n_embd_head_k % ggml_blck_size(type_k) == 0) failed
This is probably related to ollama/ollama#9605. The failure occurs on both the ROCm and CPU backends.
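If I read the assertion right, the K head dimension must be a whole number of ggml quantization blocks, and 80 is not a multiple of 32. A minimal sketch of the arithmetic (the 80 is n_embd_head_k from the log below; 32 is q8_0's block size QK8_0, which is what ggml_blck_size(GGML_TYPE_Q8_0) returns):

#include <cstdio>

int main() {
    const int n_embd_head_k = 80; // EXAONE head dim, see print_info in the log below
    const int blck_size     = 32; // ggml_blck_size(GGML_TYPE_Q8_0) == QK8_0
    // llama-context.cpp:202 asserts n_embd_head_k % ggml_blck_size(type_k) == 0
    printf("%d %% %d = %d\n", n_embd_head_k, blck_size,
           n_embd_head_k % blck_size); // prints 16, so the assert fires
    return 0;
}

If that is right, any model whose head dimension is not a multiple of 32 should hit the same assert with q8_0 (or q4_0, etc.) cache types.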
First Bad Commit
No response
Relevant log output
$ llama-cli -m exaone3.5\:2.4b-instruct-iq4_XS.gguf -fa -ctk q8_0 -ctv q8_0
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 ROCm devices:
Device 0: AMD Radeon RX 7900 XT, gfx1100 (0x1100), VMM: no, Wave Size: 32
Device 1: AMD Radeon Graphics, gfx1100 (0x1100), VMM: no, Wave Size: 32
build: 5191 (295354ea) with cc (GCC) 14.2.1 20250207 for x86_64-pc-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon RX 7900 XT) - 20346 MiB free
llama_model_load_from_file_impl: using device ROCm1 (AMD Radeon Graphics) - 15569 MiB free
llama_model_loader: loaded meta data with 36 key-value pairs and 274 tensors from exaone3.5:2.4b-instruct-iq4_XS.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = exaone
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = EXAONE 3.5 2.4B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = EXAONE-3.5
llama_model_loader: - kv 5: general.size_label str = 2.4B
llama_model_loader: - kv 6: general.license str = other
llama_model_loader: - kv 7: general.license.name str = exaone
llama_model_loader: - kv 8: general.license.link str = LICENSE
llama_model_loader: - kv 9: general.tags arr[str,4] = ["lg-ai", "exaone", "exaone-3.5", "te...
llama_model_loader: - kv 10: general.languages arr[str,2] = ["en", "ko"]
llama_model_loader: - kv 11: exaone.embedding_length u32 = 2560
llama_model_loader: - kv 12: exaone.attention.head_count u32 = 32
llama_model_loader: - kv 13: exaone.attention.head_count_kv u32 = 8
llama_model_loader: - kv 14: exaone.context_length u32 = 32768
llama_model_loader: - kv 15: exaone.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 16: exaone.feed_forward_length u32 = 7168
llama_model_loader: - kv 17: exaone.block_count u32 = 30
llama_model_loader: - kv 18: general.file_type u32 = 30
llama_model_loader: - kv 19: exaone.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 20: exaone.rope.dimension_count u32 = 80
llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 22: tokenizer.ggml.pre str = exaone
llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,102400] = ["[PAD]", "[BOS]", "[EOS]", "[UNK]", ...
llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,102400] = [3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, ...
llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,101782] = ["t h", "Ġ a", "Ġ í", "i n", "Ġ t...
llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 361
llama_model_loader: - kv 28: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 29: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 30: tokenizer.chat_template str = {% for message in messages %}{% if lo...
llama_model_loader: - kv 31: general.quantization_version u32 = 2
llama_model_loader: - kv 32: quantize.imatrix.file str = /models_out/EXAONE-3.5-2.4B-Instruct-...
llama_model_loader: - kv 33: quantize.imatrix.dataset str = /training_dir/calibration_datav3.txt
llama_model_loader: - kv 34: quantize.imatrix.entries_count i32 = 210
llama_model_loader: - kv 35: quantize.imatrix.chunks_count i32 = 137
llama_model_loader: - type f32: 62 tensors
llama_model_loader: - type q5_K: 30 tensors
llama_model_loader: - type q6_K: 1 tensors
llama_model_loader: - type iq4_xs: 181 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = IQ4_XS - 4.25 bpw
print_info: file size = 1.40 GiB (4.50 BPW)
load: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special tokens cache size = 362
load: token to piece cache size = 0.6622 MB
print_info: arch = exaone
print_info: vocab_only = 0
print_info: n_ctx_train = 32768
print_info: n_embd = 2560
print_info: n_layer = 30
print_info: n_head = 32
print_info: n_head_kv = 8
print_info: n_rot = 80
print_info: n_swa = 0
print_info: n_swa_pattern = 1
print_info: n_embd_head_k = 80
print_info: n_embd_head_v = 80
print_info: n_gqa = 4
print_info: n_embd_k_gqa = 640
print_info: n_embd_v_gqa = 640
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 7168
print_info: n_expert = 0
print_info: n_expert_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 32768
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 0
print_info: ssm_d_inner = 0
print_info: ssm_d_state = 0
print_info: ssm_dt_rank = 0
print_info: ssm_dt_b_c_rms = 0
print_info: model type = ?B
print_info: model params = 2.67 B
print_info: general.name = EXAONE 3.5 2.4B Instruct
print_info: vocab type = BPE
print_info: n_vocab = 102400
print_info: n_merges = 101782
print_info: BOS token = 1 '[BOS]'
print_info: EOS token = 361 '[|endofturn|]'
print_info: EOT token = 42 '<|endoftext|>'
print_info: UNK token = 3 '[UNK]'
print_info: PAD token = 0 '[PAD]'
print_info: LF token = 560 'Ċ'
print_info: EOG token = 42 '<|endoftext|>'
print_info: EOG token = 361 '[|endofturn|]'
print_info: max token length = 48
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 0 repeating layers to GPU
load_tensors: offloaded 0/31 layers to GPU
load_tensors: CPU_Mapped model buffer size = 1431.55 MiB
...............................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 4096
llama_context: n_ctx_per_seq = 4096
llama_context: n_batch = 2048
llama_context: n_ubatch = 512
llama_context: causal_attn = 1
llama_context: flash_attn = 1
llama_context: freq_base = 1000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_context: CPU output buffer size = 0.39 MiB
llama.cpp/src/llama-context.cpp:202: GGML_ASSERT(hparams.n_embd_head_k % ggml_blck_size(type_k) == 0) failed
ptrace: Operation not permitted.
No stack.
The program is not being run.
Aborted (core dumped)
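Judging from the assert condition alone, a cache type whose block size divides 80 should not trip it, e.g. the default f16 (block size 1). This is only a guess from the assert logic, not a verified fix:

$ llama-cli -m exaone3.5\:2.4b-instruct-iq4_XS.gguf -fa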