Converting kfkas Llama-2-ko-7b-Chat to GGUF fails · Issue #2865 · ggml-org/llama.cpp
Converting kfkas Llama-2-ko-7b-Chat to GGUF fails #2865
Closed
@kurugai

Description

Hi. I'm trying to convert the 'kfkas/Llama-2-ko-7b-Chat' model downloaded from Hugging Face into a GGUF file on Windows 11.
I ran the conversion with the command below.

C:\AI\llama.cpp>python convert-llama-hf-to-gguf.py .\models\kfkas_Llama-2-ko-7b-Chat 1

The conversion appeared to succeed, but when I tried to run the resulting model, it failed to load.

Could you review this and let me know what I should do? The full output of the commands I ran is included further below.

I know you're busy, but I'd appreciate it if you could take a look.
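
As far as I can tell, the version of convert-llama-hf-to-gguf.py I used reads the vocabulary from a SentencePiece tokenizer.model inside the model folder, so a quick first check is to see which tokenizer files the downloaded directory actually contains. A minimal sketch (the directory path is just the one I used above; the file list is illustrative, not exhaustive):

# check_tokenizer_files.py -- minimal sketch; adjust the path to your model folder
from pathlib import Path

model_dir = Path(r".\models\kfkas_Llama-2-ko-7b-Chat")

# Files that Hugging Face repos commonly ship and that converters may look for.
candidates = [
    "tokenizer.model",        # SentencePiece model
    "tokenizer.json",         # Hugging Face fast-tokenizer file
    "tokenizer_config.json",
    "special_tokens_map.json",
    "config.json",
]

for name in candidates:
    print(f"{name:25s}", "found" if (model_dir / name).is_file() else "MISSING")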


C:\AI\llama.cpp>pip install gguf
Defaulting to user installation because normal site-packages is not writeable
Collecting gguf
Obtaining dependency information for gguf from https://files.pythonhosted.org/packages/bb/16/83a1cb95d9ec85bc316a1986481325c257a4a9a024e12bace801898db14e/gguf-0.2.1-py3-none-any.whl.metadata
Downloading gguf-0.2.1-py3-none-any.whl.metadata (1.9 kB)
Requirement already satisfied: numpy>=1.17 in c:\users\hwyoo\appdata\roaming\python\python310\site-packages (from gguf) (1.23.5)
Downloading gguf-0.2.1-py3-none-any.whl (8.1 kB)
Installing collected packages: gguf
Successfully installed gguf-0.2.1

C:\AI\llama.cpp>python convert-llama-hf-to-gguf.py .\models\kfkas_Llama-2-ko-7b-Chat 1
gguf: loading model kfkas_Llama-2-ko-7b-Chat
gguf: found 2 model parts
gguf: get model metadata
gguf: get tokenizer metadata
gguf: get special token ids
gguf: get tensor metadata
gguf: loading model part 'pytorch_model-00001-of-00002.bin'
token_embd.weight, n_dims = 2, torch.float16 --> float16
blk.0.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.0.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.0.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.0.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.0.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.0.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.0.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.0.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.0.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.1.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.1.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.1.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.1.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.1.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.1.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.1.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.1.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.1.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.2.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.2.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.2.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.2.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.2.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.2.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.2.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.2.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.2.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.3.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.3.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.3.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.3.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.3.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.3.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.3.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.3.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.3.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.4.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.4.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.4.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.4.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.4.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.4.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.4.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.4.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.4.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.5.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.5.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.5.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.5.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.5.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.5.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.5.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.5.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.5.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.6.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.6.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.6.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.6.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.6.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.6.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.6.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.6.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.6.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.7.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.7.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.7.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.7.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.7.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.7.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.7.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.7.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.7.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.8.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.8.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.8.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.8.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.8.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.8.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.8.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.8.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.8.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.9.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.9.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.9.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.9.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.9.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.9.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.9.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.9.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.9.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.10.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.10.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.10.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.10.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.10.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.10.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.10.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.10.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.10.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.11.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.11.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.11.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.11.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.11.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.11.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.11.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.11.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.11.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.12.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.12.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.12.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.12.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.12.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.12.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.12.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.12.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.12.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.13.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.13.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.13.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.13.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.13.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.13.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.13.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.13.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.13.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.14.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.14.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.14.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.14.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.14.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.14.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.14.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.14.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.14.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.15.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.15.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.15.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.15.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.15.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.15.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.15.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.15.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.15.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.16.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.16.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.16.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.16.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.16.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.16.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.16.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.16.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.16.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.17.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.17.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.17.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.17.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.17.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.17.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.17.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.17.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.17.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.18.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.18.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.18.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.18.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.18.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.18.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.18.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.18.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.18.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.19.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.19.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.19.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.19.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.19.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.19.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.19.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.19.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.19.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.20.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.20.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.20.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.20.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.20.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.20.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.20.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.20.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.20.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.21.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.21.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.21.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.21.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.21.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.21.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.21.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.21.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.21.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.22.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.22.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.22.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.22.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.22.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.22.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.22.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.22.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.22.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.23.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.23.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.23.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.23.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.23.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
gguf: loading model part 'pytorch_model-00002-of-00002.bin'
blk.23.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.23.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.23.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.23.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.24.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.24.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.24.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.24.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.24.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.24.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.24.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.24.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.24.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.25.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.25.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.25.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.25.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.25.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.25.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.25.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.25.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.25.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.26.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.26.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.26.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.26.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.26.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.26.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.26.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.26.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.26.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.27.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.27.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.27.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.27.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.27.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.27.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.27.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.27.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.27.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.28.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.28.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.28.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.28.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.28.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.28.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.28.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.28.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.28.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.29.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.29.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.29.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.29.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.29.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.29.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.29.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.29.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.29.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.30.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.30.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.30.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.30.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.30.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.30.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.30.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.30.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.30.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.31.attn_q.weight, n_dims = 2, torch.float16 --> float16
blk.31.attn_k.weight, n_dims = 2, torch.float16 --> float16
blk.31.attn_v.weight, n_dims = 2, torch.float16 --> float16
blk.31.attn_output.weight, n_dims = 2, torch.float16 --> float16
blk.31.ffn_gate.weight, n_dims = 2, torch.float16 --> float16
blk.31.ffn_down.weight, n_dims = 2, torch.float16 --> float16
blk.31.ffn_up.weight, n_dims = 2, torch.float16 --> float16
blk.31.attn_norm.weight, n_dims = 1, torch.float16 --> float32
blk.31.ffn_norm.weight, n_dims = 1, torch.float16 --> float32
output_norm.weight, n_dims = 1, torch.float16 --> float32
output.weight, n_dims = 2, torch.float16 --> float16
gguf: write header
gguf: write metadata
gguf: write tensors
gguf: model successfully exported to '.\models\kfkas_Llama-2-ko-7b-Chat/ggml-model-f16.gguf'

C:\AI\llama.cpp>main
main: build = 1100 (dd0dc36)
main: seed = 1693289567
llama_model_loader: loaded meta data with 15 key-value pairs and 291 tensors from models/7B/ggml-model-f16.gguf (version GGUF V1 (latest))
llama_model_loader: - tensor 0: token_embd.weight f16 [ 4096, 46336, 1, 1 ]
llama_model_loader: - tensor 1: blk.0.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 2: blk.0.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 3: blk.0.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 4: blk.0.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 5: blk.0.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 6: blk.0.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 7: blk.0.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 8: blk.0.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 9: blk.0.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 10: blk.1.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 11: blk.1.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 12: blk.1.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 13: blk.1.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 14: blk.1.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 15: blk.1.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 16: blk.1.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 17: blk.1.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 18: blk.1.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 19: blk.2.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 20: blk.2.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 21: blk.2.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 22: blk.2.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 23: blk.2.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 24: blk.2.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 25: blk.2.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 26: blk.2.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 27: blk.2.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 28: blk.3.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 29: blk.3.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 30: blk.3.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 31: blk.3.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 32: blk.3.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 33: blk.3.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 34: blk.3.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 35: blk.3.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 36: blk.3.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 37: blk.4.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 38: blk.4.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 39: blk.4.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 40: blk.4.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 41: blk.4.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 42: blk.4.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 43: blk.4.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 44: blk.4.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 45: blk.4.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 46: blk.5.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 47: blk.5.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 48: blk.5.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 49: blk.5.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 50: blk.5.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 51: blk.5.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 52: blk.5.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 53: blk.5.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 54: blk.5.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 55: blk.6.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 56: blk.6.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 57: blk.6.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 58: blk.6.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 59: blk.6.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 60: blk.6.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 61: blk.6.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 62: blk.6.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 63: blk.6.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 64: blk.7.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 65: blk.7.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 66: blk.7.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 67: blk.7.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 68: blk.7.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 69: blk.7.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 70: blk.7.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 71: blk.7.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 72: blk.7.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 73: blk.8.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 74: blk.8.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 75: blk.8.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 76: blk.8.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 77: blk.8.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 78: blk.8.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 79: blk.8.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 80: blk.8.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 81: blk.8.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 82: blk.9.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 83: blk.9.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 84: blk.9.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 85: blk.9.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 86: blk.9.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 87: blk.9.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 88: blk.9.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 89: blk.9.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 90: blk.9.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 91: blk.10.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 92: blk.10.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 93: blk.10.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 94: blk.10.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 95: blk.10.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 96: blk.10.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 97: blk.10.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 98: blk.10.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 99: blk.10.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 100: blk.11.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 101: blk.11.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 102: blk.11.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 103: blk.11.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 104: blk.11.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 105: blk.11.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 106: blk.11.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 107: blk.11.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 108: blk.11.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 109: blk.12.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 110: blk.12.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 111: blk.12.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 112: blk.12.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 113: blk.12.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 114: blk.12.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 115: blk.12.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 116: blk.12.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 117: blk.12.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 118: blk.13.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 119: blk.13.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 120: blk.13.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 121: blk.13.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 122: blk.13.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 123: blk.13.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 124: blk.13.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 125: blk.13.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 126: blk.13.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 127: blk.14.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 128: blk.14.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 129: blk.14.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 130: blk.14.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 131: blk.14.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 132: blk.14.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 133: blk.14.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 134: blk.14.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 135: blk.14.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 136: blk.15.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 137: blk.15.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 138: blk.15.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 139: blk.15.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 140: blk.15.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 141: blk.15.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 142: blk.15.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 143: blk.15.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 144: blk.15.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 145: blk.16.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 146: blk.16.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 147: blk.16.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 148: blk.16.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 149: blk.16.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 150: blk.16.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 151: blk.16.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 152: blk.16.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 153: blk.16.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 154: blk.17.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 155: blk.17.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 156: blk.17.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 157: blk.17.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 158: blk.17.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 159: blk.17.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 160: blk.17.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 161: blk.17.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 162: blk.17.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 163: blk.18.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 164: blk.18.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 165: blk.18.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 166: blk.18.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 167: blk.18.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 168: blk.18.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 169: blk.18.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 170: blk.18.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 171: blk.18.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 172: blk.19.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 173: blk.19.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 174: blk.19.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 175: blk.19.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 176: blk.19.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 177: blk.19.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 178: blk.19.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 179: blk.19.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 180: blk.19.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 181: blk.20.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 182: blk.20.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 183: blk.20.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 184: blk.20.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 185: blk.20.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 186: blk.20.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 187: blk.20.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 188: blk.20.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 189: blk.20.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 190: blk.21.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 191: blk.21.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 192: blk.21.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 193: blk.21.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 194: blk.21.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 195: blk.21.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 196: blk.21.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 197: blk.21.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 198: blk.21.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 199: blk.22.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 200: blk.22.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 201: blk.22.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 202: blk.22.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 203: blk.22.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 204: blk.22.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 205: blk.22.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 206: blk.22.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 207: blk.22.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 208: blk.23.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 209: blk.23.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 210: blk.23.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 211: blk.23.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 212: blk.23.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 213: blk.23.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 214: blk.23.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 215: blk.23.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 216: blk.23.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 217: blk.24.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 218: blk.24.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 219: blk.24.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 220: blk.24.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 221: blk.24.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 222: blk.24.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 223: blk.24.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 224: blk.24.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 225: blk.24.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 226: blk.25.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 227: blk.25.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 228: blk.25.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 229: blk.25.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 230: blk.25.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 231: blk.25.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 232: blk.25.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 233: blk.25.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 234: blk.25.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 235: blk.26.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 236: blk.26.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 237: blk.26.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 238: blk.26.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 239: blk.26.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 240: blk.26.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 241: blk.26.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 242: blk.26.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 243: blk.26.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 244: blk.27.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 245: blk.27.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 246: blk.27.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 247: blk.27.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 248: blk.27.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 249: blk.27.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 250: blk.27.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 251: blk.27.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 252: blk.27.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 253: blk.28.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 254: blk.28.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 255: blk.28.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 256: blk.28.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 257: blk.28.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 258: blk.28.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 259: blk.28.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 260: blk.28.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 261: blk.28.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 262: blk.29.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 263: blk.29.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 264: blk.29.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 265: blk.29.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 266: blk.29.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 267: blk.29.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 268: blk.29.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 269: blk.29.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 270: blk.29.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 271: blk.30.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 272: blk.30.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 273: blk.30.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 274: blk.30.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 275: blk.30.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 276: blk.30.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 277: blk.30.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 278: blk.30.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 279: blk.30.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 280: blk.31.attn_q.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 281: blk.31.attn_k.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 282: blk.31.attn_v.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 283: blk.31.attn_output.weight f16 [ 4096, 4096, 1, 1 ]
llama_model_loader: - tensor 284: blk.31.ffn_gate.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 285: blk.31.ffn_down.weight f16 [ 11008, 4096, 1, 1 ]
llama_model_loader: - tensor 286: blk.31.ffn_up.weight f16 [ 4096, 11008, 1, 1 ]
llama_model_loader: - tensor 287: blk.31.attn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 288: blk.31.ffn_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 289: output_norm.weight f32 [ 4096, 1, 1, 1 ]
llama_model_loader: - tensor 290: output.weight f16 [ 4096, 46336, 1, 1 ]
llama_model_loader: - kv 0: general.architecture str
llama_model_loader: - kv 1: general.name str
llama_model_loader: - kv 2: general.source.hugginface.repository str
llama_model_loader: - kv 3: llama.tensor_data_layout str
llama_model_loader: - kv 4: llama.context_length u32
llama_model_loader: - kv 5: llama.embedding_length u32
llama_model_loader: - kv 6: llama.block_count u32
llama_model_loader: - kv 7: llama.feed_forward_length u32
llama_model_loader: - kv 8: llama.rope.dimension_count u32
llama_model_loader: - kv 9: llama.attention.head_count u32
llama_model_loader: - kv 10: llama.attention.head_count_kv u32
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32
llama_model_loader: - kv 14: tokenizer.ggml.unknown_token_id u32
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 226 tensors
error loading model: key not found in model: tokenizer.ggml.tokens
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models/7B/ggml-model-f16.gguf'
main: error: unable to load model
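
The failure at the end ("key not found in model: tokenizer.ggml.tokens") means the generated file apparently contains no vocabulary entries at all. One minimal way to confirm which metadata keys actually made it into the GGUF file is to dump them with the gguf Python package; note that this sketch assumes a newer gguf release than the 0.2.1 installed above, since GGUFReader was only added in later versions:

# inspect_gguf_keys.py -- minimal sketch, assumes a gguf release that provides GGUFReader
from gguf import GGUFReader

gguf_path = r".\models\kfkas_Llama-2-ko-7b-Chat\ggml-model-f16.gguf"
reader = GGUFReader(gguf_path)

# List every key-value field written to the GGUF header.
for name in reader.fields:
    print(name)

# Given the loader error above, this is expected to print False.
print("tokenizer.ggml.tokens present:", "tokenizer.ggml.tokens" in reader.fields)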
