Eval bug: llama_model_load: error loading model: error loading model hyperparameters: key not found in model: llama.context_length · Issue #12857 · ggml-org/llama.cpp

Closed
www222fff opened this issue Apr 10, 2025 · 1 comment


Name and Version

python3 convert_lora_to_gguf.py /models/CodeLlama-13b-finetuned --base /models/CodeLlama-13b-Instruct-hf --outfile /models/CodeLlama-13b.gguf --outtype f32

INFO:gguf.gguf_writer:Writing the following files:
INFO:gguf.gguf_writer:/models/CodeLlama-13b.gguf: n_tensors = 320, total_size = 104.9M
Writing: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 105M/105M [00:00<00:00, 210Mbyte/s]
INFO:lora-to-gguf:Model successfully exported to /models/CodeLlama-13b.gguf

root@7d293a7fbb1c:/app/llama.cpp# ./build/bin/llama-cli -m /models/CodeLlama-13b.gguf
build: 5092 (d3bd719) with cc (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_loader: loaded meta data with 12 key-value pairs and 320 tensors from /models/CodeLlama-13b.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = adapter
llama_model_loader: - kv 2: adapter.type str = lora
llama_model_loader: - kv 3: general.name str = CodeLlama 13b Finetuned
llama_model_loader: - kv 4: general.finetune str = finetuned
llama_model_loader: - kv 5: general.basename str = CodeLlama
llama_model_loader: - kv 6: general.size_label str = 13B
llama_model_loader: - kv 7: general.base_model.count u32 = 1
llama_model_loader: - kv 8: general.base_model.0.name str = models/CodeLlama 13b Instruct Hf
llama_model_loader: - kv 9: general.base_model.0.repo_url str = https://huggingface.co//models/CodeLl...
llama_model_loader: - kv 10: adapter.lora.alpha f32 = 32.000000
llama_model_loader: - kv 11: general.quantization_version u32 = 2
llama_model_loader: - type f32: 320 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = all F32 (guessed)
print_info: file size = 100.00 MiB (32.00 BPW)
llama_model_load: error loading model: error loading model hyperparameters: key not found in model: llama.context_length
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model '/models/CodeLlama-13b.gguf'
main: error: unable to load model
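The failure is consistent with the metadata dump above: the file carries `general.type = adapter` and `adapter.type = lora`, i.e. it is a LoRA adapter GGUF rather than a full model, so full-model hyperparameter keys such as `llama.context_length` are simply absent. As a rough illustration (not llama.cpp's loader, just a stdlib sketch of the GGUF v3 header layout), the string/uint32 metadata near the start of the file can be inspected directly:

```python
import struct

# GGUF metadata value-type codes (subset of the GGUF spec).
GGUF_TYPE_UINT32 = 4
GGUF_TYPE_STRING = 8

def read_string(buf, off):
    """Read a GGUF string: uint64 length followed by UTF-8 bytes."""
    (n,) = struct.unpack_from("<Q", buf, off)
    off += 8
    return buf[off:off + n].decode("utf-8"), off + n

def read_gguf_kv(buf):
    """Parse the GGUF header plus string/uint32 metadata KV pairs.

    Stops at the first value type it does not handle, which is enough
    to reach keys like general.type that sit near the start of the file.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", buf, 0)
    assert magic == b"GGUF", "not a GGUF file"
    off = 24  # 4 (magic) + 4 (version) + 8 (tensor count) + 8 (kv count)
    kv = {}
    for _ in range(n_kv):
        key, off = read_string(buf, off)
        (vtype,) = struct.unpack_from("<I", buf, off)
        off += 4
        if vtype == GGUF_TYPE_STRING:
            kv[key], off = read_string(buf, off)
        elif vtype == GGUF_TYPE_UINT32:
            (kv[key],) = struct.unpack_from("<I", buf, off)
            off += 4
        else:
            break  # unhandled value type; stop early
    return version, kv
```

Running a parser like this over `/models/CodeLlama-13b.gguf` would presumably show the same twelve keys as the dump above, with `general.type = adapter` and no `llama.*` hyperparameter keys at all.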

Operating systems

Linux

GGML backends

CPU

Hardware

cpu node cloud

Models

CodeLlama-13b-Instruct-hf

Problem description & steps to reproduce

What is the issue here?
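For context: `convert_lora_to_gguf.py` emits an adapter-only GGUF (note the 104.9 MB size in the log, far too small for a 13B model), which `llama-cli` cannot load by itself via `-m`. The usual workflow, sketched here with an assumed output path for the converted base model, is to convert the base model separately and pass the adapter through `--lora`:

```shell
# Convert the base HF model to a full GGUF first
# (the output path /models/CodeLlama-13b-Instruct.gguf is an assumption).
python3 convert_hf_to_gguf.py /models/CodeLlama-13b-Instruct-hf \
    --outfile /models/CodeLlama-13b-Instruct.gguf --outtype f16

# Load the base model and apply the LoRA adapter on top of it.
./build/bin/llama-cli -m /models/CodeLlama-13b-Instruct.gguf \
    --lora /models/CodeLlama-13b.gguf
```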

First Bad Commit

No response

Relevant log output

NA
This issue was closed because it has been inactive for 14 days since being marked as stale.
