
docker model pull

Description: Pull a model from Docker Hub or Hugging Face to your local environment
Usage: docker model pull MODEL

Description

Pull a model to your local environment. Downloaded models also appear in the Docker Desktop Dashboard.

Options

Option                         Default   Description
--ignore-runtime-memory-check            Do not block the pull if the model's estimated runtime memory exceeds system resources.
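If a model's estimated runtime memory exceeds your system's resources, the pull is blocked by default; the flag above overrides that check. A minimal sketch, following the usual Docker CLI convention of placing flags before the positional argument (model name taken from the example below):

```shell
# Pull even if the model's estimated runtime memory exceeds system resources
docker model pull --ignore-runtime-memory-check ai/smollm2
```

Use this with care: a model that exceeds available memory may still fail to run after it is pulled.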

Examples

Pulling a model from Docker Hub

docker model pull ai/smollm2
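As noted above, pulled models also appear in the Docker Desktop Dashboard. Assuming a companion `docker model list` subcommand is available (an assumption, not documented on this page), you can also confirm the pull from the terminal:

```shell
# Pull the model, then check the local model store
docker model pull ai/smollm2
docker model list   # assumes this subcommand exists; the pulled model should appear
```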

Pulling from HuggingFace

You can pull GGUF models directly from Hugging Face.

Note about quantization: If no tag is specified, the command tries to pull the Q4_K_M version of the model. If Q4_K_M doesn't exist, the command pulls the first GGUF file listed in the Files view of the model on Hugging Face. To specify a quantization, provide it as a tag, for example: docker model pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_S

docker model pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF