Transform a fresh NixOS installation into a fully configured AI development powerhouse in 20-40 minutes (or 60-120 minutes if building from source).
curl -fsSL https://raw.githubusercontent.com/MasterofNull/NixOS-Dev-Quick-Deploy/main/nixos-quick-deploy.sh | bash

Or clone and run locally:
git clone https://github.com/MasterofNull/NixOS-Dev-Quick-Deploy.git ~/NixOS-Dev-Quick-Deploy
cd ~/NixOS-Dev-Quick-Deploy
chmod +x nixos-quick-deploy.sh
./nixos-quick-deploy.sh

That's it! Answer a few simple questions (including choosing between binary caches, remote builders/private Cachix, or local source builds), wait 20-120 minutes depending on your choice, reboot, and you're done.
The AI stack is now a first-class, public component of this repository!
# Deploy NixOS + complete AI development environment (single path)
./nixos-quick-deploy.sh --with-k8s-stack

This single command gives you:
- ✅ AIDB MCP Server - PostgreSQL + TimescaleDB + Qdrant vector database
- ✅ llama.cpp vLLM - Local model inference (Qwen, DeepSeek, Phi, CodeLlama)
- ✅ 29 Agent Skills - Specialized AI agents for code, deployment, testing, design
- ✅ MCP Servers - Model Context Protocol servers for AIDB, NixOS, GitHub
- ✅ Embeddings Service - Dedicated sentence-transformers API for fast RAG
- ✅ Hybrid Coordinator - Local/remote routing with continuous learning
- ✅ Nginx TLS Gateway - HTTPS termination on https://localhost:8443
- ✅ Shared Data - Persistent data that survives reinstalls (~/.local/share/nixos-ai-stack)
See ai-stack/README.md and /docs/AI-STACK-FULL-INTEGRATION.md for complete documentation.
- COSMIC Desktop - Modern, fast desktop environment from System76
- Hyprland Wayland Session - Latest Hyprland compositor alongside COSMIC for tiling workflows
- Performance Kernel Track - Prefers the tuned linuxPackages_6_17 build, then falls back to TKG, XanMod, Liquorix, Zen, and finally linuxPackages_latest
- 800+ Packages - Development tools, CLI utilities, and applications
- Nix Flakes - Enabled and configured for reproducible builds
- Podman - Rootless container runtime for local AI services
- Flatpak - Sandboxed desktop applications with profile-aware provisioning and incremental updates
- Home Manager - Declarative user environment configuration
- ZSH + Powerlevel10k - Beautiful, fast terminal with auto-configuration
Fully Integrated Components (v6.0.0):
| Component | Location | Status | Purpose |
|---|---|---|---|
| AIDB MCP Server | ai-stack/mcp-servers/aidb/ | ✅ Active | PostgreSQL + TimescaleDB + Qdrant vector DB + FastAPI MCP server |
| llama.cpp vLLM | ai-stack/compose/ | ✅ Active | Local OpenAI-compatible inference (Qwen, DeepSeek, Phi, CodeLlama) |
| 29 Agent Skills | ai-stack/agents/skills/ | ✅ Active | nixos-deployment, webapp-testing, code-review, canvas-design, and more |
| Embeddings Service | ai-stack/mcp-servers/ | ✅ Active | Sentence-transformers API for embeddings |
| Hybrid Coordinator | ai-stack/mcp-servers/ | ✅ Active | Local/remote routing + continuous learning |
| NixOS Docs MCP | ai-stack/mcp-servers/ | ✅ Active | NixOS/Nix documentation search API |
| Nginx TLS Gateway | ai-stack/compose/ | ✅ Active | HTTPS termination on https://localhost:8443 |
| MCP Servers | ai-stack/mcp-servers/ | ✅ Active | Model Context Protocol servers for AIDB, NixOS, GitHub |
| Model Registry | ai-stack/models/registry.json | ✅ Active | Model catalog with metadata, VRAM, speed, quality scores |
| Vector Database | PostgreSQL + Qdrant | ✅ Active | Semantic search and document embeddings |
| Redis Cache | Redis + Redis Insight | ✅ Active | High-performance caching layer |
AI Development Tools:
| Tool | Integration | Purpose |
|---|---|---|
| Claude Code | VSCodium extension + CLI wrapper | AI pair programming inside VSCodium |
| Cursor | Flatpak + launcher | AI-assisted IDE with GPT-4/Claude |
| Continue | VSCodium extension | In-editor AI completions |
| Codeium | VSCodium extension | Free AI autocomplete |
| GPT CLI | Command-line tool | Query OpenAI-compatible endpoints (local llama.cpp or remote) |
| Aider | CLI code assistant | AI pair programming from terminal |
| LM Studio | Flatpak app | Desktop LLM manager |
Quick Start:
./nixos-quick-deploy.sh --with-k8s-stack      # Deploy everything
kubectl get pods -n ai-stack                  # Check status
python3 ai-stack/tests/test_hospital_e2e.py   # Run the end-to-end health suite

- TLS by default: External APIs are exposed via nginx at https://localhost:8443 (self-signed cert).
- API keys required: Core APIs enforce X-API-Key from ai-stack/compose/secrets/stack_api_key (see the example below).
- Local-only tools: Open WebUI, MindsDB, and metrics UIs bind to 127.0.0.1.
- Privileged self-heal: The health monitor runs only under the self-heal profile (opt-in).
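As a quick sanity check after deployment, you can call the gateway with the stack API key. This is a minimal sketch, assuming the AIDB routes are served under /aidb behind the TLS gateway; the exact health path may differ on your install:

```bash
# Read the generated key and call the gateway (self-signed cert, hence -k).
API_KEY="$(cat ai-stack/compose/secrets/stack_api_key)"
curl -k -H "X-API-Key: ${API_KEY}" https://localhost:8443/aidb/health
```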
Languages & Runtimes:
- Python 3.13 with 60+ AI/ML packages (PyTorch, TensorFlow, LangChain, etc.) and uv installed as a drop-in replacement for pip (aliases for pip/pip3 point to uv pip). Set PYTHON_PREFER_PY314=1 before running the deployer to trial Python 3.14 once the compatibility mask is cleared. See the sketch after this list for the uv workflow.
- Node.js 22, Go, Rust, Ruby
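Because pip/pip3 are aliased to uv pip, day-to-day Python package work goes through uv. A minimal sketch (the package names are only examples):

```bash
# These two lines are equivalent on this setup, since pip aliases to `uv pip`:
pip install httpx
uv pip install httpx

# uv also creates lightweight virtual environments:
uv venv .venv && source .venv/bin/activate
uv pip install -r requirements.txt
```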
AI/ML Python Packages (Built-in):
- Deep Learning: PyTorch, TensorFlow, Transformers, Diffusers
- LLM Frameworks: LangChain, LlamaIndex, OpenAI, Anthropic clients
- Vector DBs: ChromaDB, Qdrant client, FAISS, Sentence Transformers
- Data Science: Pandas, Polars, Dask, Jupyter Lab, Matplotlib
- Code Quality: Black, Ruff, Mypy, Pylint
- Agent Ops & MCP: LiteLLM, Tiktoken, FastAPI, Uvicorn, HTTPX, Pydantic, Typer, Rich, SQLAlchemy, DuckDB
Editors & IDEs:
- VSCodium Insiders (VS Code without telemetry, rolling build)
- Neovim (modern Vim)
- Cursor (AI-powered editor)
Version Control:
- Git, Git LFS, Lazygit
- GitLens, Git Graph (VSCodium extensions)
Modern CLI Tools:
- ripgrep, fd, fzf - Fast search tools
- bat, eza - Enhanced cat/ls with syntax highlighting
- jq, yq - JSON/YAML processors
- htop, btop, vendor-specific GPU monitors (e.g., nvtop when available) - System and GPU monitoring dashboards (AMD-specific tools auto-installed when detected)
- Glances, Grafana, Prometheus, Loki, Promtail, Vector, Cockpit - Full observability and logging stack
- gnome-disk-utility, parted - Disk formatting and partitioning
- lazygit - Terminal UI for Git
- pnpm, biome, pixi, ast-grep, atuin, zellij, distrobox - Faster package managers, JS formatter/linter, code-aware search, shell history sync, terminal multiplexing, and mutable container convenience
- nix-fast-build, lorri, cachix - Nix build acceleration and cache tooling
- sccache, cargo-binstall, gofumpt, staticcheck - Faster Rust/Go builds and linters
Nix Ecosystem:
- nix-tree - Visualize dependency trees
- nixpkgs-fmt, alejandra - Nix code formatters
- statix - Nix linter
- nix-index - File search in nixpkgs
Container Tools:
- K3s, kubectl, containerd
AI Services (Systemd):
- Qdrant (vector database, enabled by default)
- Hugging Face TGI (LLM inference server, manual enable)
- Jupyter Lab (web-based notebooks, user service)
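To check or toggle these services, the usual systemd commands apply. A minimal sketch; the unit names below are assumptions, so confirm them with the listing first:

```bash
# Unit names vary by template version -- list them first, then inspect what you find.
systemctl list-units --all | grep -iE 'qdrant|tgi|text-generation'
systemctl --user list-units --all | grep -i jupyter

# Example (assumed unit names -- substitute what the listing shows):
systemctl status qdrant.service
systemctl --user status jupyter-lab.service
```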
Pick the profile that matches your workflow during Phase 6:
- Core – Balanced desktop with browsers, media tools, and developer essentials.
- AI Workstation – Core profile plus Postman, DBeaver, VS Code, Bitwarden, and other data tooling.
- Minimal – Lean recovery environment with Firefox, Obsidian, and Flatseal.
The installer records your selection in $STATE_DIR/preferences/flatpak-profile.env and only applies deltas on future runs, dramatically cutting repeat deployment time. You can switch profiles any timeβstate is cached in $STATE_DIR/preferences/flatpak-profile-state.env so already-installed apps are preserved when possible.
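To see what the deployer has recorded, just read the preference files. A minimal sketch, using $STATE_DIR as a stand-in for the deployer's state directory on your machine:

```bash
cat "$STATE_DIR/preferences/flatpak-profile.env"        # the profile you selected
cat "$STATE_DIR/preferences/flatpak-profile-state.env"  # cached state of installed apps
```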
Need more? Over 50 additional apps remain available on Flathub, and the provisioning service respects manual changesβonly diverging packages in the active profile are synced.
cd ~/NixOS-Dev-Quick-Deploy
./nixos-quick-deploy.sh

- GitHub username – For git config
- GitHub email – For git config
- Editor preference – vim/neovim/vscodium (choose 1-3)
- Replace config? – Press Enter (yes)
After the initial survey, Phase 1 now asks how you want to accelerate builds:
- Binary caches (default) – Fastest path, uses NixOS/nix-community/CUDA caches.
- Build locally – Compile everything on the target host for maximal control.
- Remote builders or private Cachix – Layer SSH build farm(s) and/or authenticated Cachix caches on top of the binary caches option.

If you choose option 3, be ready to paste builder strings (e.g., ssh://nix@builder.example.com x86_64-linux - 4 1) and any Cachix cache names/keys. Secrets are stored under $STATE_DIR/preferences/remote-builders.env for reuse later; the sketch below shows the equivalent declarative form of a builder string.
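The builder string follows Nix's standard machine-spec fields (URI, platform, SSH key, max jobs, speed factor). For orientation, a roughly equivalent declarative form is sketched below; this is illustrative, not what the deployer literally writes:

```nix
{
  nix.distributedBuilds = true;
  nix.buildMachines = [{
    hostName = "builder.example.com";   # matches ssh://nix@builder.example.com
    sshUser = "nix";
    system = "x86_64-linux";
    maxJobs = 4;                        # the "4" in the builder string
    speedFactor = 1;                    # the trailing "1"
  }];
}
```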
The script automatically:
- ✅ Updates NixOS system config (COSMIC, Podman, Flakes)
- ✅ Runs sudo nixos-rebuild switch
- ✅ Creates home-manager configuration (~100 packages)
- ✅ Runs home-manager switch
- ✅ Installs Flatpak apps (Flathub remote + selected profile)
- ✅ Builds flake development environment
- ✅ Installs Claude Code CLI + wrapper
- ✅ Configures VSCodium for AI development
- ✅ Sets up Powerlevel10k theme
- ✅ Verifies all packages are accessible
No manual intervention needed - everything is fully automated.
sudo reboot

At login, select "Cosmic" from the session menu.
When you open a terminal, Powerlevel10k wizard runs automatically:
- Choose prompt style
- Choose color scheme (High Contrast Dark recommended)
- Choose what to show (time, icons, path)
- Restart shell - beautiful prompt ready!
Run the automated health check:
cd ~/NixOS-Dev-Quick-Deploy
./system-health-check.sh

The quick deploy runs this automatically after Phase 8, but you can rerun it anytime to confirm your setup.
This will verify:
- ✅ All core tools (k3s, kubectl, python3, node, etc.)
- ✅ Nix ecosystem (home-manager, flakes)
- ✅ AI tools (claude-wrapper, gpt-codex-wrapper, codex-wrapper, openai-wrapper, gooseai-wrapper, ollama, aider)
- ✅ Python AI/ML packages (60+ packages):
- Deep Learning: PyTorch, TensorFlow
- LLM Frameworks: LangChain, LlamaIndex, OpenAI, Anthropic
- Vector DBs: ChromaDB, Qdrant client, FAISS, Sentence Transformers
- Data Science: Pandas, Polars, Dask, Jupyter Lab
- Code Quality: Black, Ruff, Mypy
- ✅ Editors (VSCodium, Cursor, Neovim)
- ✅ Shell configuration (aliases, functions)
- ✅ Flatpak applications (including DBeaver)
- ✅ AI Systemd Services (Qdrant, Hugging Face TGI, Jupyter Lab, Gitea)
- ✅ Environment variables & PATH
The AI stack runs on K3s + containerd. To verify the runtime and core tooling manually:
# Check core tools
kubectl version --client
python3 --version
node --version
go version
cargo --version
# Check Nix tools
home-manager --version
nix flake --help
# Check AI tools
which claude-wrapper
which gpt-codex-wrapper
which codex-wrapper
which openai-wrapper
which gooseai-wrapper
ollama --version
aider --version
# Check editors
codium --version
code-cursor --help
nvim --version
# Enter development environment
aidb-dev
# Check Flatpak apps
flatpak list --user

If any checks fail:
# Attempt automatic fixes
./system-health-check.sh --fix
# Reload shell environment
source ~/.zshrc
# Or restart shell
exec zsh
# Re-apply home-manager config
cd ~/.dotfiles/home-manager
home-manager switch --flake .#$(whoami)

All done! Everything is installed and ready to use.
Comprehensive health check for all packages and services:
./system-health-check.sh # Run full system health check
./system-health-check.sh --detailed # Show detailed output with versions
./system-health-check.sh --fix       # Attempt automatic fixes

Checks include:
- Core system tools (k3s, kubectl, git, curl, etc.)
- Programming languages (Python, Node.js, Go, Rust)
- Nix ecosystem (home-manager, flakes)
- AI tools (Claude Code, llama.cpp, Aider)
- 60+ Python AI/ML packages (PyTorch, TensorFlow, LangChain, etc.)
- Editors and IDEs (VSCodium, Neovim, Cursor)
- Shell configuration (ZSH, aliases, functions)
- Flatpak applications (Firefox, Obsidian, DBeaver, etc.)
- AI systemd services (Qdrant, Hugging Face TGI, Jupyter Lab)
- Environment variables and PATH configuration
nrs   # Alias for: sudo nixos-rebuild switch
hms   # Alias for: home-manager switch
nfu   # Alias for: nix flake update

aidb-dev      # Enter flake dev environment with all tools
aidb-shell    # Alternative way to enter dev environment
aidb-info     # Show environment information
aidb-update   # Update flake dependencies

Note: If aidb-dev is not found, reload your shell:

source ~/.zshrc   # Or: exec zsh

kubectl get pods -n ai-stack
kubectl rollout restart -n ai-stack deployment/aidb
kubectl logs -n ai-stack deploy/aidb

Quick Deploy defaults to CPU/iGPU-friendly models unless VRAM/RAM allow larger GGUFs. The CPU/iGPU baseline uses qwen3-4b for the coder model and sentence-transformers/all-MiniLM-L6-v2 for embeddings.
Swap models at any time:
./scripts/swap-llama-cpp-model.sh Qwen3-4B-Instruct-2507-Q4_K_M.gguf
./scripts/swap-embeddings-model.sh BAAI/bge-small-en-v1.5

All other AI orchestration (vLLM endpoints, shared volumes, etc.) is managed by the ai-optimizer repository layered on top of this stack.
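To confirm the active model is actually serving, you can query the local OpenAI-compatible endpoint directly. A minimal sketch, assuming the default llama.cpp port (8080) listed later in this README; the model name is whatever your current GGUF registers as:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen3-4b",
        "messages": [{"role": "user", "content": "Reply with one short sentence."}],
        "max_tokens": 32
      }'
```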
CLI Tools:
# Query LLMs from command line
gpt-cli "explain this code"
# Run aider (AI coding assistant)
aider
# Start Jupyter Lab manually
jupyter-lab
# Install Obsidian AI plugins
obsidian-ai-bootstrap

kubectl get pods -n ai-stack
kubectl get svc -n ai-stack

VSCodium Version Compatibility:
- VSCodium Insiders (rolling build; installed by this script)
- Claude Code extension works with VSCodium and VS Code 1.85.0+
- Smart wrappers ensure Node.js is found correctly for each CLI
Usage:
# Launch VSCodium with Claude integration
codium
# Launch with environment debugging
CODIUM_DEBUG=1 codium
# Test wrappers directly
~/.npm-global/bin/claude-wrapper --version
~/.npm-global/bin/gpt-codex-wrapper --version
~/.npm-global/bin/codex-wrapper --version
~/.npm-global/bin/openai-wrapper --version
~/.npm-global/bin/gooseai-wrapper --version
goose --version
# Launch Goose Desktop (search "Goose Desktop" in your application menu)
# Provided by the declarative xdg.desktopEntries configuration
# Debug Claude wrapper
CLAUDE_DEBUG=1 ~/.npm-global/bin/claude-wrapper --version
# Debug GPT CodeX wrapper
GPT_CODEX_DEBUG=1 ~/.npm-global/bin/gpt-codex-wrapper --version
# Debug Codex wrapper
CODEX_DEBUG=1 ~/.npm-global/bin/codex-wrapper --version
# Debug OpenAI wrapper
OPENAI_DEBUG=1 ~/.npm-global/bin/openai-wrapper --version
# Debug GooseAI wrapper
GOOSEAI_DEBUG=1 ~/.npm-global/bin/gooseai-wrapper --version

# Launch Cursor (prefers Flatpak, falls back to native)
code-cursor
# Launch specific project
code-cursor /path/to/project

Note: Cursor must be installed via Flatpak:

flatpak install flathub ai.cursor.Cursor

| File | Description | Managed By |
|---|---|---|
| /etc/nixos/configuration.nix | System configuration | Script auto-updates |
| ~/.dotfiles/home-manager/home.nix | User packages & config | Script auto-creates |
| ~/.dotfiles/home-manager/flake.nix | Home-manager flake | Script auto-creates |
| ~/.config/VSCodium/User/settings.json | VSCodium settings | Home-manager (declarative) |
| ~/.config/p10k/theme.sh | Powerlevel10k theme | Wizard auto-generates |
| ~/.zshrc | ZSH configuration | Home-manager (declarative) |
| ~/.npmrc | NPM configuration | Script auto-creates |
| ~/.npm-global/ | Global NPM packages | AI CLI wrappers installed here (Claude, GPT CodeX, Codex IDE, OpenAI, GooseAI) |
NixOS-Dev-Quick-Deploy/
├── nixos-quick-deploy.sh          # Main deployment script (run this)
├── scripts/
│   ├── system-health-check.sh     # System health verification and repair tool
│   └── p10k-setup-wizard.sh       # Powerlevel10k configuration wizard
├── templates/
│   ├── configuration.nix          # NixOS system config template
│   ├── home.nix                   # Home-manager config template
│   └── flake.nix                  # Development flake template
├── docs/
│   ├── AIDB_SETUP.md              # AIDB setup and configuration guide
│   ├── AGENTS.md                  # AI agent workflow documentation
│   ├── CODE_REVIEW.md             # Code review and quality documentation
│   └── SAFE_IMPROVEMENTS.md       # Safe improvement guidelines
├── README.md                      # This file
└── LICENSE                        # MIT License
Click to expand detailed execution flow
✅ Running as user (not root)
✅ NixOS detected
✅ Internet connection available
✅ sudo access confirmed
✅ GitHub username: yourusername
✅ GitHub email: you@example.com
✅ Editor preference: 3 (vscodium)
✅ Backing up /etc/nixos/configuration.nix
✅ Adding COSMIC desktop configuration
✅ Adding K3s + containerd runtime support
✅ Enabling Nix flakes
✅ Allowing unfree packages
Running: sudo nixos-rebuild switch
✅ NixOS system configuration applied!
✅ Creating ~/.dotfiles/home-manager/
✅ Writing home.nix with ~100 packages
✅ Writing flake.nix
✅ Creating flake.lock
Running: home-manager switch
✅ Home-manager configuration applied!
✅ Updating current shell environment
✅ Verifying package installation
✅ python3 found at ~/.nix-profile/bin/python3
✅ All critical packages in PATH!
✅ Flathub remote added
✅ Installing Firefox
✅ Installing Obsidian
✅ Installing Cursor
✅ Installing LM Studio
✅ Installing 7 more apps...
✅ All Flatpak applications installed!
Flatpak provisioning is now state-aware. The deployer inspects ~/.local/share/flatpak before touching anything, keeps existing repositories intact, and only installs packages that are missing from your selected profile or the projectβs core Flatpak set. To force a clean slate, pass --flatpak-reinstall; otherwise the run simply layers the new defaults onto your current desktop. Need to re-seed the Hugging Face caches used by the DeepSeek and Scout TGI services? Add --force-hf-download so Phase 5 wipes both caches and pulls fresh weights before the switch.
Need Qalculate? It now ships from pkgs.qalculate-qt in the declarative package set, so the Flatpak profile stays lean even when Flathub temporarily removes the app. Goose CLI/Desktop are likewise provided by pkgs.goose-cli, eliminating the brittle .deb download in Phase 6.
✅ Nix flakes enabled
Building flake development environment...
✅ Flake built and cached
✅ Created aidb-dev-env activation script
✅ Added AIDB flake aliases to .zshrc
✅ Installing @anthropic-ai/claude-code via npm
✅ Installing @openai/codex (GPT CodeX CLI) via npm
✅ Installing openai via npm
✅ Goose CLI detected via nixpkgs (goose-cli)
✅ Claude Code npm package installed
✅ GPT CodeX wrapper created from @openai/codex
✅ OpenAI CLI npm package installed
ℹ Registering Goose Desktop launcher
✅ Goose Desktop launcher available via application menu
✅ Created smart Node.js wrapper at ~/.npm-global/bin/claude-wrapper
✅ Created smart Node.js wrapper at ~/.npm-global/bin/gpt-codex-wrapper
✅ Created smart Node.js wrapper at ~/.npm-global/bin/codex-wrapper
✅ Created smart Node.js wrapper at ~/.npm-global/bin/openai-wrapper
✅ Created smart Node.js wrapper at ~/.npm-global/bin/gooseai-wrapper
✅ Testing wrappers succeeded
✅ VSCodium wrapper configuration updated
✅ Claude Code configured in VSCodium
Installing Claude Code extension...
✅ Anthropic.claude-code installed
Installing additional extensions...
⚠ GPT CodeX and GooseAI extensions are not published on Open VSX (install from the VS Marketplace if you need them)
✅ Python, Pylance, Black Formatter
✅ Jupyter, Continue, Codeium
✅ GitLens, Git Graph, Error Lens
✅ Go, Rust Analyzer, YAML, TOML
✅ Docker, Terraform
✅ Remaining extensions installed!
============================================
✅ NixOS Quick Deploy COMPLETE!
============================================
Next steps:
1. Reboot: sudo reboot
2. Select "Cosmic" at login
3. Open terminal (P10k wizard auto-runs)
4. Verify: codium, claude-wrapper --version, gpt-codex-wrapper --version, codex-wrapper --version, openai-wrapper --version, gooseai-wrapper --version, goose --version
Symptom: VSCodium shows "Error 127" when trying to use Claude Code
Cause: Claude wrapper can't find Node.js executable
Solutions:
- Run the diagnostic wrapper:

  CLAUDE_DEBUG=1 ~/.npm-global/bin/claude-wrapper --version

  This shows exactly where it's searching for Node.js.

- Verify Node.js is installed:

  which node
  node --version

  If this fails, Node.js wasn't installed properly.

- Restart your shell to refresh PATH:

  exec zsh

- Re-apply home-manager config:

  home-manager switch --flake ~/.dotfiles/home-manager#$(whoami)

- Reinstall Claude Code:

  export NPM_CONFIG_PREFIX=~/.npm-global
  npm install -g @anthropic-ai/claude-code

- Check VSCodium settings:

  cat ~/.config/VSCodium/User/settings.json | grep -A5 "claude-code"

  Should show executablePath pointing to ~/.npm-global/bin/claude-wrapper.

- Verify no Flatpak overrides exist:

  flatpak info com.visualstudio.code 2>/dev/null || echo "No conflicting Flatpak"

  Remove any com.visualstudio.code* or com.vscodium.codium* Flatpak apps; they replace the declarative programs.vscode package and undo the managed settings/extensions.
Symptom: Claude Code stops with "Prompt is too long"
Cause: Context sharing can send too much workspace data or MCP context for the model window.
Fix (default templates): The templates now disable context sharing by default. Re-run deploy or update your settings:

rg '"claudeCode.enableContextSharing"' ~/.config/VSCodium/User/settings.json

Set it to false, then restart VSCodium.
Symptom: VSCodium notifications show:
Unable to initialize Git; AggregateError(2)
Error: Unable to find git
Error: Unable to find git
Cause: GUI launches of VSCodium sometimes miss ~/.nix-profile/bin on PATH, so the editor cannot locate the Git binary that Home Manager installs. Without explicit configuration, the built-in Git extension immediately fails.
Fix (v4.2.0+): The declarative settings now set "git.path" to the exact derivation path of config.programs.git.package. Re-run the deploy script or home-manager switch so ~/.config/VSCodium/User/settings.json picks up that value.
Manual verification:
- Confirm Git works in a terminal:
git --version
- Inspect the VSCodium settings file and ensure the stored path matches your git derivation:

  rg '"git.path"' ~/.config/VSCodium/User/settings.json
- Relaunch VSCodium. The Git panel should populate without the AggregateError message.
Symptom: Phase 6 logs show repeated errors when adding Flathub, for example:
ℹ Adding Flathub Flatpak remote...
❌ error: Unable to create repository at /home/$USER/.local/share/flatpak/repo (Creating repo: mkdirat: No such file or directory)
Cause: When Phase 6 runs on its own (or after a cleanup flag), the user-level Flatpak directories may be missing or the repo path may be an empty placeholder without the expected objects/ tree. In both cases flatpak remote-add fails before it can download anything.
Fix (v4.2.1+): The Flatpak automation now recreates ~/.local/share/flatpak and ~/.config/flatpak, verifies ownership, and removes empty repo stubs so Flatpak can bootstrap a fresh OSTree repository. Re-run the deploy script or restart Phase 6 and the remote should be added cleanly.
Manual verification:
- Confirm the directories exist:
ls -ld ~/.local/share/flatpak ~/.local/share/flatpak/repo ~/.config/flatpak
- If you previously created ~/.local/share/flatpak/repo manually and it's missing the objects/ directory, remove the empty stub and rerun Phase 6:

  rm -rf ~/.local/share/flatpak/repo

- Retry the add manually if needed:
flatpak remote-add --user --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
Symptom: VSCodium shows error when trying to save settings:
Failed to save 'settings.json': Unable to write file (EROFS: read-only file system)
Cause: Home Manager renders settings.json inside /nix/store so it can symlink the file into ~/.config/VSCodium. Editors therefore see a read-only filesystem and refuse to save.
Fix (v4.0.0+): The deployment now snapshots ~/.config/VSCodium/User/settings.json before each Home Manager switch, reapplies the snapshot afterwards, and ensures the final file is a writable copy stored in:
~/.config/VSCodium/User/settings.json
~/.local/share/nixos-quick-deploy/state/vscodium/settings.json # backup copy
VSCodium can now edit settings.json normally and your changes persist across rebuilds because the activation hook restores the snapshot after Home Manager finishes linking.
Need reproducible settings?
You can still keep long-term configuration in templates/home.nix under programs.vscode.profiles.default.userSettings. The declarative defaults are written into the store, and the activation hook will seed your mutable copy with those defaults whenever the backup is missing. A suggested workflow:
- Make quick edits directly inside VSCodium (they land in the mutable file).
- Periodically copy the keys you care about back into templates/home.nix so future machines inherit them.
- Rerun the deploy script so the declarative defaults and your mutable copy stay in sync (see the sketch below for where those defaults live).
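For orientation, the declarative defaults live alongside the rest of the Home Manager options. A minimal sketch of the shape under the option path named above; the keys shown are illustrative, not the template's actual contents:

```nix
{ pkgs, ... }: {
  programs.vscode.profiles.default.userSettings = {
    # Illustrative keys only -- copy the real values from your mutable settings.json.
    "editor.formatOnSave" = true;
    "claudeCode.enableContextSharing" = false;
    "git.path" = "${pkgs.git}/bin/git";
  };
}
```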
Issue: kubectl: command not found, home-manager: command not found, or AI CLI wrappers (claude/gpt-codex/openai/gooseai) not found after installation
Solution:
# Run health check with automatic fixes
./system-health-check.sh --fix
# Restart shell to load new PATH
exec zsh
# Or manually source session vars
source ~/.nix-profile/etc/profile.d/hm-session-vars.sh
# Verify
which kubectl
which home-manager
which claude-wrapper
which gpt-codex-wrapper
which codex-wrapper
which openai-wrapper
which gooseai-wrapper

Symptom: System drops into emergency mode with messages such as:
[FAILED] Failed to start Virtual Console Setup.
[FAILED] Failed to start Switch Root.
Cannot open access to console, the root account is locked.
See sulogin(8) man page for more details.
Press Enter to continue.
The system gets stuck in an endless "Press Enter to continue" loop.
Cause: The NixOS configuration did not define a root user password. When the system encounters boot issues and drops to emergency mode, it requires root access via sulogin. Without a root password configured, sulogin cannot provide a shell.
Additionally, the console font (Lat2-Terminus16) requires the terminus_font package, and without console.earlySetup = true, the Virtual Console Setup service fails during initrd.
Fix (for existing installations):
1. Boot from a NixOS live USB/ISO
2. Mount your root filesystem:

   sudo mount /dev/nvme0n1p2 /mnt       # Adjust device as needed
   sudo mount /dev/nvme0n1p1 /mnt/boot  # EFI partition

3. Enter a chroot environment:

   sudo nixos-enter --root /mnt

4. Set a root password:

   passwd root

5. Regenerate and rebuild:

   cd /path/to/NixOS-Dev-Quick-Deploy
   ./nixos-quick-deploy.sh --resume
   sudo nixos-rebuild switch --flake ~/.dotfiles/home-manager#$(hostname)
Prevention: The deployment script now automatically syncs the root password with the primary user and configures the console with earlySetup = true and the required terminus_font package.
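If you maintain your own configuration.nix, the prevention boils down to a few options. A minimal sketch, assuming you set the root credentials yourself (the deployer handles this automatically; the hash below is a placeholder):

```nix
{ pkgs, ... }: {
  # Make sure sulogin can open an emergency shell.
  users.users.root.hashedPassword = "<hash from `mkpasswd -m sha-512`>";

  # Keep Virtual Console Setup from failing during initrd.
  console.earlySetup = true;
  console.font = "Lat2-Terminus16";
  console.packages = [ pkgs.terminus_font ];
}
```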
Symptom: System drops into emergency mode during boot with messages such as:
[FAILED] Failed to mount /var/lib/containers/storage/overlay/df4bd49...
[DEPEND] Dependency failed for Local File Systems
Cause: Older Podman-based deployments could leave stale container storage configuration behind. The K3s path uses containerd, so legacy /var/lib/containers mounts should not be referenced by current runs. If you still see container storage mount failures, it usually means a legacy unit is still enabled.
Fix: Re-run Quick Deploy with --reset-state, then rebuild:
cd ~/NixOS-Dev-Quick-Deploy
./nixos-quick-deploy.sh --reset-state
sudo nixos-rebuild switch --flake ~/.dotfiles/home-manager

If the error persists, check for lingering units:

systemctl list-units | rg -i "podman|containers"

Symptom: Pods stuck in ImagePullBackOff or CrashLoopBackOff.
Fix:
kubectl get pods -n ai-stack
kubectl describe pod -n ai-stack <pod>
kubectl logs -n ai-stack deploy/<service>

Issue: Only GNOME/KDE shows at login screen
Solution:
# Verify COSMIC is in system config
grep -i cosmic /etc/nixos/configuration.nix
# Should show: services.desktopManager.cosmic.enable = true;
# If missing, rebuild
sudo nixos-rebuild switch
sudo reboot

Issue: home-manager switch fails with "existing file conflicts"
Solution:
# Script automatically backs up conflicts to:
ls -la ~/.config-backups/
# If manual intervention needed, move conflicting files
mv ~/.config/problematic-file ~/.config-backups/
home-manager switch --flake ~/.dotfiles/home-manager#$(whoami)

Issue: Flatpak app installed but won't start
Solution:
# Check if app is actually installed
flatpak list --user | grep -i appname
# Try running from command line to see errors
flatpak run org.mozilla.firefox
# Reinstall if needed
flatpak uninstall org.mozilla.firefox
flatpak install flathub org.mozilla.firefox

Phase 6 now installs the upstream OpenSkills automation toolkit directly from https://github.com/numman-ali/openskills.git (via npm install -g openskills). After the CLI is installed or updated, the deployer runs ~/.config/openskills/install.sh automatically so you can layer project-specific helpers on top.
Note: The deployer still creates ~/.config/openskills/install.sh as an executable placeholder. Edit that file with the commands you want run during Phase 6 and keep your workflows reproducible.
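What goes into that hook is entirely up to you. A minimal sketch of a ~/.config/openskills/install.sh, with purely hypothetical commands standing in for your own project helpers:

```bash
#!/usr/bin/env bash
# Hypothetical Phase 6 hook -- replace with the helpers your projects actually need.
set -euo pipefail

# Example: pin an extra global npm CLI alongside openskills.
export NPM_CONFIG_PREFIX="$HOME/.npm-global"
npm install -g prettier

# Example: pull a skills repo you maintain (URL is a placeholder).
# git clone https://github.com/yourname/your-skills.git ~/projects/your-skills
```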
Issue: Pre-switch cleanup wipes Flatpak apps (the directories under .local/share/flatpak or .var/app get deleted before home-manager switch).
Fix: The deployer now preserves user Flatpak directories whenever it detects installed apps, cached profile state, or existing Flatpak data. Cleanup only happens if you explicitly opt in.
Solution:
# Default behavior: nothing to do, rerun the deployer and Flatpaks stay intact
./nixos-quick-deploy.sh
# Need a full reset? Opt in before running the deployer:
# (1) CLI flag:
./nixos-quick-deploy.sh --flatpak-reinstall
# Force a clean Hugging Face cache for both TGI services
./nixos-quick-deploy.sh --force-hf-download
# (2) Or environment variable:
RESET_FLATPAK_STATE_BEFORE_SWITCH=true ./nixos-quick-deploy.sh

Issue: flatpak list --user shows multiple versions of Freedesktop Platform (24.08, 25.08, etc.)
This is NORMAL! Different Flatpak applications depend on different runtime versions. For example:
- Firefox might need Platform 24.08
- Cursor might need Platform 25.08
- Some apps need both stable and extra codecs
Cleanup unused runtimes:
# List unused runtimes (you will be prompted before anything is removed)
flatpak uninstall --unused

# Remove unused runtimes without prompting
flatpak uninstall --unused -y

Note: Only runtimes not needed by any installed app will be removed.
Issue: Text blends with background colors
Solution:
# Re-run wizard
rm ~/.config/p10k/.configured
exec zsh
# Choose option 1: High Contrast Dark
# Or option 7: Custom High Contrast

Issue: MangoHud injects into every GUI/terminal session, covering windows you do not want to benchmark, including COSMIC desktop applets and system windows.
Fix: The shipped MangoHud presets now include the no_display=1 option for the desktop profile and blacklist COSMIC system applications (Files, Terminal, Settings, Store, launcher, panel, etc.) for all other profiles. This prevents overlays from appearing on desktop utilities.
Solution: Fresh deployments now default to profile 4) desktop, which keeps MangoHud inside the mangoapp desktop window so no other applications are wrapped. You only need the selector if you want to change this default.
For existing systems with the overlay bug: Run the fix script to update your MangoHud configuration:
./scripts/fix-mangohud-config.sh
# Then re-run the deployment to regenerate your configs with the corrected settings
./nixos-quick-deploy.sh

If you want to change the MangoHud profile:
# Run the interactive selector and pick the overlay mode you prefer
./scripts/mangohud-profile.sh
# Re-apply your Home Manager config so the new preference is enforced
cd ~/.dotfiles/home-manager
home-manager switch -b backup --flake .

- 1) disabled removes MangoHud from every app.
- 2) light and 3) full keep the classic per-application overlays.
- 4) desktop launches a transparent, movable mangoapp window so stats stay on the desktop instead of stacking on top of apps. (Default)
- 5) desktop-hybrid auto-starts the mangoapp desktop window while still injecting MangoHud into supported games/apps.
The selected profile is cached at ~/.cache/nixos-quick-deploy/preferences/mangohud-profile.env, so future deploy runs reuse your preference automatically.
Need a lean workstation build instead? Set ENABLE_GAMING_STACK=false before running the deploy script to skip Gamemode, Gamescope, Steam, and the auxiliary tuning (zram overrides, /etc/gamemode.ini, etc.). The default true value keeps the full GLF gaming stack enabled so MangoHud, Steam, and Gamescope are ready immediately after install.

- ENABLE_LACT=auto (default) enables LACT automatically when the hardware probe finds an AMD, Nvidia, or Intel GPU. Set the variable to true to force installation, or false to skip it entirely.
- When enabled, the generated configuration sets services.lact.enable = true;, which installs the GTK application and its system daemon. Launch LACT from the desktop to tune clocks, voltage offsets, or fan curves. The first time you save settings, the daemon writes /etc/lact/config.yaml; future tweaks can be made through the GUI or by editing that file.
- AMD users who plan to overclock/undervolt should also keep hardware.amdgpu.overdrive.enable = true; (already emitted when RDNA GPUs are detected) so LACT can apply the requested limits. See the upstream FAQ if you need additional kernel parameters for your card. The snippet below shows how the two options fit together.
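For reference, this is roughly what the emitted options look like; a minimal sketch, not the template's literal output:

```nix
{
  # LACT GUI + system daemon (emitted when ENABLE_LACT resolves to true).
  services.lact.enable = true;

  # Lets LACT raise power/clock limits on RDNA cards (AMD only).
  hardware.amdgpu.overdrive.enable = true;
}
```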
Edit your home-manager config:
nvim ~/.dotfiles/home-manager/home.nix

Find the services.flatpak.packages section and uncomment apps you want:
services.flatpak.packages = [
# Uncomment to enable:
# "org.libreoffice.LibreOffice"
# "org.gimp.GIMP"
# "org.inkscape.Inkscape"
# "org.blender.Blender"
"com.obsproject.Studio" # OBS Studio (screen recording/streaming)
# "com.discordapp.Discord"
# "com.slack.Slack"
];

Apply changes:

home-manager switch --flake ~/.dotfiles/home-manager#$(whoami)

# Remove configuration marker
rm ~/.config/p10k/.configured
# Restart shell to trigger wizard
exec zsh

Edit home-manager config:

nvim ~/.dotfiles/home-manager/home.nix

Add packages to home.packages:
home.packages = with pkgs; [
# ... existing packages ...
# Add your packages here:
terraform
kubectl
k9s
];

Apply:

hms   # Alias for home-manager switch

The deployment configures an optional auto-upgrade service that can automatically update your Home Manager configuration on a schedule.
To enable it, edit your home-manager config:
nvim ~/.dotfiles/home-manager/home.nix

Find the services.home-manager.autoUpgrade section and enable it:
services.home-manager.autoUpgrade = {
enable = true; # Change from false to true
frequency = "daily"; # Options: "daily", "weekly", "monthly"
useFlake = true;
flakeDir = "${config.home.homeDirectory}/.config/home-manager";
};

Apply changes:

hms   # Alias for home-manager switch

The service will:
- Run daily at 03:00 by default
- Pull updates from your ~/.config/home-manager flake
- Automatically rebuild your home environment
- Log output to systemd journal:
journalctl --user -u home-manager-autoUpgrade.service
To check status:
systemctl --user status home-manager-autoUpgrade.timer
systemctl --user status home-manager-autoUpgrade.service

Edit the main script:

nvim ~/NixOS-Dev-Quick-Deploy/nixos-quick-deploy.sh

Find the main() function and comment out steps:
main() {
check_prerequisites
gather_user_info
# update_nixos_system_config # Skip NixOS rebuild
create_home_manager_config
apply_home_manager_config
# setup_flake_environment # Skip flake setup
install_claude_code
configure_vscodium_for_claude
install_vscodium_extensions
print_post_install
}

Already configured in your .zshrc:
nrs # sudo nixos-rebuild switch
hms # home-manager switch --flake ~/.dotfiles/home-manager#$(whoami)
nfu # nix flake update
lg    # lazygit

# Check status
kubectl get pods -n ai-stack
# Access Open WebUI at http://localhost:3001
# Access llama.cpp at http://localhost:8080
# Access Qdrant at https://localhost:8443/qdrant (use -k for self-signed)
# Access AIDB MCP at https://localhost:8443/aidb (use -k for self-signed)

- VSCodium: General development, Claude Code integration, Continue AI
- Cursor: Heavy AI pair programming, GPT-4 integration, AI-first workflows
# Always set NPM prefix before installing global packages
export NPM_CONFIG_PREFIX=~/.npm-global
npm install -g <package-name>
# Or add to ~/.npmrc (already done by script):
# prefix=/home/username/.npm-global

# Update NixOS system
sudo nixos-rebuild switch --upgrade
# Update home-manager packages
nix flake update ~/.dotfiles/home-manager
home-manager switch --flake ~/.dotfiles/home-manager#$(whoami)
# Update Flatpak apps
flatpak update

- ./nixos-quick-deploy.sh --resume – continue the multi-phase installer from the last checkpoint.
- ./nixos-quick-deploy.sh --resume --phase 5 – rerun the declarative deployment phase after editing templates.
- nrs (sudo nixos-rebuild switch) – manual system rebuild; the deployer now pauses container services automatically when it runs this step.
- hms (home-manager switch --flake ~/.dotfiles/home-manager#$(whoami)) – apply user environment changes.
- nfu (nix flake update) – refresh flake inputs before rebuilding.
- kubectl get pods -n ai-stack – show container status.
- kubectl rollout restart -n ai-stack deployment/<service> – restart a service.
- kubectl logs -n ai-stack deploy/<service> – stream logs.
- python3 ai-stack/tests/test_hospital_e2e.py – run the K3s health suite.
- kubectl describe pod -n ai-stack <pod> – show events + mount issues.
- kubectl rollout restart -n ai-stack deployment/<service> – bounce a service after config/image changes.
ℹ️ Automation note: During Phase 5 the deployer pauses managed services (systemd units and user-level Podman quadlets), cleans stale Podman storage if needed, applies nixos-rebuild switch, and then restores everything it stopped. You no longer need to stop the AI stack manually before a rebuild.
Database Tools:
postgresql # PostgreSQL database
redis # Redis key-value store
mongodb # MongoDB database
dbeaver           # Universal database IDE

Cloud & Infrastructure:
terraform # Infrastructure as code
kubectl # Kubernetes CLI
awscli2 # AWS command line
google-cloud-sdk # Google Cloud CLI
azure-cli         # Azure command line

Performance & Debugging:
valgrind # Memory debugging
gdb # GNU debugger
lldb # LLVM debugger
strace # System call tracer
perf              # Linux profiling

Documentation & Diagramming:
graphviz # Graph visualization
plantuml # UML diagrams
mermaid-cli       # Mermaid diagrams from CLI

Security Tools:
nmap # Network scanner
zenmap # GUI frontend for nmap scans
wireshark # Network protocol analyzer
hashcat # Password cracker
john              # John the Ripper

Creative Tools:

- org.blender.Blender - 3D creation suite
- org.inkscape.Inkscape - Vector graphics
- org.gimp.GIMP - Image editor
- org.kde.kdenlive - Video editor
Communication:
- com.discordapp.Discord - Team chat
- com.slack.Slack - Team collaboration
- org.telegram.desktop - Messaging
Productivity:
- org.libreoffice.LibreOffice - Office suite
- md.obsidian.Obsidian - Note-taking (already included!)
- com.notion.Notion - Workspace
Already installed but worth highlighting:
- Claude Code - AI pair programming
- Continue - In-editor AI completions
- Codeium - Free AI autocomplete
- GitLens - Supercharged Git
- Error Lens - Inline error highlighting
- Todo Tree - TODO comment tracking
- Prettier - Code formatter
- Nix IDE - Nix language support
Gitea AI Workflow (already configured):
- Repository task runners
- Cursor integration for AI code generation
- Aider integration for AI code review
- GPT CLI for commit message generation
Obsidian AI Plugins (install with obsidian-ai-bootstrap):
- Smart Connections
- Text Generator
- Copilot
- AI Assistant
- AGENTS.md - Canonical agent onboarding and rules
- docs/AGENTS.md - Mirror for quick reference
- docs/agent-guides/00-SYSTEM-OVERVIEW.md - System map for agents
- docs/agent-guides/01-QUICK-START.md - Task-ready checklist
- ai-stack/agents/skills/AGENTS.md - Skill usage and sync rules
- AI Stack Integration Guide - Complete architecture and migration
- AI Stack README - AI stack overview and quick start
- AIDB MCP Server - AIDB server documentation
- Agent Skills - 29 specialized AI agent skills
- AI Stack Architecture - Technical architecture details
- Agent Workflows - Canonical AI agent onboarding and standards
- MCP Servers Guide - Model Context Protocol server docs
- Build Optimization Guide - Choose between binary caches (20-40 min) or source builds (60-120 min)
- AIDB Setup Guide - Complete AIDB configuration walkthrough
- AI Integration Guide - Sync docs and leverage AI-Optimizer tooling
- Local AI Starter Toolkit - Scaffold local agents/OpenSkills/MCP servers without private repos
- Agent Workflows - AI agent integration documentation
- Troubleshooting Guide - Common issues and solutions
- Code Review Guide - Code quality and review process
- Safe Improvements - Guidelines for safe changes
- System Health Check - Verify and fix installation
- Zero to Nix - Beginner-friendly Nix tutorial
- Nix Pills - Deep dive into Nix
- NixOS & Flakes Book - Modern Nix
Found a bug or have a suggestion? Please open an issue or submit a pull request!
When reporting problems, please include:
- NixOS version: nixos-version
- Script output or error messages
- Relevant logs from /tmp/nixos-quick-deploy.log
MIT License - See LICENSE file for details.
Built with these amazing technologies:
- NixOS - Reproducible Linux distribution
- Home Manager - Declarative dotfile management
- COSMIC - Modern desktop environment
- Anthropic Claude - AI assistant
- Cursor - AI-powered code editor
- K3s - Lightweight Kubernetes distribution
- Powerlevel10k - Beautiful ZSH theme
Ready to deploy?
curl -fsSL https://raw.githubusercontent.com/MasterofNull/NixOS-Dev-Quick-Deploy/main/nixos-quick-deploy.sh | bash

Happy coding!