Fix: #69 local model server failure by Gerome-Elassaad · Pull Request #71 · codinit-dev/codinit-dev

Conversation

@Gerome-Elassaad
Member

#69

What was fixed:

✅ Local Provider Detection

Problem: Running local provider servers (Ollama, LMStudio) weren't being detected, so their models never became available.
Solution:

  • Created a useLocalProviders hook that actively checks for running local provider instances (see the sketch after this list)
  • Automatically enables detected providers in the settings store
  • Provides real-time status with loading states
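
A minimal sketch of what such a hook can look like, assuming React with fetch-based health checks against the default local endpoints (Ollama's /api/tags on port 11434 and LMStudio's OpenAI-compatible /v1/models on port 1234); the return shape and probe details here are illustrative assumptions, not the exact implementation in this PR:

```typescript
import { useEffect, useState } from "react";

interface LocalProviderStatus {
  name: "Ollama" | "LMStudio";
  baseUrl: string;
  running: boolean;
}

// Default local endpoints; real values would come from the settings store.
const CANDIDATES = [
  { name: "Ollama" as const, baseUrl: "http://127.0.0.1:11434", healthPath: "/api/tags" },
  { name: "LMStudio" as const, baseUrl: "http://localhost:1234", healthPath: "/v1/models" },
];

export function useLocalProviders() {
  const [providers, setProviders] = useState<LocalProviderStatus[]>([]);
  const [isLoading, setIsLoading] = useState(true);

  useEffect(() => {
    let cancelled = false;

    async function probe() {
      const results = await Promise.all(
        CANDIDATES.map(async ({ name, baseUrl, healthPath }) => {
          try {
            // A short timeout keeps the UI responsive when nothing is listening.
            const res = await fetch(baseUrl + healthPath, { signal: AbortSignal.timeout(2000) });
            return { name, baseUrl, running: res.ok };
          } catch {
            return { name, baseUrl, running: false };
          }
        }),
      );
      if (!cancelled) {
        setProviders(results); // detected providers can then be enabled in the settings store
        setIsLoading(false);
      }
    }

    probe();
    return () => {
      cancelled = true;
    };
  }, []);

  return { providers, isLoading };
}
```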

✅ Improved Ollama Provider

Problem: A hardcoded 8,000-token limit was applied regardless of actual model capabilities.
Solution:

  • Dynamic context window calculation based on model parameter size (see the sketch below)
  • Proper context windows: 32k for 70B+ models, 16k for 30B+ models, 8k for 7B+ models
  • Better model labeling with parameter size and context info
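
A sketch of the tiering described above, assuming the parameter size arrives as an Ollama-style string such as "7B" or "70B"; the function name and the small-model default are illustrative:

```typescript
// Map an Ollama model's parameter size (e.g. "7B", "70B") to a context window.
// Tiers follow the PR description: 32k for 70B+, 16k for 30B+, 8k for 7B+.
function contextWindowFor(parameterSize: string): number {
  const billions = parseFloat(parameterSize); // "70B" -> 70
  if (Number.isNaN(billions)) return 8192;    // conservative default

  if (billions >= 70) return 32768;
  if (billions >= 30) return 16384;
  if (billions >= 7) return 8192;
  return 4096; // assumption: smaller models get a modest default
}
```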

✅ Fixed Provider Settings Management

Problem: Type errors in the provider settings update function.
Solution:

  • Fixed the updateProviderSettings function signature to properly handle settings updates (see the sketch below)
  • Added correct type imports for IProviderSetting
  • Ensured local providers can be properly enabled/disabled
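
A hedged sketch of what the corrected signature might look like; IProviderSetting is named in the PR, but its fields and the merge behavior shown here are assumptions:

```typescript
// Assumed shape; the real IProviderSetting lives in the project's type definitions.
interface IProviderSetting {
  enabled: boolean;
  baseUrl?: string;
}

type ProviderSettings = Record<string, IProviderSetting>;

// Accept a partial update and merge it over the existing entry, so callers
// can toggle `enabled` without having to supply every other field.
function updateProviderSettings(
  settings: ProviderSettings,
  provider: string,
  update: Partial<IProviderSetting>,
): ProviderSettings {
  const current = settings[provider] ?? { enabled: false };
  return { ...settings, [provider]: { ...current, ...update } };
}
```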

✅ Model Selector Integration

Problem: Detected local providers weren't showing up in the UI selectors.
Solution:

  • Local provider detection now integrates with the existing provider/model selection system (see the sketch below)
  • When Ollama or LMStudio is detected running, it automatically appears as an available option
  • Proper fallback handling when local providers go offline
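
One plausible shape for that integration, assuming the useLocalProviders hook above and a static list of cloud providers; all names here are illustrative:

```typescript
interface ProviderOption {
  id: string;
  label: string;
}

// Merge detected local providers into the selector's options. When nothing
// local is running, the list falls back to cloud providers alone, so the
// selector degrades gracefully if Ollama or LMStudio goes offline.
function buildProviderOptions(
  cloud: ProviderOption[],
  local: { name: string; running: boolean }[],
): ProviderOption[] {
  const available = local
    .filter((p) => p.running)
    .map((p) => ({ id: p.name.toLowerCase(), label: `${p.name} (local)` }));
  return [...available, ...cloud];
}
```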

- Fix Claude model IDs to use dots instead of dashes (4.5 not 4-5)
- Remove non-existent GPT models (5.2-thinking, 5.2-instant)
- Add GPT-4o as reliable fallback model
- Fix DeepSeek model ID to use correct free tier variant
- Update Claude Sonnet 4.5 context limit to 1M tokens (the actual value)
- All models now use verified OpenRouter API model IDs (see the illustrative sketch below)
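
For illustration only, the corrected model table might look like the sketch below; every ID and context value here is an assumption to be verified against the live OpenRouter model list, except the 1M context for Claude Sonnet 4.5 stated above:

```typescript
// Illustrative entries only; verify each ID against OpenRouter before shipping.
const OPENROUTER_MODELS = [
  { id: "anthropic/claude-sonnet-4.5", label: "Claude Sonnet 4.5", contextWindow: 1_000_000 },
  { id: "openai/gpt-4o", label: "GPT-4o (fallback)", contextWindow: 128_000 },
  { id: "deepseek/deepseek-chat:free", label: "DeepSeek (free tier)", contextWindow: 64_000 },
];
```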
- Remove all tool passing to the AI SDK, since built-in tools loaded from JSON lack proper Zod validation
- Tools are now processed server-side only, avoiding zod-to-json-schema conversion errors (see the sketch below)
- This prevents 'Cannot read properties of undefined (reading "typeName")' errors
- Functionality is maintained through server-side tool processing (MCP + built-in tools)
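
A sketch of the shape of this change, assuming the Vercel AI SDK (`ai` package) with an OpenAI model; the handler name and model choice are illustrative:

```typescript
import { streamText, type CoreMessage } from "ai";
import { openai } from "@ai-sdk/openai";

// Before: tool definitions were forwarded to the SDK, which ran them through
// zod-to-json-schema and crashed on JSON-defined tools lacking Zod schemas
// ("Cannot read properties of undefined (reading 'typeName')").
// After: no `tools` option is passed; tools are resolved server-side instead.
function handleChat(messages: CoreMessage[]) {
  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    // Deliberately no `tools` here -- built-in and MCP tools are matched and
    // executed by the project's own server-side pipeline.
  });
  return result.toDataStreamResponse();
}
```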

- Add useLocalProviders hook that detects running Ollama and LMStudio instances
- Automatically enable detected local providers in settings
- Improve Ollama provider with proper context window calculation based on model size
- Fix updateProviderSettings function signature for correct type handling
- Local models now appear in provider/model selectors when available

Closes issues with local model detection and context window limits.
@Gerome-Elassaad Gerome-Elassaad self-assigned this Jan 21, 2026
@Gerome-Elassaad Gerome-Elassaad added the bug Something isn't working label Jan 21, 2026
@Gerome-Elassaad Gerome-Elassaad merged commit daab8c9 into main Jan 21, 2026
3 checks passed


Development

Successfully merging this pull request may close these issues.

ERROR api.chat DataStream onError triggered: TypeError: Cannot read properties of undefined (reading 'typeName')
