Autonomous AI agent layer - Installs into any project and manages it intelligently using GPT-4o-mini (free tier) and OpenCore.
```bash
curl -fsSL https://raw.githubusercontent.com/YOUR_USERNAME/IA_core/main/install.sh | bash
```

That's it! IA_Core will:
- Deeply analyze your project - Detect frameworks, languages, dependencies
- Request necessary credentials - OpenAI API key (recommended), MCP configuration
- Auto-configure - Generate project-specific workflows and settings
- Deploy MCP servers - Memory, context, and tools for advanced capabilities
- Start monitoring - Agent runs in the background watching your project
- Add CLI tools - `iacore` command available globally
- Python 3.9+ (required)
- OpenAI API Key (recommended for full functionality)
  - Get one at: https://platform.openai.com/api-keys
  - Free tier with GPT-4o-mini works perfectly
  - Without an API key: limited functionality with fallbacks
During installation, you'll be prompted for:
- OpenAI API Key (optional but recommended)
  - Used for intelligent analysis
  - GPT-4o-mini free tier is sufficient
  - Configure later: `iacore config`
- MCP Configuration (for backend/API projects)
  - Enables memory, context, and tools servers
  - Auto-detected for Django, FastAPI, Flask, Express
  - Can skip if not needed
The installer analyzes your project and configures itself optimally!
- Fully Autonomous - Works independently after installation
- GPT-4o-mini Powered - Free-tier LLM for intelligent analysis
- Invisible Execution - Commands run via OpenCore (silent)
- Project-Aware - Understands 20+ project types automatically
- Real-time Monitoring - Watches file changes intelligently
- Smart Configuration - Auto-configures based on project analysis
- MCP Protocol - Memory, context, and tools servers for advanced features
- Rich CLI - Beautiful terminal UI with status and controls
- REST API - HTTP API on port 8788 for integrations
- Persistent Learning - Remembers and improves over time
- Secure - Credentials stored safely, dangerous commands blocked
Once installed, simply work on your project normally. IA_Core observes and assists transparently.
```bash
# View status and project analysis
iacore status

# See agent logs in real-time
iacore logs

# Pause/resume the agent
iacore pause
iacore resume

# Configure settings (API keys, workflows)
iacore config

# Re-analyze project after major changes
iacore analyze

# Check version
iacore version

# Uninstall completely
iacore uninstall
```

IA_Core performs deep project analysis during installation:
- Frameworks: React, Vue, Angular, Next.js, Django, FastAPI, Flask, Express
- Languages: Python, JavaScript, TypeScript, Go, Rust, Java
- Build Tools: npm, pip, cargo, go modules, gradle
- Dependencies: package.json, requirements.txt, go.mod, Cargo.toml
- Structure: Components, modules, configuration
- Recommended Agents: Based on detected technologies
Example analysis output:
```
Project Analysis:
  • Type: Python
  • Frameworks: fastapi, sqlalchemy
  • Languages: python
  • Recommended Agents: backend-developer, api-engineer
  • MCP: ✅ Configured (memory + context + tools)
```
```
┌────────────────────────────────────────────┐
│          Your Project (any type)           │
├────────────────────────────────────────────┤
│         IA_Core (invisible layer)          │
│  ┌─────────────┐       ┌─────────────────┐ │
│  │  Detector   │──────▶│  IntelligentAI  │ │
│  └─────────────┘       └─────────────────┘ │
│  ┌─────────────┐       ┌─────────────────┐ │
│  │ GPT-4o-mini │◀──────│ AutonomousAgent │ │
│  └─────────────┘       └─────────────────┘ │
│        │                        │          │
│        ▼                        ▼          │
│  ┌─────────────┐       ┌─────────────────┐ │
│  │   Memory    │       │    OpenCore     │ │
│  │   (MCP)     │       │    Executor     │ │
│  └─────────────┘       └─────────────────┘ │
└────────────────────────────────────────────┘
```
Components:
- ProjectDetector: Analyzes project type, frameworks, dependencies
- IntelligentAnalyzer: LLM-powered deep understanding of codebase
- AutonomousAgent: Main loop monitoring files and executing workflows (sketched after this list)
- LLMClient: GPT-4o-mini integration with rate limiting and caching
- OpenCoreExecutor: Silent command execution in sandboxed environment
- MCP Servers: Memory, context, and tools for advanced agent capabilities
- FastAPI Server: REST API for external integrations
- Typer CLI: Rich command-line interface for user control
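Conceptually, the AutonomousAgent is a watch-and-dispatch loop: snapshot the project, detect changed files, hand them to a workflow. The sketch below illustrates that loop with plain-stdlib polling; the function names (`watch_project`, `run_workflow`) are hypothetical and not IA_Core's real API.

```python
import time
from pathlib import Path

def snapshot(root: Path) -> dict[str, float]:
    """Record modification times for every file under the project root."""
    return {
        str(p): p.stat().st_mtime
        for p in root.rglob("*")
        if p.is_file() and ".git" not in p.parts
    }

def run_workflow(changed: list[str]) -> None:
    """Stand-in for the real workflow dispatch (detect_impact, analyze_context, ...)."""
    print(f"Changed files: {changed}")

def watch_project(root: str = ".", interval: float = 2.0) -> None:
    """Minimal polling loop: snapshot the tree, diff it, dispatch on any change."""
    previous = snapshot(Path(root))
    while True:
        time.sleep(interval)
        current = snapshot(Path(root))
        changed = [path for path, mtime in current.items() if previous.get(path) != mtime]
        if changed:
            run_workflow(changed)
        previous = current

if __name__ == "__main__":
    watch_project()
```

The real agent works at a higher level (LLM-guided workflows, OpenCore execution), but the observe-diff-dispatch shape is the same.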
IA_Core includes three MCP servers that enhance agent capabilities:
Persistent memory across sessions:
```
# Stores facts, decisions, learnings
memory/store_fact
memory/retrieve_fact
memory/store_decision
memory/store_learning
memory/get_learnings
```

Project understanding and code navigation:

```
# Search, read, analyze code
context/search_files
context/read_file
context/find_definition
context/get_structure
context/project_summary
```

Safe command execution:

```
# Execute commands, modify files
tools/execute_command
tools/write_file
tools/search_code
tools/analyze_dependencies
```

MCP servers are automatically configured during installation for projects that need them (detected backend/API frameworks).
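Under the hood these tools are exposed over MCP's JSON-RPC transport. The payload below is only a rough sketch of what a `memory/store_fact` invocation might look like; the argument field names (`key`, `value`) are assumptions, not IA_Core's documented schema.

```python
import json

# Hypothetical JSON-RPC 2.0 payload for calling the memory/store_fact tool.
# The "arguments" fields are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory/store_fact",
        "arguments": {
            "key": "primary_framework",
            "value": "fastapi",
        },
    },
}

print(json.dumps(request, indent=2))
```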
| Type | Auto-Detection | Workflows | MCP |
|---|---|---|---|
| React | ✅ | Component analysis, build optimization | Optional |
| Vue | ✅ | Component analysis, state management | Optional |
| Angular | ✅ | Module analysis, dependency check | Optional |
| Next.js | ✅ | SSR optimization, route analysis | Optional |
| Python | ✅ | Code quality, type checking, tests | ✅ |
| Django | ✅ | API analysis, migrations, admin | ✅ |
| FastAPI | ✅ | Endpoint analysis, validation | ✅ |
| Flask | ✅ | Route analysis, blueprints | ✅ |
| Node.js | ✅ | Dependency audits, package updates | Optional |
| Express | ✅ | API endpoints, middleware analysis | ✅ |
| Go | ✅ | Module analysis, formatting | Optional |
| Rust | ✅ | Cargo check, clippy suggestions | Optional |
Edit .iacore/config.yml in your project:
```yaml
version: 1

project:
  type: python
  root: /path/to/project
  frameworks: [fastapi, sqlalchemy]

agent:
  enabled: true
  auto_analyze: true
  auto_execute: false  # Set true for full autonomy
  watch_mode: true

llm:
  provider: openai
  model: gpt-4o-mini
  config_file: ~/.iacore/llm_config.yml

mcp:
  enabled: true
  config_path: .iacore/mcp/config.yml
  auto_start: true

api:
  host: 127.0.0.1
  port: 8788
  auto_start: true

workflows:
  on_file_change:
    - detect_impact
    - analyze_context
  on_git_commit:
    - analyze_commit
    - suggest_improvements
  on_package_change:
    - update_dependencies
    - verify_security
```

Environment variables:

```bash
# OpenAI API Key
export OPENAI_API_KEY='sk-...'

# Custom project root for MCP servers
export PROJECT_ROOT='/path/to/project'

# Custom IA_Core home directory
export IACORE_HOME='~/.iacore'
```

IA_Core exposes a REST API on http://127.0.0.1:8788:
```bash
# Check health
curl http://127.0.0.1:8788/health

# Get agent status
curl http://127.0.0.1:8788/status

# Analyze project
curl -X POST http://127.0.0.1:8788/analyze \
  -H "Content-Type: application/json" \
  -d '{"path": "."}'

# Pause/resume agent
curl -X POST http://127.0.0.1:8788/agent/pause
curl -X POST http://127.0.0.1:8788/agent/resume
```

See full API documentation: API_REFERENCE.md
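Prefer scripting over curl? The same endpoints can be called from Python. A minimal sketch using the `requests` package (the response fields are whatever the server returns; see API_REFERENCE.md for the actual schema):

```python
import requests

BASE_URL = "http://127.0.0.1:8788"

# Health check
print(requests.get(f"{BASE_URL}/health", timeout=5).json())

# Current agent status
print(requests.get(f"{BASE_URL}/status", timeout=5).json())

# Trigger a project analysis of the current directory
analysis = requests.post(
    f"{BASE_URL}/analyze",
    json={"path": "."},
    timeout=60,
)
print(analysis.json())

# Pause and resume the agent
requests.post(f"{BASE_URL}/agent/pause", timeout=5)
requests.post(f"{BASE_URL}/agent/resume", timeout=5)
```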
IA_Core is designed with security in mind:
- ✅ Sandboxed execution - Commands run in a controlled environment
- ✅ Dangerous command blocking - `rm -rf`, `sudo`, `shutdown` blocked (see the sketch after this list)
- ✅ Credential encryption - API keys stored securely with proper permissions
- ✅ Rate limiting - Prevents API abuse (10 req/min on the free tier)
- ✅ Audit logging - All actions logged for review
- ✅ No external data - Project data stays local
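Dangerous command blocking boils down to a pre-execution filter. The snippet below is a toy illustration of that idea, not IA_Core's actual rule set; `is_blocked` and the pattern list are made up for this example.

```python
import shlex

# Illustrative blocklist only; the real executor's rules may differ.
BLOCKED_PATTERNS = ("rm -rf", "sudo", "shutdown", "mkfs")

def is_blocked(command: str) -> bool:
    """Return True if the command matches a known-dangerous pattern."""
    normalized = " ".join(shlex.split(command))
    return any(pattern in normalized for pattern in BLOCKED_PATTERNS)

assert is_blocked("sudo rm -rf /")
assert not is_blocked("pytest -q")
```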
Contributions welcome! See CONTRIBUTING.md for guidelines.
MIT License - See LICENSE for details.
- Documentation: docs/
- Agent Spec: AGENT_SPEC.md
- Deployment Guide: DEPLOYMENT.md
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Need help?
- Check the documentation
- Review AGENT_SPEC.md for technical details
- Search existing GitHub Issues
- Create a new issue with details
Built with ❤️ for autonomous AI development