Next-generation AI mobile workstation for an intelligent lifestyle.
- 20+ AI providers: Qwen, GLM, Doubao, DeepSeek, ERNIE, Hunyuan, Yi, Kimi, Step, Spark, MiniMax, SiliconCloud, OpenAI, Claude, Gemini, and more
- Real-time streaming responses: streaming chat and tool calling for ChatGPT-like interactions (see the streaming sketch after this list)
- Local model support: integrate LLM.swift for on-device inference and deployment
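A minimal sketch of how streaming responses like these can be consumed with URLSession's async bytes API, assuming an OpenAI-style SSE endpoint; the URL, model name, and key are placeholders, not the app's actual configuration:

```swift
import Foundation

/// Streams an OpenAI-style chat completion, printing tokens as they arrive.
/// The endpoint, model, and key below are illustrative placeholders.
func streamChat(prompt: String, apiKey: String) async throws {
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": "example-model",
        "stream": true,
        "messages": [["role": "user", "content": prompt]]
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (bytes, _) = try await URLSession.shared.bytes(for: request)
    for try await line in bytes.lines {
        // SSE frames arrive as `data: {json}`, terminated by `data: [DONE]`.
        guard line.hasPrefix("data: ") else { continue }
        let payload = String(line.dropFirst(6))
        if payload == "[DONE]" { break }
        if let json = try? JSONSerialization.jsonObject(with: Data(payload.utf8)) as? [String: Any],
           let choices = json["choices"] as? [[String: Any]],
           let delta = choices.first?["delta"] as? [String: Any],
           let token = delta["content"] as? String {
            print(token, terminator: "")
        }
    }
}
```

Appending the `content` deltas in order reconstructs the reply incrementally, which is what makes the UI feel ChatGPT-like.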
- 🔍 Smart search engines: Zhipu, Bocha, EXA, Tavily, LangSearch, Brave, Perplexity
- 🌐 Web content extraction: SwiftSoup parsing for titles, body text, and icons, with batch URL support (see the extraction sketch after this list)
- 🗺️ Mapping and location services: CoreLocation and MapKit for live positioning, geocoding, and navigation
- 🌤️ Multi-source weather services: QWeather and OpenWeather APIs for live weather and forecasts
- 📅 System calendar integration: EventKit for events, reminders, and intelligent time filtering
- 💪 Health data analytics: HealthKit integration for steps, distance, calories, and nutrition
- 💻 Code execution service: Piston API for Python 3.10 with Jupyter-style outputs
- 🎨 Smart canvas system: multi-type canvases, version history, SwiftData persistence, and collaboration
- 🔊 Multi-mode voice synthesis: system TTS and external APIs with multi-language, multi-voice playback
- 🧠 Long-term memory system: intelligent storage, retrieval, and personalized context
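SwiftSoup appears in the dependency list further down, so the web-extraction feature can be illustrated directly; the helper below is a sketch, and the chosen fields are assumptions rather than the app's actual pipeline:

```swift
import SwiftSoup

/// Pulls a page's title, visible text, and icon link out of raw HTML.
func extractPage(html: String) throws -> (title: String, body: String, iconHref: String?) {
    let doc = try SwiftSoup.parse(html)
    let title = try doc.title()
    let body = try doc.body()?.text() ?? ""
    // Favicons are usually declared via <link rel="icon"> or <link rel="shortcut icon">.
    let iconHref = try doc.select("link[rel~=icon]").first()?.attr("href")
    return (title, body, iconHref)
}
```

Batch URL support would amount to fetching each URL and mapping it through a function like this concurrently.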
- Advanced RAG retrieval: vector similarity search with multiple embedding models
- Full-format document parsing: PDF, Word/PPT, Excel, Markdown, and plain text extraction
- Intelligent knowledge chunking: semantic segmentation for long documents
- Vector data management: 1024-dimensional embeddings with batch processing and retrieval optimization (see the similarity sketch after this list)
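A minimal sketch of the vector-similarity ranking such retrieval typically performs, assuming each chunk carries a precomputed 1024-dimensional embedding; `Chunk`, `cosine`, and `topK` are illustrative names, not the app's API:

```swift
/// A knowledge chunk paired with its precomputed embedding.
struct Chunk {
    let text: String
    let embedding: [Float]  // 1024-dimensional
}

/// Cosine similarity between two equal-length vectors.
func cosine(_ a: [Float], _ b: [Float]) -> Float {
    var dot: Float = 0, normA: Float = 0, normB: Float = 0
    for i in 0..<a.count {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (normA.squareRoot() * normB.squareRoot() + 1e-9)
}

/// Ranks chunks against the query embedding and keeps the best `k`.
func topK(query: [Float], chunks: [Chunk], k: Int = 5) -> [Chunk] {
    Array(chunks
        .sorted { cosine($0.embedding, query) > cosine($1.embedding, query) }
        .prefix(k))
}
```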
- Professional camera system: AVCaptureSession engine with adaptive multi-camera support and smart zoom (see the capture sketch after this list)
- Multimodal AI image understanding: 20+ vision models for OCR, scene understanding, and analysis
- Intelligent image processing: camera, photo library, multi-select, preprocessing, and optimization
- Streaming vision interaction: real-time visual Q&A with contextual memory
- Cross-device image sync: SwiftData metadata for multi-device history
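A minimal capture-session setup of the kind the camera feature implies; the single back wide-angle device below is a deliberate simplification of the adaptive multi-camera and zoom logic described above:

```swift
import AVFoundation

/// Builds a basic photo capture session on the back wide-angle camera.
func makeCaptureSession() throws -> AVCaptureSession {
    let session = AVCaptureSession()
    session.sessionPreset = .photo

    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back) else {
        throw NSError(domain: "Camera", code: 1,
                      userInfo: [NSLocalizedDescriptionKey: "No back camera available"])
    }
    let input = try AVCaptureDeviceInput(device: camera)
    let output = AVCapturePhotoOutput()

    session.beginConfiguration()
    if session.canAddInput(input) { session.addInput(input) }
    if session.canAddOutput(output) { session.addOutput(output) }
    session.commitConfiguration()
    return session
}
```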
- Multi-language text-to-speech with system voices and external providers (see the TTS sketch after this list)
- Real-time playback control and streaming synthesis
- Voice output integrated across chat and tool workflows
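For the system-voice half of these features, a minimal AVSpeechSynthesizer sketch; external providers would instead be called over their own HTTP APIs:

```swift
import AVFoundation

// Keep the synthesizer alive for the duration of playback.
let synthesizer = AVSpeechSynthesizer()

/// Speaks `text` with a system voice for the given BCP-47 language code.
func speak(_ text: String, language: String = "en-US") {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: language)
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}
```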
- Language: Swift 5.9+
- UI framework: SwiftUI
- Storage: SwiftData + CloudKit
- Networking: URLSession + Async/Await
- Dependency management: Swift Package Manager
```swift
// Local AI
.package(url: "https://github.com/otabuzzman/LLM.swift", from: "1.8.0"),

// Document and chat rendering
.package(url: "https://github.com/CoreOffice/CoreXLSX", from: "0.14.2"),
.package(url: "https://github.com/blackhole89/LaTeXSwiftUI", from: "1.5.0"),
.package(url: "https://github.com/gonzalezreal/MarkdownUI", from: "2.0.0"),

// Text and UI
.package(url: "https://github.com/RichTextFormat/RichTextKit", from: "0.9.0"),
.package(url: "https://github.com/scinfu/SwiftSoup", from: "2.6.0"),

// Utilities
.package(url: "https://github.com/weichsel/ZIPFoundation", from: "0.9.0")
```
- iOS 18.0+
- Xcode 15.0+
- Swift 5.9+
- macOS 14.0+ (for development)
- Clone the repository

  ```bash
  git clone https://github.com/CherryHQ/hanlin-ai.git
  cd AI_HLY
  ```

- Open the project

  ```bash
  open AI_HLY.xcodeproj
  ```

- Configure signing
  - Select your development team in Xcode
  - Update the bundle identifier to a unique value

- Run the app
  - Choose a target device or simulator
  - Press `Cmd + R`
- Launch the app and open Settings
- Configure the required API keys in "API Key Management" (a Keychain storage sketch follows this list)
  - OpenAI API key
  - Claude API key
  - Google Gemini API key
  - Other provider keys
- Configure external services in "Tool Settings"
  - Search engine API keys
  - Map service keys
  - Weather service keys
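Keys entered this way need secure storage; below is a minimal Keychain write as one plausible approach, with a service name (`ai.hanlin.apikeys`) that is purely illustrative, not the app's actual identifier:

```swift
import Foundation
import Security

/// Stores an API key in the Keychain, replacing any existing entry for the provider.
func saveAPIKey(_ key: String, provider: String) -> Bool {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "ai.hanlin.apikeys",  // hypothetical service name
        kSecAttrAccount as String: provider
    ]
    _ = SecItemDelete(query as CFDictionary)             // ignore "not found" on first save

    var attributes = query
    attributes[kSecValueData as String] = Data(key.utf8)
    return SecItemAdd(attributes as CFDictionary, nil) == errSecSuccess
}
```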
- Tap the list tab
- Create a new chat with the plus button
- Choose an AI model and parameters
- Start chatting
- Tap the vision tab
- Capture an image that includes text
- Review the extracted text
- Continue with AI analysis (see the request sketch below)
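Text extraction here goes through multimodal models rather than on-device OCR; assuming an OpenAI-compatible vision endpoint (the URL and model name are placeholders), a request could look like this:

```swift
import UIKit

/// Sends an image plus a prompt to an OpenAI-compatible vision endpoint.
/// The endpoint and model name are illustrative placeholders.
func describeImage(_ image: UIImage, prompt: String, apiKey: String) async throws -> String {
    guard let jpeg = image.jpegData(compressionQuality: 0.8) else { return "" }
    let dataURL = "data:image/jpeg;base64," + jpeg.base64EncodedString()

    let content: [[String: Any]] = [
        ["type": "text", "text": prompt],
        ["type": "image_url", "image_url": ["url": dataURL]]
    ]
    let userMessage: [String: Any] = ["role": "user", "content": content]
    let body: [String: Any] = ["model": "example-vision-model", "messages": [userMessage]]

    var request = URLRequest(url: URL(string: "https://api.example.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    guard let json = try JSONSerialization.jsonObject(with: data) as? [String: Any],
          let choices = json["choices"] as? [[String: Any]],
          let reply = choices.first?["message"] as? [String: Any],
          let text = reply["content"] as? String else { return "" }
    return text
}
```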
- Tap the knowledge base tab
- Create a new library
- Upload documents or enter knowledge manually
- Reference the library in chat
```
AI_HLY/
├── AI_HLY/                  # Main app
│   ├── Views/               # UI components
│   │   ├── MainTabView.swift
│   │   ├── ChatView.swift
│   │   ├── VisionView.swift
│   │   └── ...
│   ├── Models/              # Data models
│   │   ├── ChatRecords.swift
│   │   ├── AllModels.swift
│   │   └── ...
│   ├── Services/            # Service layer
│   │   ├── APIServices/
│   │   ├── ChatServices/
│   │   └── ...
│   └── Resources/           # Assets
├── AI_HLY.xcodeproj/        # Xcode project
└── README.md
```
- App entry (`AI_HLY.swift`)
  - SwiftData model container initialization (see the sketch after this list)
  - CloudKit configuration
  - Deep link handling
- Main UI (`MainTabView.swift`)
  - Five-tab navigation
  - Deep link routing
- Data layer (`Models/`)
  - SwiftData model definitions
  - CloudKit integration
  - Data persistence
- Service layer (`Services/`)
  - API communication management
  - Tool system integration
  - External service adapters
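A minimal sketch of the SwiftData-plus-CloudKit initialization referenced above; `ChatRecord` is a stand-in for the real models defined in `Models/`:

```swift
import Foundation
import SwiftData

// Stand-in for the app's real models (e.g. those in ChatRecords.swift).
// CloudKit-synced SwiftData models need default values and no unique constraints.
@Model
final class ChatRecord {
    var title: String = ""
    var createdAt: Date = Date()
    init() {}
}

/// Builds a container that syncs through the app's default CloudKit container.
func makeContainer() throws -> ModelContainer {
    let configuration = ModelConfiguration(cloudKitDatabase: .automatic)
    return try ModelContainer(for: ChatRecord.self, configurations: configuration)
}
```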
This project is licensed under the MIT License. See LICENSE for details.
- Thanks to all AI model providers for their support
- Thanks to the open-source community for the libraries and tools
- Thanks to every contributor
If this project helps you, please give it a ⭐️!
Made with ❤️ by the Hylic.AI team
