Machine Learning Systems
Updated Mar 12, 2026 - JavaScript
Production Android AI with ExecuTorch 1.0 - Deploy PyTorch models to mobile with NPU acceleration and 50KB footprint
LLM inference on mobile via Capacitor — run quantized GGUF models on-device
📱 Optimized ML for edge devices. Showcasing efficient model deployment, GPU-CPU memory transfer optimization, and real-world edge AI applications. 🤖
Claude Code skill for Google LiteRT - on-device AI/ML deployment framework
Android ONNX runtime session management and preprocessing for Dust
A lightweight, mobile-optimized Neural Machine Translation (NMT) framework in PyTorch. LingoLite features a modern transformer architecture with state-of-the-art optimizations for efficient multilingual translation on resource-constrained devices.
Enable seamless integration of Dust LLM capabilities into Capacitor apps for efficient, unified AI model serving across devices.
INT8 quantization of MobileNetV2, for both learning and production-oriented iOS mobile inference.
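The INT8 scheme named in the entry above can be illustrated in plain Python. This is a minimal sketch of affine (asymmetric) quantization over a flat list of floats, not code from the listed repository; function names and the [-128, 127] target range are illustrative assumptions.

```python
def quantize_int8(values):
    """Affine INT8 quantization: map the observed [min, max] range
    onto [-128, 127] via a scale and zero-point.

    Illustrative sketch only, not the listed repo's implementation.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0  # guard against constant inputs
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point


def dequantize_int8(q, scale, zero_point):
    """Recover approximate floats from the INT8 codes."""
    return [(qi - zero_point) * scale for qi in q]
```

Per-tensor schemes like this trade one scale/zero-point pair per tensor for simplicity; production MobileNetV2 deployments typically use per-channel scales for the convolution weights.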
On-device text embedding generation for iOS and Android via Capacitor
Standalone ONNX runtime session management and preprocessing for Dust — iOS/macOS
Model download and serving orchestration for Dust — Capacitor bridge
ONNX model execution on iOS and Android via Capacitor
Android ML model server — download management, session caching, accelerator probing
Magnitude-based pruning of MobileNetV2, for both learning and production-oriented iOS mobile inference.
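Magnitude-based pruning, as named in the entry above, can be sketched in a few lines of plain Python: zero out the fraction of weights with the smallest absolute values. This is an illustrative sketch, not the listed repo's code; the function name and tie-handling are assumptions.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights.

    Returns a new list with pruned entries set to 0.0. Ties at the
    threshold are all pruned, so the achieved sparsity can slightly
    exceed the target. Illustrative sketch only.
    """
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = magnitude of the n_prune-th smallest |w|.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

In a real model this would be applied per layer (or globally across layers) to the weight tensors, usually followed by fine-tuning to recover accuracy.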