Stars
Integrate 200+ LLMs with one TypeScript SDK using OpenAI's format.
Educational framework exploring ergonomic, lightweight multi-agent orchestration. Managed by the OpenAI Solutions team.
Go port of Google's libphonenumber library
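A minimal usage sketch of such a port, assuming it exposes a libphonenumber-style parse/validate/format API; the import path below (github.com/nyaruka/phonenumbers) is an assumption, since the listing does not name the repository:

```go
package main

import (
	"fmt"

	// Assumed import path for the Go libphonenumber port; not named in the listing.
	"github.com/nyaruka/phonenumbers"
)

func main() {
	// Parse a raw string, with a default region used for national-format input.
	num, err := phonenumbers.Parse("+1 650 253 0000", "US")
	if err != nil {
		panic(err)
	}
	fmt.Println(phonenumbers.IsValidNumber(num))             // validity check against region metadata
	fmt.Println(phonenumbers.Format(num, phonenumbers.E164)) // +16502530000
}
```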
🔥 Turn entire websites into LLM-ready markdown or structured data. Scrape, crawl and extract with a single API.
Easily start a fake gRPC/gRPC-Web/Connect/REST server from protobufs
Transformer Explained Visually: Learn How LLM Transformer Models Work with Interactive Visualization
System prompts from Apple's new Apple Intelligence on macOS Sequoia
Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚
Prompty makes it easy to create, manage, debug, and evaluate LLM prompts for your AI applications. Prompty is an asset class and format for LLM prompts designed to enhance observability, understand…
BAML is a language that helps you get structured data from LLMs, with the best DX possible. Works with all languages. Check out the promptfiddle.com playground
Go support for Google's protocol buffers
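A minimal sketch of the Go protobuf runtime in use, assuming the google.golang.org/protobuf module; it marshals a well-known type to the wire format and back so no code-generation step is needed:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/structpb"
)

func main() {
	// Build a Struct (a well-known type bundled with the runtime).
	msg, err := structpb.NewStruct(map[string]interface{}{
		"name": "ada",
		"age":  36,
	})
	if err != nil {
		panic(err)
	}

	// Marshal to the protobuf wire format.
	wire, err := proto.Marshal(msg)
	if err != nil {
		panic(err)
	}

	// Unmarshal into a fresh message and read a field back out.
	var decoded structpb.Struct
	if err := proto.Unmarshal(wire, &decoded); err != nil {
		panic(err)
	}
	fmt.Println(decoded.Fields["name"].GetStringValue()) // "ada"
}
```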
Protocol Buffer Validation - Go, Java, Python, and C++ Beta Releases!
The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System
Large Action Model framework to develop AI Web Agents
An alternative to stack traces for your Go errors
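The listing doesn't name that library, so the sketch below shows only the underlying idea using the standard library: annotate errors with call-site context as they propagate and match the original cause with errors.Is, rather than relying on a stack trace.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// loadConfig wraps the low-level error with context using %w, building a
// readable cause chain instead of a stack trace.
func loadConfig(path string) error {
	if _, err := os.ReadFile(path); err != nil {
		return fmt.Errorf("load config %q: %w", path, err)
	}
	return nil
}

func main() {
	err := loadConfig("/etc/app/missing.yaml")
	fmt.Println(err)                            // load config "/etc/app/missing.yaml": open ...: no such file or directory
	fmt.Println(errors.Is(err, os.ErrNotExist)) // the original cause remains matchable
}
```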
[ICML 2024] CLLMs: Consistency Large Language Models
Fully private LLM chatbot that runs entirely in the browser, with no server needed. Supports Mistral and Llama 3.
Fluent — planning, spec and documentation
Run LLMs locally with as little friction as possible.
The Open Source Memory Layer For Autonomous Agents