Starred repositories
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file.
- Textbook on reinforcement learning from human feedback.
- LLaMA-Omni is a low-latency, high-quality end-to-end speech interaction model built upon Llama-3.1-8B-Instruct, aiming to achieve speech capabilities at the GPT-4o level.
- Code repository for Liquid Time-Constant Networks (LTCs).
- Code for the EMNLP 2024 paper "Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis".
- Create an open-source toy dataset for fine-tuning LLMs with reasoning abilities.
- A comprehensive survey on internal consistency and self-feedback in large language models.
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving*.
- High-quality, streaming speech-to-speech interactive agent in a single file. A streaming, full-duplex speech interaction prototype agent implemented in just one file!
- Code for "Counterfactual Token Generation in Large Language Models", arXiv 2024.
- open-o1: using GPT-4o with CoT to create o1-like reasoning chains.
- In Generative AI with Large Language Models (LLMs), you'll learn the fundamentals of how generative AI works and how to deploy it in real-world applications.
- Code for the EMNLP 2024 Findings paper "Self-supervised Preference Optimization: Enhance Your Language Model with Preference Degree Awareness".
- Parallel Monte Carlo Tree Search with batched rigid-body simulations.
- 🌳 Python implementation of single-player Monte Carlo Tree Search.
- Efficient Triton kernels for LLM training.
- g1: using Llama-3.1 70b on Groq to create o1-like reasoning chains.
- Effective LLM Alignment Toolkit.
- Python wrapper for TA-Lib (http://ta-lib.org/).
- A ranked list of algorithmic-trading open-source libraries, frameworks, bots, tools, books, communities, and education materials. Updated weekly.
- Libtrading, an ultra-low-latency trading connectivity library for C and C++.