MekkCyber/README.md

Hi there

Pinned

  1. huggingface/transformers

    🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

    Python · 134k stars · 26.9k forks

  2. huggingface/accelerate

    🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support

    Python · 7.9k stars · 964 forks

  3. huggingface/nanotron

    Minimalistic large language model 3D-parallelism training

    Python · 1.2k stars · 120 forks

  4. NVIDIA/Megatron-LM

    Ongoing research training transformer models at scale

    Python · 10.5k stars · 2.3k forks

  5. linkedin/Liger-Kernel

    Efficient Triton Kernels for LLM Training

    Python · 3.4k stars · 190 forks

  6. tomaarsen/attention_sinks

    Extend existing LLMs far beyond their original training length with constant memory usage, without retraining

    Python · 666 stars · 40 forks