Patterns

Axolotl

Flexible, community-driven framework with extensive model support and YAML-based configuration (a config sketch follows the feature list below).

Easy YAML configuration
Multi-GPU support
Rapid model adoption
Sequence parallelism
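
As a rough illustration of the YAML-driven workflow, the sketch below builds a minimal LoRA config as a Python dict and writes it to YAML. The field names follow common Axolotl example configs, but treat them, the model and dataset ids, and the CLI invocation in the final comment as assumptions to check against the Axolotl docs for your version.

```python
# Illustrative Axolotl-style config, written as a Python dict and dumped to YAML.
# Key names follow common Axolotl examples; verify against the Axolotl docs.
import yaml  # pip install pyyaml

config = {
    "base_model": "meta-llama/Llama-3.1-8B",             # any Hugging Face model id
    "datasets": [{"path": "tatsu-lab/alpaca", "type": "alpaca"}],
    "adapter": "lora",                                    # LoRA instead of full fine-tuning
    "lora_r": 16,
    "lora_alpha": 32,
    "sequence_len": 2048,
    "micro_batch_size": 2,
    "gradient_accumulation_steps": 4,
    "num_epochs": 1,
    "learning_rate": 2e-4,
    "output_dir": "./outputs/lora-run",
}

with open("config.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# Training is then launched from the CLI, e.g. `axolotl train config.yml`
# (older releases used `accelerate launch -m axolotl.cli.train config.yml`).
```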

Unsloth

Built for speed and memory efficiency; the project advertises 2-5x faster training with up to 80% less VRAM (a usage sketch follows the feature list below).

Custom Triton kernels
Extreme memory efficiency
Easy Colab integration
Single-GPU only (open-source version)
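
A minimal sketch of the Unsloth workflow, assuming the FastLanguageModel API shown in the project's notebooks; the model id and LoRA hyperparameters are placeholders.

```python
# Minimal Unsloth sketch: load a model in 4-bit and attach LoRA adapters.
# Model id and hyperparameters are illustrative; check the Unsloth notebooks.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # example pre-quantized model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Wrap with LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# From here, model and tokenizer plug into a standard trainer such as TRL's SFTTrainer.
```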

Torchtune

PyTorch's official fine-tuning library: native PyTorch integration, extensible recipes, and multi-node support (a CLI sketch follows the feature list below).

Pure PyTorch
Hackable recipes
Multi-node training
QAT (quantization-aware training) support
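
Torchtune is driven by its `tune` CLI and YAML recipe configs rather than a bespoke training API. The sketch below simply shells out to that CLI; the recipe and config names are examples (assumptions), and `tune ls` prints the ones actually installed.

```python
# Hedged Torchtune sketch: invoke the `tune` CLI from Python.
import subprocess

# List available recipes and their bundled configs.
subprocess.run(["tune", "ls"], check=True)

# Run a single-device LoRA fine-tune (recipe and config names are examples;
# substitute ones reported by `tune ls`).
subprocess.run(
    [
        "tune", "run", "lora_finetune_single_device",
        "--config", "llama3_2/1B_lora_single_device",
    ],
    check=True,
)
```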

🔧 Other Popular Tools

Hugging Face TRL

Transformer Reinforcement Learning library with trainers for SFT, DPO, and PPO.
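
A minimal supervised fine-tuning sketch using TRL's SFTTrainer; it follows the recent SFTConfig-based API, which may differ on older TRL versions, and the model and dataset ids are just examples.

```python
# Hedged TRL sketch: supervised fine-tuning with SFTTrainer.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Example dataset; any dataset in a format SFTTrainer understands works here.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",        # a string model id is loaded for you
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="./sft-output"),
)
trainer.train()
```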

LLaMA-Factory

Web UI for fine-tuning. Supports multiple models and training methods.

DeepSpeed

Microsoft's distributed-training optimization library, built around the ZeRO memory-sharding stages.
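
A hedged sketch of a ZeRO stage 2 DeepSpeed config written out as JSON; the values are illustrative, and the "auto" entries assume the file is handed to the Hugging Face Trainer via TrainingArguments(deepspeed="ds_config.json").

```python
# Illustrative DeepSpeed config: ZeRO stage 2 with bf16 and optional CPU offload.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                              # shard optimizer state and gradients
        "offload_optimizer": {"device": "cpu"},  # optional CPU offload
        "overlap_comm": True,
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```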

PEFT Library

Hugging Face's library of parameter-efficient fine-tuning methods such as LoRA and prompt/prefix tuning.
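
A minimal LoRA sketch with PEFT; the base model id and target modules are placeholders.

```python
# Hedged PEFT sketch: wrap a causal LM with LoRA adapters.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")  # example model id

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # which linear layers get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows trainable vs. total parameter counts
```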

AutoTrain Advanced

No-code fine-tuning solution with automatic hyperparameter optimization.

MLX (Apple)

Apple's machine-learning framework optimized for Apple Silicon (M1/M2/M3); the mlx-lm package supports LLM fine-tuning.
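
A sketch of LoRA fine-tuning with the mlx-lm package, invoked through its CLI module; the flags, model id, and data layout here are assumptions to verify against the mlx-lm documentation.

```python
# Hedged mlx-lm sketch: LoRA fine-tuning on Apple Silicon via the CLI module.
# Flags and model id are assumptions; check `python -m mlx_lm.lora --help`.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_lm.lora",
        "--model", "mlx-community/Mistral-7B-Instruct-v0.2-4bit",  # example MLX model
        "--train",
        "--data", "./data",   # directory expected to hold train/valid JSONL files
        "--iters", "600",
    ],
    check=True,
)
```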
