Fine-Tuning Guide
Axolotl
Flexible, community-driven framework with extensive model support and YAML configuration.
Unsloth
Speed and memory efficiency champion: claims 2-5x faster training with up to 80% less VRAM.
Torchtune
Official PyTorch library. Native integration, extensible recipes, multi-node support.
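The frameworks above are typically driven by a declarative config rather than training code; Axolotl in particular is configured through a single YAML file. A hedged sketch of what such a config can look like, as a QLoRA run (field names follow Axolotl's published examples, but the model, dataset, and hyperparameter values here are illustrative assumptions, not a tested recipe):

```yaml
# Illustrative Axolotl-style config (values are assumptions, not a tested recipe)
base_model: meta-llama/Llama-3.1-8B
load_in_4bit: true
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
datasets:
  - path: tatsu-lab/alpaca
    type: alpaca
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 0.0002
output_dir: ./outputs
```

The YAML fully describes the run, so experiments are reproducible and diffable without touching Python.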
Other Popular Tools
Hugging Face TRL
Transformer Reinforcement Learning library with SFT, DPO, PPO support.
LLaMA-Factory
Web UI for fine-tuning. Supports multiple models and training methods.
DeepSpeed
Microsoft's distributed training optimization library with ZeRO stages.
PEFT Library
Parameter-Efficient Fine-Tuning methods from Hugging Face.
AutoTrain Advanced
No-code fine-tuning solution with automatic hyperparameter optimization.
MLX (Apple)
Fine-tuning framework optimized for Apple Silicon (M1/M2/M3).
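Several of the tools above (PEFT, Unsloth, Axolotl's adapter modes) build on LoRA: the base weights stay frozen and only a small low-rank update is trained. A minimal from-scratch sketch of that idea in plain Python (the 4x4 weight, rank-1 adapter values, and alpha are illustrative assumptions):

```python
def matmul(A, B):
    # Naive matrix multiply: (n x k) @ (k x m) -> (n x m)
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

# Frozen base weight W (4x4, identity for illustration)
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Trainable low-rank adapters with rank r = 1:
# a down-projection (4 x r) and an up-projection (r x 4)
A_down = [[0.5] for _ in range(4)]
B_up = [[0.1, 0.0, 0.0, 0.0]]
alpha, r = 2.0, 1

# Effective weight: W' = W + (alpha / r) * A_down @ B_up
delta = matmul(A_down, B_up)
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(4)]
         for i in range(4)]
```

Only `A_down` and `B_up` would receive gradients during training, which is why LoRA-style methods fit large models on modest VRAM: the trainable parameter count scales with the rank, not with the base model size.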