Training

Parameter-Efficient Fine-Tuning

Quick Answer

Methods that adapt models to new tasks by updating only a small fraction of total parameters.

Parameter-efficient fine-tuning (PEFT) adapts a pretrained model to a new task by updating only a small fraction of its parameters while the rest stay frozen. Common methods include LoRA, QLoRA, prefix tuning, and adapters. Because gradients and optimizer state are needed only for the trained parameters, PEFT dramatically reduces memory and compute requirements, making fine-tuning feasible on consumer hardware. With careful setup, quality is comparable to full fine-tuning, and PEFT has become the standard approach for model customization.
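The core idea behind LoRA, the most widely used PEFT method, can be sketched in a few lines. This is a minimal illustration, not any specific library's API: all names and shapes below are hypothetical. A frozen weight matrix `W` is adapted through a low-rank update `(alpha / r) * B @ A`, where only the small matrices `A` and `B` are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: in real models d_in and d_out are thousands,
# while the rank r is typically 4-64.
d_out, d_in, r, alpha = 8, 8, 2, 16

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # Effective weight = frozen W plus the scaled low-rank update.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.normal(size=(1, d_in))

# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(lora_forward(x), x @ W.T)

# Trainable parameter count: r*(d_in + d_out) instead of d_in*d_out.
print(A.size + B.size, "trainable vs", W.size, "frozen")
```

The parameter savings grow with layer size: for a 4096×4096 projection at rank 8, the update trains about 65k parameters instead of roughly 16.8 million.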

Last verified: 2026-04-08
