Model Merging

Quick Answer

Combining the weights of multiple models to produce a new model that inherits capabilities from each.

Model merging combines the weights of multiple fine-tuned models into a single model. The simplest approach averages the weights of different adapters or full models. More sophisticated approaches include spherical linear interpolation (SLERP), TIES merging, which resolves conflicts between the models' parameter updates, and learnable merging, where the mixing coefficients are optimized. Merging makes it possible to combine domain expertise from several specialized models without retraining, and it works best when the models were fine-tuned from the same base model so their weights remain compatible. Merged models can sometimes outperform either parent across the combined set of tasks.
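As an illustration, here is a minimal sketch of the two simplest strategies mentioned above, plain weight averaging and per-tensor SLERP, assuming two checkpoints fine-tuned from the same base model so their state dicts share keys and shapes. The function names are illustrative, not from any particular merging library.

```python
import torch

def average_merge(state_a, state_b, alpha=0.5):
    """Linear weight averaging: blend each parameter tensor element-wise."""
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

def slerp_merge(state_a, state_b, t=0.5, eps=1e-8):
    """Spherical linear interpolation (SLERP), applied per parameter tensor."""
    merged = {}
    for k in state_a:
        a = state_a[k].flatten().float()
        b = state_b[k].flatten().float()
        # Angle between the two weight vectors.
        cos = torch.dot(a, b) / (a.norm() * b.norm() + eps)
        omega = torch.acos(cos.clamp(-1.0, 1.0))
        if omega.abs() < eps:
            # Nearly parallel vectors: fall back to a linear blend.
            merged_flat = (1 - t) * a + t * b
        else:
            merged_flat = (
                torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b
            ) / torch.sin(omega)
        merged[k] = merged_flat.view_as(state_a[k]).to(state_a[k].dtype)
    return merged

# Usage (hypothetical models fine-tuned from the same base):
# merged_weights = average_merge(model_a.state_dict(), model_b.state_dict())
# model_base.load_state_dict(merged_weights)
```

Here `alpha` and `t` control how strongly the merged model leans toward either parent; TIES-style merging would additionally trim small parameter deltas and resolve sign disagreements before averaging.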

Last verified: 2026-04-08
