Fundamentals

One-Shot Learning

Quick Answer

Providing a single example in the prompt to teach the model how to perform a task.

One-shot learning provides exactly one example in the prompt before asking the model to perform a task. This single demonstration can significantly improve performance over zero-shot prompting, especially for structured tasks or unusual output formats. One-shot is useful when you want to show the model a specific format or style without supplying multiple examples, making it a quick way to boost performance while keeping prompts short and token costs low.
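As a minimal sketch of the idea, the snippet below assembles a one-shot prompt: a single worked input/output pair is placed before the real query so the model can infer the expected format. The task, labels, and helper name (`build_one_shot_prompt`) are illustrative assumptions, not tied to any specific model or API.

```python
def build_one_shot_prompt(example_input: str, example_output: str, query: str) -> str:
    """Assemble a one-shot prompt: one demonstration, then the real query."""
    # The single example shows the model the desired output format
    # (here, a hypothetical JSON extraction task).
    return (
        "Extract the product and sentiment as JSON.\n\n"
        f"Input: {example_input}\n"
        f"Output: {example_output}\n\n"
        f"Input: {query}\n"
        "Output:"
    )

prompt = build_one_shot_prompt(
    example_input="The headphones broke after two days.",
    example_output='{"product": "headphones", "sentiment": "negative"}',
    query="Absolutely love this keyboard!",
)
print(prompt)
```

The resulting string would be sent as the prompt to whichever LLM you are using; the model completes the final `Output:` line, typically mirroring the JSON structure shown in the lone example.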

Last verified: 2026-04-08
