Fundamentals

Instruction Following

Quick Answer

An LLM's ability to understand and follow explicit instructions in prompts.

Instruction following refers to how well an LLM adheres to the explicit instructions in a prompt. A model with strong instruction following can satisfy detailed constraints: 'Respond in exactly 5 sentences', 'Only use these 10 specific words', or 'Format the output as a markdown table'. Strong instruction following is critical for applications with strict output requirements. It is influenced by training (instruction-tuned models follow instructions better than base models), model scale, and prompt clarity; contradictory instructions or overly complex constraint sets degrade performance. Because adherence varies by task, it is important to test instruction following against your specific use cases.
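One practical way to test instruction following is to validate model outputs programmatically against the constraints you asked for. The sketch below is a minimal, illustrative checker for two of the constraints mentioned above (a fixed sentence count and a restricted vocabulary); the function name and heuristics are this example's own, not a standard API, and real sentence splitting is harder than this regex suggests.

```python
import re

def follows_constraints(response, expected_sentences=None, allowed_words=None):
    """Roughly check a model response against simple output constraints.

    expected_sentences: exact number of sentences required (None = no check).
    allowed_words: a set of permitted lowercase words (None = no check).
    """
    if expected_sentences is not None:
        # Crude heuristic: split on sentence-ending punctuation.
        sentences = [s for s in re.split(r"[.!?]+\s*", response.strip()) if s]
        if len(sentences) != expected_sentences:
            return False
    if allowed_words is not None:
        words = re.findall(r"[a-z']+", response.lower())
        if any(w not in allowed_words for w in words):
            return False
    return True

# Example: enforce 'Respond in exactly 3 sentences'.
print(follows_constraints("One. Two. Three.", expected_sentences=3))  # True
print(follows_constraints("One. Two.", expected_sentences=3))         # False

# Example: enforce 'Only use these specific words'.
print(follows_constraints("yes no yes", allowed_words={"yes", "no"}))   # True
print(follows_constraints("yes maybe no", allowed_words={"yes", "no"})) # False
```

Running a checker like this over many prompts gives a pass rate you can compare across models, which is more informative for your application than a general-purpose benchmark score alone.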

Last verified: 2026-04-08
