Prompting

Chain-of-Thought Prompting

Quick Answer

Requesting explicit step-by-step reasoning to improve accuracy on complex tasks.

Chain-of-thought (CoT) prompting asks a model to write out its reasoning before giving an answer. Simple phrasings such as "Let's think step by step" or "Show your work" are often enough to trigger it. CoT substantially improves accuracy on multi-step reasoning tasks, and it is particularly effective for math and logic problems. The reasoning can be explicit (shown in the output) or implicit (performed internally, as in reasoning-tuned models). On simple tasks, however, CoT adds latency and token cost and can occasionally hurt accuracy. Overall, it is one of the highest-impact prompting techniques.
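A minimal sketch of the technique, independent of any specific LLM SDK: wrap the question with a step-by-step instruction, then parse the final answer out of the completion. The function names, the `Answer:` marker, and the sample completion are illustrative assumptions, not part of any library's API.

```python
# Sketch of chain-of-thought prompting (all names are illustrative,
# not tied to a specific LLM SDK).

def build_cot_prompt(question: str) -> str:
    """Wrap a question with an explicit step-by-step instruction."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step, then state the final answer "
        "on a line starting with 'Answer:'."
    )

def extract_answer(completion: str) -> str:
    """Pull the final answer out of a CoT-style completion."""
    for line in completion.splitlines():
        if line.startswith("Answer:"):
            return line[len("Answer:"):].strip()
    return completion.strip()  # fall back to the whole completion

prompt = build_cot_prompt(
    "A train travels 60 km in 45 minutes. What is its speed in km/h?"
)

# A typical CoT completion might look like this (hypothetical output):
completion = (
    "45 minutes is 0.75 hours.\n"
    "Speed = 60 km / 0.75 h = 80 km/h.\n"
    "Answer: 80 km/h"
)
print(extract_answer(completion))  # → 80 km/h
```

Asking for the answer on a fixed marker line keeps the reasoning visible to the model while making the final answer easy to extract programmatically.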

Last verified: 2026-04-08
