Architecture

Rotary Position Embedding (RoPE)

Quick Answer

A positional encoding technique that rotates vectors proportionally to position.

RoPE rotates each query and key vector by an angle proportional to its token position, using a different rotation frequency for each pair of dimensions. Because rotations compose, the dot product between a rotated query and a rotated key depends only on the distance between their positions: encoding absolute position through rotation yields relative-position awareness for free. Compared with learned absolute positional embeddings, RoPE generalizes better to sequences longer than those seen during training, which is one reason it is used in many modern models. It is also simple to implement, adds no learned parameters, and is strong both theoretically and empirically.
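The mechanism above can be sketched in a few lines of NumPy. This is a minimal illustrative implementation, not taken from any particular model's codebase; the function name `rope` and the default `base=10000.0` (the value used in the original RoPE paper) are assumptions for the sketch. The check at the end demonstrates the relative-position property: the query–key dot product is unchanged when both positions shift by the same amount.

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim).

    Each pair of dimensions (2i, 2i+1) is treated as 2D coordinates and
    rotated by angle = position * base**(-2i/dim), so lower dimension
    pairs rotate faster than higher ones.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair inverse frequencies: base**(-2i/dim) for i = 0..half-1
    inv_freq = base ** (-2.0 * np.arange(half) / dim)
    # angles[p, i] = positions[p] * inv_freq[i]
    angles = np.outer(positions, inv_freq)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]      # even / odd components of each pair
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin   # standard 2D rotation
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Relative-position property: shifting both positions by the same
# offset leaves the query-key dot product unchanged.
rng = np.random.default_rng(0)
q = rng.normal(size=(1, 8))
k = rng.normal(size=(1, 8))
d1 = rope(q, np.array([3]))[0] @ rope(k, np.array([1]))[0]   # offset 2
d2 = rope(q, np.array([7]))[0] @ rope(k, np.array([5]))[0]   # offset 2
assert np.isclose(d1, d2)
```

In practice this rotation is applied to the query and key projections inside every attention layer, before the attention scores are computed; values are left unrotated.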

Last verified: 2026-04-08
