About

Why LLMversus exists

Last updated: 2026-04-15

LLMversus compares 27+ large language models side by side on price, speed, and benchmarks. Pricing refreshes from the OpenRouter catalog and provider APIs. Benchmarks come from the papers and evaluation suites cited on each model page. Nothing on the site is static marketing copy.

The problem

Most AI comparison pages are hand-written lists that go out of date the week they ship. Providers move prices every few weeks, benchmark leaderboards churn constantly, and new models land on a schedule no one can keep up with by hand. If you are picking a model for a real workload, the data you find in a blog post is usually wrong by the time you read it.

The approach

The site is a live database with an opinionated editorial layer on top. Pricing is pulled on a scheduled cadence and stored with a timestamp, so every number on the site carries a visible last-updated marker. Benchmarks use MMLU, GPQA, HumanEval, SWE-bench, and Arena Elo as the primary signals, with secondary task-specific scores surfaced where they are useful. We mark deprecated and preview models clearly so you do not build on something that will be pulled.
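The "price plus timestamp" idea can be sketched in a few lines. This is a minimal illustration, not the site's actual code: the class name, fields, and label format are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class PriceSnapshot:
    model_id: str
    input_usd_per_mtok: float   # USD per million input tokens
    output_usd_per_mtok: float  # USD per million output tokens
    fetched_at: datetime        # stored with every price, never displayed without it

    def freshness_label(self, now: Optional[datetime] = None) -> str:
        """Render the visible last-updated marker, e.g. 'updated 3h ago'."""
        now = now or datetime.now(timezone.utc)
        hours = int((now - self.fetched_at).total_seconds() // 3600)
        if hours < 1:
            return "updated <1h ago"
        if hours < 24:
            return f"updated {hours}h ago"
        return f"updated {hours // 24}d ago"
```

Storing the fetch time alongside the number, rather than a site-wide "last updated" date, is what lets every figure on every page carry its own freshness marker.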

Editorial criteria

Inclusion is not pay-to-play. Any production-grade model with a public API and published pricing can land on the site. Rankings on best-of pages follow a fixed rubric that weights price, quality, and throughput for the specific use case. The rubric is documented on each page so you can argue with it.
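A fixed rubric of this kind reduces to a weighted sum over normalized metrics. The sketch below shows the shape of such a scoring function; the weight values and metric names are hypothetical examples, not the published rubric.

```python
from typing import Dict

# Hypothetical weights for one use case; the real values are documented
# on each best-of page. Weights must sum to 1.
WEIGHTS = {"price": 0.3, "quality": 0.5, "throughput": 0.2}

def rubric_score(metrics: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of metrics normalized to [0, 1]; higher is better.

    metrics['price'] is assumed pre-inverted, so a cheaper model
    maps closer to 1.0 rather than 0.0.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * metrics[k] for k in weights)
```

Publishing the weights is what makes the ranking arguable: swap in your own weight dict and the same metrics produce your ordering instead.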

Who built it

LLMversus is built by Aniket Nigam, a solo founder based in India. Previous work includes the PrepAiro IB study app and a handful of smaller utilities. There is no team and no outside capital. That shapes the product: every page has to earn its keep, and anything that feels like filler gets cut.

Where this is going

The comparison engine is the top of the funnel. Beyond it we are building a B2B bill analyzer that ingests invoices from OpenAI, Anthropic, and the other major providers; a stack builder that maps workloads to the right blend of models; and an AI spend management product for finance and IT teams who need visibility into what their company is actually paying. If any of that sounds useful to your team, get in touch.

Feedback

Corrections, model requests, missing benchmarks, and product feedback all go to hello@llmversus.com. Every email gets read.