Orivel


AI Pricing Comparison & Best Value Ranking

Compare AI models by both price and performance in one place. This page helps you quickly understand per-1M-token pricing, key differences between models, and which options offer strong overall value. If you want to choose an AI based on more than performance alone, start here.


For Those Who Want to Compare AI by Both Price and Performance

When choosing an AI, optimizing for performance alone can make costs heavier than expected, while optimizing for price alone may lead to disappointing results depending on the task. This page organizes AI models from both angles, using public pricing information and Orivel benchmark data.

Instead of looking only at the cheapest official pricing, you can also review cost under conditions closer to actual use and see which models remain attractively priced relative to their performance. If you want to find an AI that fits your priorities, this page is meant to be a practical starting point.

AI Models with the Lowest Official Pricing

This section compares AI models using the official pricing published by their providers. It is useful if you first want to see the public price tables and get a rough sense of API pricing levels.

Even when a model looks inexpensive on paper, output tokens can grow quickly and billing conditions can vary with usage. This section therefore shows only which models look affordable according to official pricing tables.

If you want a simple first pass, start here to understand the price range, then review measured cost and performance-to-cost as well.
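As a quick illustration of how per-1M-token pricing translates into a per-request cost, the arithmetic can be sketched as follows. The token counts are hypothetical; the rates are GPT-5 mini's listed input and output prices from the table below.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost of a single request, given prices quoted per 1M tokens."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Hypothetical request: 2,000 input tokens and 800 output tokens
# at $0.25 input / $2.00 output per 1M tokens (GPT-5 mini's listed rates).
cost = request_cost(2_000, 800, 0.25, 2.00)
print(f"${cost:.6f}")  # 0.0005 input + 0.0016 output
```

Note how output tokens dominate the total even at modest lengths, which is why listed input prices alone can be misleading.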

#1

Standard paid tier for text, image, and video

Input

$0.10

Output

$0.40

Source: Official pricing

Last checked: 2026-03-20

Notes:

Standard paid tier for text, image, and video. Audio pricing, context caching, storage, grounding, and Maps charges are excluded from the main comparison.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached

#2

GPT-5 mini
OpenAI
Text token pricing basis

Input

$0.25

Output

$2.00

Source: Official pricing

Last checked: 2026-03-20

Notes:

Text token pricing is used as the basis. Cached input and tool-specific fees are excluded from the main comparison.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached

#3

Gemini 2.5 Flash
Google
Standard paid tier for text, image, and video

Input

$0.30

Output

$2.50

Source: Official pricing

Last checked: 2026-03-20

Notes:

Standard paid tier for text, image, and video. Audio pricing, context caching, storage, grounding, and Maps charges are excluded from the main comparison.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached

#4

Claude Haiku 4.5
Anthropic
Base input and output pricing

Input

$1.00

Output

$5.00

Source: Official pricing

Last checked: 2026-03-20

Notes:

Base input and output pricing is used as the basis. Cache writes, cache hits, and batch pricing are excluded from the main comparison.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached

#5

Higher pricing above 200k tokens

Input

$1.25

Output

$10.00

Source: Official pricing

Last checked: 2026-03-20

Notes:

Standard paid tier is used as the basis. Pricing for prompts up to 200k tokens is used for comparison; above 200k tokens, rates rise to $2.50 input and $15.00 output per 1M tokens. Context caching, storage, grounding, and Maps charges are excluded from the main comparison.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached
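The two-tier pricing in the entry above can be sketched in code. This assumes the higher rate applies to the entire request once the prompt exceeds 200k tokens, which is one common billing convention for long-context tiers; confirm the exact rule in the provider's documentation.

```python
def tiered_cost(input_tokens: int, output_tokens: int) -> float:
    """Per-request cost under the two-tier pricing listed above.
    Assumption: once the prompt exceeds 200k tokens, the higher rate
    applies to the whole request (verify against the official docs)."""
    if input_tokens <= 200_000:
        in_rate, out_rate = 1.25, 10.00   # $ per 1M tokens, base tier
    else:
        in_rate, out_rate = 2.50, 15.00   # $ per 1M tokens, above 200k
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A 150k-token prompt stays on the base tier; a 250k-token prompt does not.
print(tiered_cost(150_000, 5_000))
print(tiered_cost(250_000, 5_000))
```

Crossing the threshold roughly doubles the effective rate here, so prompt length matters as much as the listed price for long-context work.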

#6

GPT-5.2
OpenAI
Text token pricing basis

Input

$1.75

Output

$14.00

Source: Official pricing

Last checked: 2026-03-20

Notes:

Text token pricing is used as the basis. Cached input and tool-specific fees are excluded from the main comparison.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached

#7

GPT-5.4
OpenAI
Standard processing under 270K context

Input

$2.50

Output

$15.00

Source: Official pricing

Last checked: 2026-03-20

Notes:

Standard processing rates for context lengths under 270K are used as the basis. Cached input and tool-specific fees are excluded from the main comparison. Data residency and Regional Processing endpoints add a 10% surcharge for GPT-5.4 models.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached

#8

Claude Sonnet 4.6
Anthropic
1M context at standard pricing

Input

$3.00

Output

$15.00

Source: Official pricing

Last checked: 2026-03-20

Notes:

Base input and output pricing is used as the basis. The full 1M-token context window is treated as standard pricing. Cache writes, cache hits, and batch pricing are excluded from the main comparison.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached

#9

Claude Opus 4.6
Anthropic
1M context at standard pricing

Input

$5.00

Output

$25.00

Source: Official pricing

Last checked: 2026-03-20

Notes:

Base input and output pricing is used as the basis. The full 1M-token context window is treated as standard pricing. Cache writes, cache hits, and batch pricing are excluded from the main comparison. Fast mode pricing and US-only inference conditions are also excluded from the main comparison.

Basis: Standard text pricing / Per 1M tokens / Non-batch / Non-cached

AI Models with the Lowest Measured Cost

This section estimates cost from the token usage actually observed under Orivel comparison conditions. Even with the same instruction, models differ in how many input and output tokens they use, so official pricing alone does not always reflect actual cost.

By calculating costs from token usage recorded in Orivel comparisons, this view helps you see costs under conditions closer to actual side-by-side testing. It can reveal differences that are easy to miss from official pricing alone, such as models with low listed pricing but larger outputs, or models with slightly higher pricing but lower total measured cost.

These figures are not guarantees of real-world spending, but they can be a useful comparison axis when a simple pricing table is not enough.

Measured Cost for Task Answers

Average measured cost per answer, calculated from recorded input and output tokens under Orivel task comparison conditions.
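The averaging described above can be sketched as follows. The records and prices here are illustrative, not Orivel's data; the rates used are Gemini 2.5 Flash's listed prices from the section above.

```python
# Illustrative per-answer token logs (not Orivel's actual measurements).
records = [
    {"input_tokens": 1_800, "output_tokens": 1_500},
    {"input_tokens": 2_200, "output_tokens": 1_300},
]
INPUT_PRICE, OUTPUT_PRICE = 0.30, 2.50  # $ per 1M tokens

def avg_measured_cost(records):
    """Average per-answer cost from recorded input/output token usage."""
    totals = [
        r["input_tokens"] / 1e6 * INPUT_PRICE
        + r["output_tokens"] / 1e6 * OUTPUT_PRICE
        for r in records
    ]
    return sum(totals) / len(totals)

print(f"${avg_measured_cost(records):.4f}")
```

This is why a model with low listed pricing can still rank worse here: longer outputs inflate the recorded output tokens, and output tokens carry the higher rate.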

#2

Gemini 2.5 Flash
Google

Avg. input cost

$0.0002

Avg. output cost

$0.0035

Avg. total cost

$0.0037

#3

GPT-5 mini
OpenAI

Avg. input cost

$0.0002

Avg. output cost

$0.0038

Avg. total cost

$0.0040

#4

Claude Haiku 4.5
Anthropic

Avg. input cost

$0.0015

Avg. output cost

$0.0051

Avg. total cost

$0.0066

#5

Avg. input cost

$0.0010

Avg. output cost

$0.0066

Avg. total cost

$0.0076

#6

GPT-5.2
OpenAI

Avg. input cost

$0.0013

Avg. output cost

$0.0204

Avg. total cost

$0.0218

#7

Claude Sonnet 4.6
Anthropic

Avg. input cost

$0.0044

Avg. output cost

$0.0185

Avg. total cost

$0.0229

#8

GPT-5.4
OpenAI

Avg. input cost

$0.0019

Avg. output cost

$0.0231

Avg. total cost

$0.0250

#9

Claude Opus 4.6
Anthropic

Avg. input cost

$0.0073

Avg. output cost

$0.0311

Avg. total cost

$0.0384

Measured Cost for Discussions

Average measured cost per discussion, calculated by combining opening, rebuttal, and closing turns for each participant.
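The per-discussion figure combines several turns, which can be sketched as a simple sum over a participant's turns. Token counts and prices below are illustrative, not Orivel's recorded values.

```python
# Illustrative token usage for one participant's three discussion turns.
# Input tokens grow turn by turn because later turns include prior context.
turns = {
    "opening":  {"input": 1_000, "output": 600},
    "rebuttal": {"input": 2_500, "output": 700},
    "closing":  {"input": 3_200, "output": 500},
}
IN_RATE, OUT_RATE = 0.25, 2.00  # $ per 1M tokens, illustrative

total = sum(
    t["input"] / 1e6 * IN_RATE + t["output"] / 1e6 * OUT_RATE
    for t in turns.values()
)
print(f"${total:.4f}")
```

Because each later turn re-reads the accumulated transcript, discussion costs are more input-heavy than single task answers, which is visible in the input-cost columns below.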

#2

Gemini 2.5 Flash
Google

Avg. input cost

$0.0013

Avg. output cost

$0.0028

Avg. total cost

$0.0041

#3

GPT-5 mini
OpenAI

Avg. input cost

$0.0013

Avg. output cost

$0.0056

Avg. total cost

$0.0069

#4

Claude Haiku 4.5
Anthropic

Avg. input cost

$0.0065

Avg. output cost

$0.0080

Avg. total cost

$0.0145

#5

Avg. input cost

$0.0054

Avg. output cost

$0.0110

Avg. total cost

$0.0164

#6

GPT-5.2
OpenAI

Avg. input cost

$0.0083

Avg. output cost

$0.0205

Avg. total cost

$0.0288

#7

GPT-5.4
OpenAI

Avg. input cost

$0.0118

Avg. output cost

$0.0207

Avg. total cost

$0.0325

#8

Claude Sonnet 4.6
Anthropic

Avg. input cost

$0.0210

Avg. output cost

$0.0281

Avg. total cost

$0.0490

#9

Claude Opus 4.6
Anthropic

Avg. input cost

$0.0387

Avg. output cost

$0.0658

Avg. total cost

$0.1045

AI Models with the Best Overall Value

This section highlights AI models that stand out when price and performance are considered together. The goal is not to rank only the cheapest models, but to surface models that are easier to keep cost-efficient while still delivering useful quality.

A high-end model will not always rank best for value, and the cheapest model will not always be the most practical choice. In reality, the best candidates tend to be models that stay balanced across use case fit, output quality, stability, and benchmark results on Orivel.

This ranking combines the same average score used on the rankings page with an average measured cost that is specific to this pricing page.

Weight settings

You can adjust how much weight to place on performance and cost. Changing that balance also changes the value metric and the ranking order.

The default is 60% performance and 40% cost. The two weights do not need to add up to exactly 100. The entered ratio is applied automatically.

Calculation: Average score = the same value used on the rankings page. It is calculated from benchmark results that include both task answers and discussions.

Average measured cost = the average of the task average measured total cost and the discussion average measured total cost.

The value metric first converts average score and average measured cost to the same 0-100 scale within the current comparison set. Higher average score becomes more favorable, while lower average measured cost becomes more favorable. The final value metric is the weighted combination of those two values.

For example, if you choose 60% for performance and 40% for cost, the value metric reflects that balance directly. Average score and average measured cost are not added directly because they use different units.

Models are ranked from the highest value metric to the lowest. The table also shows the same average score used in rankings and the actual average measured cost for reference.

Only models with both average-score data and measured-cost data for task answers and discussions are included.
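The calculation described above can be sketched as follows. This assumes "the same 0-100 scale" means min-max scaling within the comparison set, which the page does not state explicitly; the model names and numbers are illustrative, not the table's.

```python
def value_metrics(models, perf_weight=60, cost_weight=40):
    """Weighted value metric: min-max-scale score (higher is better) and
    cost (lower is better) to 0-100, then combine by normalized weights.
    The min-max choice is an assumption; the page only says '0-100 scale'."""
    scores = [m["score"] for m in models]
    costs = [m["cost"] for m in models]
    s_span = (max(scores) - min(scores)) or 1.0  # avoid div-by-zero ties
    c_span = (max(costs) - min(costs)) or 1.0
    w_total = perf_weight + cost_weight  # weights need not sum to 100
    result = {}
    for m in models:
        score_scaled = 100 * (m["score"] - min(scores)) / s_span
        cost_scaled = 100 * (max(costs) - m["cost"]) / c_span  # cheaper -> higher
        result[m["name"]] = (perf_weight * score_scaled
                             + cost_weight * cost_scaled) / w_total
    return result

models = [  # illustrative entries only
    {"name": "A", "score": 8.7, "cost": 0.025},
    {"name": "B", "score": 8.5, "cost": 0.005},
    {"name": "C", "score": 7.5, "cost": 0.004},
]
print(value_metrics(models))
```

Dividing by the weight total is what lets the two weights be entered as any ratio: 60/40, 6/4, and 3/2 all produce the same ranking.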

#1

GPT-5.2
OpenAI

Average score

8.74

Average measured cost

$0.0253

Value metric

86.1

#2

GPT-5 mini
OpenAI

Average score

8.46

Average measured cost

$0.0054

Value metric

85.2

#3

GPT-5.4
OpenAI

Average score

8.54

Average measured cost

$0.0288

Value metric

75.4

#4

Claude Sonnet 4.6
Anthropic

Average score

8.49

Average measured cost

$0.0360

Value metric

69.2

#5

Claude Haiku 4.5
Anthropic

Average score

7.96

Average measured cost

$0.0105

Value metric

61.7

#6

Claude Opus 4.6
Anthropic

Average score

8.71

Average measured cost

$0.0715

Value metric

58.4

#7

Average score

7.80

Average measured cost

$0.0120

Value metric

53.9

#8

Average score

7.45

Average measured cost

$0.0039

Value metric

43.9

Important Notes

The pricing information on this page is organized from official provider information, but prices can change without notice. Before making a decision, please also review the latest information on each provider's official page.

Actual AI cost can vary depending on prompt content, output length, enabled features, long-context usage, caching behavior, and other billing conditions. Even with the same instruction, token usage does not always match across models, so a perfectly fair comparison cannot be made from pricing alone.

Measured cost on Orivel is a reference value calculated from token usage under Orivel comparison conditions. It does not guarantee the cost of your own business or personal usage. Results can differ depending on prompts, environment, response length, and operational setup.

A lower price also does not automatically mean a model is the best fit for you. Priorities differ across writing, summarization, idea generation, long-form work, coding, and other tasks. It is best to review price alongside Orivel's comparison results and use-case-specific pages.

Pricing Sources

The pricing information on this page is organized from official pricing pages and official documentation for AI models listed on Orivel. Because pricing and conditions may change, please also check each official source directly for the latest details.
