Anthropic Claude Sonnet 4.6 & Google Gemini 2.5 Pro Pricing Calculator & Chatbot Arena

Anthropic Claude Sonnet 4.6 vs Google Gemini 2.5 Pro: API Pricing Comparison & Performance Calculator

Free tool

Last updated:

Welcome to the ROI chatbot arena. Adjust the sliders below to see which model actually wins on your monthly API bill and production speed. When architecting agentic workflows, the choice between Anthropic Claude Sonnet 4.6 and Google Gemini 2.5 Pro often comes down to a trade-off between raw capability and unit economics. Claude Sonnet 4.6 is widely regarded for its coding performance on benchmarks such as HumanEval, while Gemini 2.5 Pro offers a 58% reduction in input costs, making it the stronger choice for high-volume agentic workflows where cost per request is the primary KPI. Our 2026 analysis provides the data-driven insights you need to optimize that trade-off without overpaying for capability you never use.

Comparative Tables

List $/1M tokens, context limits, and estimated monthly bill for the same workload you configure below—API list math for the first two models in this calculator.

Claude Sonnet 4.6

Anthropic

Input / 1M
$3.00
Output / 1M
$15.00
Context
1.0M tokens
Est. monthly (this workload)
$270.00

Gemini 2.5 Pro

Google Gemini

Input / 1M
$1.25
Output / 1M
$10.00
Context
1.0M tokens
Est. monthly (this workload)
$150.00

Monthly cost bar (same tokens & requests)

Longer bar = higher list spend for the sliders below. Cheaper run for this scenario is highlighted.

Claude Sonnet 4.6
$270.00
Gemini 2.5 Pro
$150.00
Your workload · live math

This was the teaser. The real compare is one scroll away.

Open the full workspace—dial in tokens, requests, vision, batch & agency, then line up as many as four models on that exact scenario. You get true monthly list cost, heuristic performance, and a Final Verdict ranking built for your numbers—not a generic blog table.

  • Live sliders
  • Exact list $
  • Value + verdict
  • 4 model slots
Launch full calculator

Sliders, charts & compare

Compare Models

2 of 4 selected

This page's two models are pre-selected. Add up to four models—sliders and toggles below apply the same usage to every model in the list.

Add a model
Claude Sonnet 4.6
Anthropic
$270
Gemini 2.5 Pro
Google Gemini
$150
BEST
Volume

The Typical API, Heavy RAG, and Max context stress presets set monthly requests and how hard each call uses the token sliders—the stress preset caps tokens per request and trims calls so totals stay readable. Selecting a preset clears any use-case template on the right. Moving the requests slider clears this row; moving input/output clears the tier.

Use Case Templates

Templates set input, output, requests, and the value weights used for the ROI read—touch a token slider and the weights fall back to 50% / 50%. With Deep Reasoning on, output is multiplied by 1.4 before pricing. Selecting a template clears any volume preset on the left.

Include Vision / Image Processing

Off — no image fees for models that support vision.

Turn On to include image fees.

OffOn

Use Cached Pricing

Applies cached input rates where this catalog lists them (OpenAI, Anthropic, Google, …). Models without a cached rate keep list pricing.

OffOn

Quick Markup (Demo)

Add markup for client pricing

OffOn

Deep Reasoning / Thinking Mode

Hidden reasoning / extended thinking tokens are charged like output tokens when enabled.

OffOn

Batch Pricing

Enable for 50% off input & output

OffOn

Price Alert

Get notified when cost exceeds limit

OffOn
Input tokens per request: 8K (range 1K–1.0M) · ≈ $120.00/mo
Output tokens per request: 2K (range 100–500K) · ≈ $150.00/mo
Requests per month: 5K (range 10–100K) · ≈ $270.00 total

Pricing & spend

Cost Analysis & Price per 1M Tokens

You are viewing list vs effective input/output rates per model, plus cached-token and batch notes—all driven by the sliders and toggles above. Monthly totals show who costs most for this exact workload before you jump to benchmarks and specs.
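As a rough check on the figures above, here is a minimal sketch of the list-price math, assuming the workload shown by the sliders on this page (8K input tokens and 2K output tokens per request, 5K requests per month). The function name and the optional toggle arguments are illustrative, not the calculator's actual code; batch applies the 50% discount and Deep Reasoning multiplies output by 1.4, as described in the toggle notes above.

```python
# Minimal sketch of the list-price math behind the monthly estimates on this page.
# Rates and slider values are taken from the page; names are illustrative only.

def monthly_cost(input_rate, output_rate, *, input_tokens=8_000,
                 output_tokens=2_000, requests=5_000,
                 batch=False, deep_reasoning=False, markup=0.0):
    """Estimated monthly list cost in USD for one model."""
    out_tokens = output_tokens * (1.4 if deep_reasoning else 1.0)  # thinking billed as output
    input_cost = requests * input_tokens * input_rate / 1_000_000
    output_cost = requests * out_tokens * output_rate / 1_000_000
    total = input_cost + output_cost
    if batch:
        total *= 0.5                 # batch API: ~50% off input & output
    return total * (1 + markup)      # optional agency markup

claude = monthly_cost(3.00, 15.00)   # -> 270.0
gemini = monthly_cost(1.25, 10.00)   # -> 150.0
print(f"Claude Sonnet 4.6: ${claude:,.2f}/mo   Gemini 2.5 Pro: ${gemini:,.2f}/mo")
```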

Claude Sonnet 4.6

Anthropic

$270.00/mo

Input (list)

$3.00 / 1M

Output (list)

$15.00 / 1M

Effective input / output (this scenario)

$3.000 / 1M in · $15.000 / 1M out

Cached input

No cached input rate in catalog for this model

Batch pricing

Eligible for 50% batch discount — toggle Batch Pricing on to apply

Vision

Up to $0.0030 per image when vision is on

Input $

$120.00

Output $

$150.00

Vision $

$0.00

Gemini 2.5 Pro

Google Gemini

$150.00/mo

Input (list)

$1.25 / 1M

Output (list)

$10.00 / 1M

Effective input / output (this scenario)

$1.250 / 1M in · $10.000 / 1M out

Cached input

No cached input rate in catalog for this model

Batch pricing

Eligible for 50% batch discount — toggle Batch Pricing on to apply

Vision

Up to $0.0070 per image when vision is on

Input $

$50.00

Output $

$100.00

Vision $

$0.00

Monthly cost stack

Live

Stacked spend by model — input, output, and vision from your sliders.

Input tokens

8K

per request

Output tokens

2K

per request

Images

vision off

Legend: Input · Output · Vision

Price Comparison

Claude Sonnet 4.6
$270.00
Value: 95
Gemini 2.5 Pro
$150.00
Value: 96
Best Input Price
Gemini 2.5 Pro
$1.250/1M
Best Output Price
Gemini 2.5 Pro
$10.00/1M
Largest Context
Gemini 2.5 Pro
1,048,576 tokens
Best value (heuristic)
Gemini 2.5 Pro
96 / 100
Quality per $ vs selected models (respects Vision / Thinking toggles).
Lowest monthly (this workload)
$150.00
Gemini 2.5 Pro

Your Cost Estimate

All selected models — same workload & toggles

Up to $120.00/mo ($1.44K/yr) less with Gemini 2.5 Pro vs Claude Sonnet 4.6 for this workload.

Anthropic

Claude Sonnet 4.6

$270.00

per month

Per request

$0.054000

Per 1K tokens

$0.0180

Google Gemini

Gemini 2.5 Pro

Cheapest for this workload — same sliders & toggles as above; lowest projected monthly cost in your compare list.

$150.00

per month

Per request

$0.030000

Per 1K tokens

$0.0112

Pricing updated …
Expert verdict

The 2026 Performance-per-Dollar Ranking

Your custom ranking based on your specific token volume. We estimate ROI by dividing catalog benchmark scores by your live estimated monthly cost; the same global list rates apply whether you buy from the US, Canadian, or Australian market.
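For readers who want to see the shape of that calculation, here is one plausible reading of a quality-per-dollar score, using the catalog composite scores and the monthly estimates from this page. The exact weighting the calculator uses (which also reflects the Vision and Thinking toggles) is not published, so the numbers below are an illustration, not a reproduction of the value scores shown elsewhere on the page.

```python
# One plausible performance-per-dollar reading: catalog composite score divided
# by estimated monthly cost, expressed as a percentage of the leader.

models = {
    "Claude Sonnet 4.6": {"composite": 94, "monthly_cost": 270.00},
    "Gemini 2.5 Pro":    {"composite": 87, "monthly_cost": 150.00},
}

raw = {name: m["composite"] / m["monthly_cost"] for name, m in models.items()}
best = max(raw.values())
for name, score in sorted(raw.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * score / best:.0f}% of leader")
# Gemini 2.5 Pro leads; Claude Sonnet 4.6 lands around 60% of the leader here.
```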

Best value in this compare

Gemini 2.5 Pro

Highest quality-per-dollar for your numbers below. Others are shown as a % of this leader.

100% value score

Match your goal

Four angles on the same compare — choose the story that matches what you optimize for.

Editor's pick · Best overall value

Pick Gemini 2.5 Pro when…

  • You want the best blend of capability score vs. monthly cost for this workload.
  • You’re comparing ROI across providers and care about “bang per dollar,” not just the lowest list price.
  • You’re okay using our catalog benchmark index — not a live benchmark run.

Lowest cost

Pick Gemini 2.5 Pro when…

  • Monthly spend must stay as low as possible for this token mix and request volume.
  • You’re prototyping, staging, or running high-volume tests where cost dominates.
  • You can trade some headroom on “quality index” for predictable savings.

Top quality

Pick Claude Sonnet 4.6 when…

  • Output quality and capability matter more than saving a few dollars per month.
  • You’re shipping customer-facing or compliance-sensitive flows.
  • You want the strongest catalog benchmark “quality” score in your current compare set.

Largest context

Pick Gemini 2.5 Pro when…

  • You need the largest context window for long docs, RAG bundles, or huge prompts.
  • You’re near the model’s context limit today and want more room before chunking.
  • You’re optimizing for “fits in one shot” over raw $/token.

Need a shareable artifact?

Get a print-ready PDF of your results and a CSV spreadsheet of your model comparison. Tap the button, then enter your work email. We use it to build your files and start the download—and to email you a copy if the site owner enabled that.

Head-to-head model compare by LeadsCalc

Detailed Analysis

PDF Breakdown

Receive a comprehensive native vector PDF report with unit economics, benchmarks, and illustrative charts from your current settings. Includes this session's lineup (Claude Sonnet 4.6 · Gemini 2.5 Pro).

Instant Setup
No CC Required

By submitting, you agree to our Privacy Policy and Terms.

Agency Accelerator

Whitelabel Anthropic Claude Sonnet 4.6 Calculator

Embed this Anthropic Claude Sonnet 4.6 cost surface on your own domain — whitelabel branding, lead capture, and the same sliders your prospects already trust on LeadsCalc.

1-Click CRM Sync
Custom Branding
Branded Reports
Lead Analytics

FREE TO START

$0/mo*

NO CREDIT CARD REQUIRED

Chatbot Arena Matchup: Claude Sonnet 4.6 vs Gemini 2.5 Pro Pros & Cons

Anthropic Claude Sonnet 4.6

Best for: Production coding agents, document analysis, and long-context tasks

Pros

  • Strong balance of speed and reasoning for production workloads
  • Very large context window for long docs and big codebases in one request
  • Solid fit for multi-step coding and careful instruction following
  • Prompt caching (when enabled) lowers cost on repeated system context
  • Usually feels a bit snappier in this pairing: our speed hint is 80/100 vs 50/100 (Balanced tier), and it is typically snappier than dedicated reasoning-only models for interactive apps.
  • Higher overall catalog benchmark composite (94/100 vs 87/100)—still not a lab benchmark, just a guide.
  • Coding benchmark leans here (95/100 vs 88/100)—verify with your own tests.
  • Supports vision/images

Cons

  • More expensive input tokens
  • More expensive output tokens
  • Smaller context window (1000k)
  • Output pricing is higher than budget or flash-tier models
  • No first-party fine-tuning like some OpenAI workflows

Google Gemini 2.5 Pro

Best for: High-volume text processing, RAG, and fast chat

Pros

  • Extremely fast generation speed
  • Highly cost-effective for scale
  • 58% cheaper input tokens
  • 33% cheaper output tokens ($10 vs $15 per 1M)
  • Larger context window (2000k vs 1000k)
  • Supports vision/images

Cons

  • Speed hint trails the other model here (50/100 vs 80/100), even though its "Fast (latency-friendly)" label is the tier blurb for flash and mini models that optimize for low latency per dollar.
  • Lower overall catalog benchmark composite in this pair (87/100 vs 94/100).
  • Coding benchmark is lower than the other model (88/100 vs 95/100).
  • Struggles with highly complex reasoning

Model Profiles & Details

Anthropic Claude Sonnet 4.6

Anthropic Claude Sonnet 4.6 is offered by Anthropic as part of its hosted API lineup. List prices here are $3 per million input tokens and $15 per million output tokens. It accepts images via the API; our catalog lists roughly $0.016 per image. On our catalog benchmarks (0–100, not official vendor scorecards) it scores composite 94, coding 95, logic/reasoning 95, math 92, and instruction following 93. For UX speed orientation we show a speed score of 80/100 and label it "Balanced": typically snappier than dedicated reasoning-only models for interactive apps. The context window is 1,000,000 tokens, which is very large: whole codebases or book-scale text in one shot (watch cost), and fewer chunks for long PDFs or repos (still extract text per API rules). Tools: strong, with standard tool/function patterns on the hosted API. JSON outputs: yes, JSON and schema-style outputs are widely used. Prompt caching: often supported; enable it in the calculator when the catalog lists a cached rate. Catalog benchmarks (0–100) are manually maintained model-level scores; verify on your own evals.

Google Gemini 2.5 Pro

Google Gemini 2.5 Pro is offered by Google as part of the hosted Gemini API lineup. List prices here are $1.25 per million input tokens and $10 per million output tokens. It accepts images via the API; our catalog lists roughly $0.00722 per image. On our catalog benchmarks (0–100, not official vendor scorecards) it scores composite 87, coding 88, logic/reasoning 85, math 90, and instruction following 85. For UX speed orientation we show a speed score of 50/100 and label it "Fast (latency-friendly)": flash and mini tiers optimize for low latency per dollar. The context window is 2,000,000 tokens, which is very large: whole codebases or book-scale text in one shot (watch cost), and fewer chunks for long PDFs or repos (still extract text per API rules). Tools: strong, with standard tool/function patterns on the hosted API. JSON outputs: yes, JSON and schema-style outputs are widely used. Prompt caching: depends on the provider; use the catalog cached rate when shown. Catalog benchmarks (0–100) are manually maintained model-level scores; verify on your own evals.

Price + performance hints

Deep dive comparison: Anthropic Claude Sonnet 4.6 vs Google Gemini 2.5 Pro
API pricing, speed hints, and where each model shines

Choosing between Anthropic Claude Sonnet 4.6 and Google Gemini 2.5 Pro affects your monthly API bill and how snappy your app feels. Skip the hype. Use the calculator above for dollars, then use this page for context limits, caching, and our plain-language hints on speed (80/100 vs 50/100) and rough "smarts" (94/100 vs 87/100). Those hints come from catalog and provider family signals—they are not lab benchmarks—so still try both on real tasks.

Regional latency & availability

API latency and failover paths depend on where you host and which provider region you call. Teams in Australia often verify Sydney (ap-southeast-2) or Singapore edges; US buyers standardize on us-east-1 / us-west-2; Canada frequently maps to the same US regions or dedicated CA endpoints where offered. Our list prices are global list rates—map the model to your closest allowed region in the provider console, then re-run the workspace above with your real traffic split so CFOs and CTOs see numbers tied to production, not a generic blog table.

Anthropic Claude Sonnet 4.6

Anthropic

Input
$3.00 per 1M tokens
Output
$15.00 per 1M tokens
Context
1000k max tokens

Google Gemini 2.5 Pro

Google Gemini

Input
$1.25 per 1M tokens
Output
$10.00 per 1M tokens
Context
2000k max tokens

Performance snapshot (hints, not benchmarks)

For “how quick it usually feels” in our rough scale, Anthropic Claude Sonnet 4.6 sits a little higher (80/100 vs 50/100). That is not a live benchmark—just a hint from model family and catalog signals. For overall quality hints, Anthropic Claude Sonnet 4.6 edges ahead (94/100 vs 87/100). For coding-style strength hints, Anthropic Claude Sonnet 4.6 is a bit higher (95/100 vs 88/100). Always run a few real prompts that matter to you.

                                   Claude Sonnet 4.6   Gemini 2.5 Pro
Speed hint (rough latency vibe)    80/100              50/100
Tier label (how we bucket it)      Balanced            Fast (latency-friendly)
Overall smarts (not official)      94/100              87/100
Coding hint (heuristic)            95/100              88/100

Catalog Benchmarks (0–100). Manually maintained model-level scores; verify on your own evals. Same idea applies to both sides—use these rows as a starting point, not a verdict.

Core pricing

Input token cost comparison calculator

Every prompt, document, and system message costs input tokens. Anthropic Claude Sonnet 4.6 is $3 per million input tokens; Google Gemini 2.5 Pro is $1.25. For read-heavy workloads, Gemini 2.5 Pro wins. If you process huge documents daily, that gap adds up fast—pick Gemini 2.5 Pro over Claude Sonnet 4.6 when quality is similar. Use our calculator above to see exact input costs.

Output token cost comparison calculator

Output tokens are what the model generates, and they are usually pricier than input. Anthropic Claude Sonnet 4.6 charges $15 per million output tokens; Google Gemini 2.5 Pro charges $10. For long answers, code, or reports, favor Gemini 2.5 Pro. Tight prompts ("answer in one paragraph") cut spend on either side. Our calculator helps you estimate these output costs accurately.

Context window: Anthropic Claude Sonnet 4.6 vs Google Gemini 2.5 Pro

Context is how much text fits in one request. Anthropic Claude Sonnet 4.6 allows up to 1,000,000 tokens; Google Gemini 2.5 Pro allows up to 2,000,000. Gemini 2.5 Pro fits longer docs or repos—but you pay for every token you send, every turn. Do not max the window unless you need it. In plain words, both are very large: whole codebases or book-scale text in one shot (watch cost).

Vision and image processing

Claude Sonnet 4.6 supports vision (about $0.016 per image in our catalog). Gemini 2.5 Pro supports vision (about $0.00722 per image). Resize images before the API when you can—it lowers token load and cost.
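If your traffic includes images, the add-on is easy to estimate from the catalog per-image fees quoted above. The image volume below is an assumption for illustration; swap in your own numbers.

```python
# Rough vision add-on math using this page's catalog per-image fees.
# The image counts are assumptions - set them to your own traffic.
requests_per_month = 5_000
images_per_request = 2

claude_vision = requests_per_month * images_per_request * 0.016    # ~$160/mo
gemini_vision = requests_per_month * images_per_request * 0.00722  # ~$72/mo
print(f"Claude vision: ${claude_vision:,.2f}/mo   Gemini vision: ${gemini_vision:,.2f}/mo")
```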

Prompt caching

Reusing the same long context? Caching can slash input cost. Claude Sonnet 4.6 does not show a cached rate in our data. Gemini 2.5 Pro does not show a cached rate here. Great for chat over one big PDF or policy doc.

Batch APIs and Claude Sonnet 4.6 / Gemini 2.5 Pro

If you do not need instant replies, batch jobs often run at a steep discount (often around half off list price, depending on the provider). Ship a file of requests, get results within about a day. Ideal for summaries, translations, and backfills. Use the calculator toggles above to see how batch mode changes your estimate.

Use cases

Which model fits chatbots?

Chats repeat system prompts and history every turn, so a short user message can still bill thousands of input tokens. Lower input price helps—Google Gemini 2.5 Pro is usually safer for high-volume chat. On our speed hints, Anthropic Claude Sonnet 4.6 is 80/100 (Balanced) and Google Gemini 2.5 Pro is 50/100 (Fast, latency-friendly). If one is clearly ahead on both price and speed hint, that is a nice combo for live chat—but slow networks or huge prompts can still swamp the difference, so try a realistic thread in your region.
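To see why the repeated context dominates, here is a small per-turn cost sketch. The system-prompt, history, and reply token counts are assumptions for illustration; the rates are this page's list prices.

```python
# Rough sketch of why chat turns bill far more input than the visible user
# message: the system prompt and prior history ride along on every call.

def turn_cost(input_rate, output_rate, system=1_000, history=3_000,
              user_msg=50, reply=400):
    billed_input = system + history + user_msg
    return (billed_input * input_rate + reply * output_rate) / 1_000_000

print(f"Claude Sonnet 4.6: ${turn_cost(3.00, 15.00):.4f} per turn")
print(f"Gemini 2.5 Pro:    ${turn_cost(1.25, 10.00):.4f} per turn")
```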

Which model fits data extraction?

Extraction needs accuracy and often a large context for messy PDFs. Try both Anthropic Claude Sonnet 4.6 and Google Gemini 2.5 Pro on real samples. If quality matches, pick the cheaper input side—extraction is usually input-heavy.

Which model fits coding?

Coding rewards reliability over saving a few cents. Bad output costs engineer time. Our coding-strength hints (again, heuristics) put Anthropic Claude Sonnet 4.6 at 95/100 and Google Gemini 2.5 Pro at 88/100, with broader "smarts" hints at 94/100 vs 87/100. Between this pair, favor whichever passes your tests on your stack traces and style rules; if quality is a tie, output price leans toward Gemini 2.5 Pro for long patches.

Architecture & ops

Hidden cost: system prompts

System prompts ride along on every call. Example: 1,000 tokens × 100,000 requests per day ≈ 100M input tokens daily. At $3 per million for Anthropic Claude Sonnet 4.6, that is about $300.00 per day from the system prompt alone. Keep instructions short and reusable.
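Spelled out as code, the arithmetic from that example looks like this (same numbers as above).

```python
# The system-prompt arithmetic from the paragraph above, spelled out.
system_prompt_tokens = 1_000
requests_per_day = 100_000
input_rate_per_million = 3.00                 # Claude Sonnet 4.6 list input rate

daily_tokens = system_prompt_tokens * requests_per_day          # 100M tokens/day
daily_cost = daily_tokens / 1_000_000 * input_rate_per_million
print(f"System prompt alone: ${daily_cost:,.2f}/day")           # $300.00/day
```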

RAG and retrieval costs

RAG sends retrieved chunks with each question. More chunks mean more input tokens to Anthropic Claude Sonnet 4.6 or Google Gemini 2.5 Pro. Tighten retrieval: send only the best few passages, not whole folders.

Fine-tuning vs longer prompts

Long prompts tax you on every request. Fine-tuning costs upfront but can shorten prompts. Compare total cost in our calculator: a long prompt on a cheap base model vs a short prompt plus fine-tuned pricing, if you go that route.
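A hypothetical side-by-side of the two approaches is sketched below. The fine-tuned per-token rates, the fixed monthly amortization, and the token counts are placeholders, not real list prices; substitute your own figures before deciding.

```python
# Hypothetical comparison: long prompt on a cheap base model vs short prompt on
# a fine-tuned model. All rates and counts below are placeholders.

def monthly(prompt_tokens, output_tokens, requests, in_rate, out_rate, fixed=0.0):
    return fixed + requests * (prompt_tokens * in_rate + output_tokens * out_rate) / 1e6

long_prompt = monthly(6_000, 500, 50_000, in_rate=1.25, out_rate=10.00)
fine_tuned  = monthly(800, 500, 50_000, in_rate=3.00, out_rate=12.00, fixed=400.0)
print(f"Long prompt: ${long_prompt:,.0f}/mo   Fine-tuned: ${fine_tuned:,.0f}/mo")
```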

Agents and loops

Agents may call Anthropic Claude Sonnet 4.6 or Google Gemini 2.5 Pro many times per user task, so one workflow can equal dozens of normal chat turns. Cap steps, log spend, and alert on spikes.
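A minimal guardrail sketch is shown below, assuming a per-call cost of $0.054 (this page's Claude per-request figure for the current sliders). call_model() is a stand-in stub, not a real SDK call; wire in your own client and cost estimate.

```python
# Illustrative agent-loop guardrails: cap the number of steps and stop when
# estimated spend passes a per-task budget.

MAX_STEPS = 12
BUDGET_USD = 0.50  # per user task

def call_model(prompt):
    """Stand-in stub: pretend each call costs $0.054 (this scenario's per-request cost)."""
    return f"step result for: {prompt}", 0.054

def run_agent(task):
    spent = 0.0
    for step in range(MAX_STEPS):
        reply, cost = call_model(task)
        spent += cost
        if spent > BUDGET_USD:
            print(f"ALERT: '{task}' exceeded ${BUDGET_USD:.2f} after {step + 1} calls (${spent:.2f})")
            break
    return spent

run_agent("summarize ticket backlog")
```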

Business & strategy

Agencies and client markup

Bill clients for API usage you resell. Use Agency Mode in the calculator for markup, client price, and margin—plus PDFs for proposals.
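For the markup math itself, a short sketch covers it; the 40% markup is an assumption, and the base cost is this page's Gemini 2.5 Pro monthly estimate.

```python
# Agency markup math: client price and margin on resold API usage.
api_cost = 150.00   # monthly Gemini 2.5 Pro estimate from this page
markup = 0.40       # 40% markup - pick your own

client_price = api_cost * (1 + markup)
margin = client_price - api_cost
print(f"Client price: ${client_price:,.2f}/mo   Margin: ${margin:,.2f}/mo")
# Client price: $210.00/mo   Margin: $60.00/mo
```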

Billing SaaS customers for AI

Flat plans get burned by power users on Anthropic Claude Sonnet 4.6 or Google Gemini 2.5 Pro. Credits or BYOK (bring your own key) align revenue with cost.

Track real usage

Dashboards, alerts, and tools like Helicone or Langfuse show who burns tokens and which prompts bloat bills. Measure before you optimize.

Landscape

Other models to consider

Beyond this pair, consider OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet, or Google Gemini 1.5 Pro for price or capability fit. Design your stack so you can swap models without a rewrite.

Where API pricing is heading

List prices keep falling, but workloads get heavier—bigger contexts, agents, more tools. Net spend can still climb. Keep a running estimate whenever you change models or traffic.

Speed and latency (TTFT / TPS)

Cost is not everything. Claude Sonnet 4.6 carries a speed hint of 80/100 (Balanced) and is typically snappier than dedicated reasoning-only models for interactive apps; Gemini 2.5 Pro sits at 50/100 (Fast, latency-friendly), a tier blurb that mainly describes flash and mini models optimized for low latency per dollar. In production you still want time-to-first-token and tokens per second on your prompts, region, and concurrency—especially for voice, typing indicators, or anything that feels "live."

Security and data handling

Check training, retention, and region rules for each provider behind Claude Sonnet 4.6 and Gemini 2.5 Pro. Regulated data needs enterprise terms, not guesswork.

Open weights vs closed APIs

Proprietary APIs are simple but price-controlled. Open models (e.g. Llama family) add ops work but can cut unit cost at scale. Match the tradeoff to your team.

Embed this comparison on your site

Consultants can embed this Claude Sonnet 4.6 vs Gemini 2.5 Pro experience white-label, capture emails with PDF reports, and turn pricing questions into leads—free with LeadsCalc.

Dollar figures reflect catalog pricing; speed and “smarts” rows are in-house hints, not vendor benchmarks. Confirm rates and run your own latency tests before you commit.

Final Analysis & ROI Verdict

Final Verdict: If your LLM deployment is cost-sensitive and volume-heavy, Gemini 2.5 Pro is the logical choice to maximize ROI. Reserve Claude Sonnet 4.6 for the 5% of tasks that demand top-tier coding and reasoning quality.

Explore the Chatbot Arena: More Head-to-Head Matchups

While traditional chatbot arenas measure human preference (vibes), the LeadsCalc arena measures hard ROI. We pit models against each other based on cost-per-1M tokens, context windows, and latency.

More side-by-side API pricing calculator pages (for people and search). Each link opens an interactive cost calculator with the same breakdown style as this page. Use our calculator to evaluate different models and price tiers.

Frequently Asked Questions

Pricing, speed hints, and rough "smarts" scores for Anthropic Claude Sonnet 4.6 vs Google Gemini 2.5 Pro

For startups scaling on a budget, Gemini 2.5 Pro is the clear winner for ROI optimization, offering significantly lower entry costs. However, if your app requires maximum instruction-following precision, the premium for Claude Sonnet 4.6 may be justified by its higher accuracy.