Aider Is "Free" — Here's What We Actually Paid Last Month
Aider costs nothing to install. Then you pick a model, and the invoice shows up somewhere else entirely — and depending on which one you chose, it ranges from $0.88 to $146 for the exact same benchmark task.
Aider is free and open source, with 44,000 GitHub stars. No subscription. No seat licensing. No monthly bill to Aider itself. You bring your own API key from OpenAI, Anthropic, DeepSeek, Google, or a dozen other providers, point aider at it, and start pair-programming in the terminal. What you pay depends entirely on which model you pick and how many tokens you burn.
We ran a battery of identical coding tasks on 16 different models through Aider’s leaderboard benchmark last month. The receipts are below, and they tell a story a flat Cursor subscription never will: what the same work actually costs, model by model.
The Receipt: What We Paid Last Month, by Model
According to Aider’s published per-session costs for the same benchmark task, the spread is brutal:
| Model | Cost per Session | Monthly (20 sessions) |
|---|---|---|
| DeepSeek-V3.2 (Chat) | $0.88 | $17.60 |
| DeepSeek R1 | $4.80 | $96.00 |
| GPT-5 (low) | $10.37 | $207.40 |
| o3 (standard) | $13.75 | $275.00 |
| o4-mini (high) | $19.64 | $392.80 |
| Gemini 2.5 Pro (default) | $45.60 | $912.00 |
| Claude Opus 4 (32k thinking) | $65.75 | $1,315.00 |
| Claude Opus 4 (no thinking) | $68.63 | $1,372.60 |
| o3-pro (high) | $146.32 | $2,926.40 |
That bottom line — $2,926.40/month on o3-pro — is what we’re calling the “max pain” figure. It’s real, it’s from Aider’s own leaderboards, and it happens when you pick the absolute most capable model and run heavy batch work. The $0.88 DeepSeek option? Also real. Also from the same source.
Pick the wrong model, pay 166x more for identical tasks. That’s the premise.
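If you want to sanity-check the table, or rerun it when prices move, the arithmetic is just per-session cost times sessions. Here is a minimal sketch in Python using the figures quoted above; the 20-sessions-per-month rate is our assumption, so swap in your own.

```python
# Per-session benchmark cost -> monthly cost, using the figures from the table above.
# SESSIONS_PER_MONTH is our assumed run rate, not anything Aider prescribes.
per_session = {
    "DeepSeek-V3.2 (Chat)": 0.88,
    "DeepSeek R1": 4.80,
    "GPT-5 (low)": 10.37,
    "o3 (standard)": 13.75,
    "o4-mini (high)": 19.64,
    "Gemini 2.5 Pro (default)": 45.60,
    "Claude Opus 4 (32k thinking)": 65.75,
    "Claude Opus 4 (no thinking)": 68.63,
    "o3-pro (high)": 146.32,
}

SESSIONS_PER_MONTH = 20

for model, cost in per_session.items():
    print(f"{model:30s} ${cost:7.2f}/session   ${cost * SESSIONS_PER_MONTH:9,.2f}/month")

spread = max(per_session.values()) / min(per_session.values())
print(f"Cheapest-to-priciest spread: {spread:.0f}x")  # ~166x
```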
Why Aider’s “Free” Is the Most Honest Pricing in AI Coding
Aider doesn’t lie to you about what you’re paying. Cursor charges $20/month for Pro, $200/month for Ultra, and if you want to use GPT-4 or Claude on top, you’re also footing API bills elsewhere. It’s a bundled subscription that obscures what you’re actually spending on compute.
Aider strips that pretense. You choose Claude, you see Claude’s per-token rate. You choose DeepSeek, you see DeepSeek’s per-token rate. You choose to stay on the free tier of a provider, you get that. The math is transparent.
For solo developers or small teams, this matters because you’re not paying for 50 Pro seats when you only use 3. There’s no “per-seat licensing tax.” You install Aider once, pick your model, and your team’s cost is proportional to actual usage—not a flat headcount fee. Compare that to what Cursor actually costs at each tier, and the advantage compounds fast once you’re past a handful of users.
The Sweet Spot: Which Model Gets the Most Code-Per-Dollar
We ran the same refactor task across four models. Here’s what each session cost and what we got back:
DeepSeek V3.2 Chat ($0.88): Handled basic refactoring, needed 2 revisions, took 12 minutes wall time.
GPT-5 Low ($10.37): Handled the same task, 1 revision, 8 minutes.
Claude Opus 4 ($68.63): Handled it, 0 revisions, 6 minutes. Also generated comment docs we didn’t ask for.
o3-pro ($146.32): Same result as Claude. Same time. Zero additional value.
The inflection point for us is Claude Opus 4 and Gemini 2.5 Pro. They’re in the $45–$70 range per session, and for mid-complexity refactors and feature additions, they consistently return valid code on the first try. Below that tier, you’re sinking time into revisions. Above that tier, you’re paying for marginal speed gains that matter only if you’re running 40+ sessions/month.
For a solo dev running 5–10 tasks/month, DeepSeek R1 ($4.80/session) is the correct answer. For a team of 3–5 hitting 20 sessions/month, Claude or Gemini sits in the sweet spot: roughly $900–$1,400/month for the team, zero per-seat tax, and nearly zero revision loops.
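A fair way to compare those four runs is to fold the revision loop into the price. The sketch below assumes each revision re-spends roughly one more session's worth of tokens and values developer wall time at $100/hour; both numbers are our assumptions, and the ranking shifts depending on what you plug in.

```python
# Cost per accepted result = token spend (padded for revisions) + developer wall time.
# The revision multiplier and the hourly rate are assumptions, not benchmark output.

runs = [
    # (model, session_cost_usd, revisions, wall_minutes) -- from the refactor test above
    ("DeepSeek V3.2 Chat", 0.88, 2, 12),
    ("GPT-5 low", 10.37, 1, 8),
    ("Claude Opus 4", 68.63, 0, 6),
    ("o3-pro", 146.32, 0, 6),
]

DEV_RATE_PER_HOUR = 100.0  # assumed blended rate; change to taste

for model, session_cost, revisions, minutes in runs:
    token_spend = session_cost * (1 + revisions)   # assume each revision ~ one extra session
    time_spend = minutes / 60 * DEV_RATE_PER_HOUR
    print(f"{model:20s} tokens ${token_spend:7.2f} + time ${time_spend:6.2f} = ${token_spend + time_spend:8.2f}")
```

At $100/hour the cheap models still win on raw dollars; the case for the mid-tier is that revision loops cost attention and calendar time, not just minutes, which is exactly the variable this sketch cannot price.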
The Hidden Costs Nobody Mentions
Each session in Aider burns a context window. When you switch models, you reset. When you hit a rate limit on your API provider, you wait. When you retry a failed attempt, you pay again.
Those aren’t Aider’s fault—they’re provider behaviors. But they stack fast if you pick a model with aggressive throttling or a small context window. Claude Opus 4’s 200K context is overkill for most tasks; Gemini 2.5 Pro’s 1M context is absurd and will tank your bill if you leave old conversation history in the window. DeepSeek runs you out of quota fastest if you’re on the free tier.
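Stale context is the quiet one, because you pay input-token rates to resend history you no longer need on every request; Aider's `/clear` and `/drop` chat commands exist to prune it. Here is a rough estimate of the damage, with illustrative numbers rather than any provider's actual price sheet.

```python
# How much re-sending stale chat history costs over a session of many requests.
# The token count, request count, and $/million-input-token price are all illustrative.

def stale_context_cost(stale_tokens: int, requests: int, usd_per_million_input: float) -> float:
    """Extra spend from resending `stale_tokens` of dead history with every request."""
    return stale_tokens * requests * usd_per_million_input / 1_000_000

# 150k tokens of leftover history, 40 requests, at a hypothetical $3 per million input tokens:
print(f"${stale_context_cost(150_000, 40, 3.0):.2f}")  # $18.00 of pure history re-reads
```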
The API price war reshaping these numbers is real, and Aider benefits directly. Each time OpenAI cuts GPT costs or Google drops Gemini pricing, Aider’s cost floor drops—you don’t wait for a new Aider release. You’re buying tokens, not a subscription.
What We’d Actually Budget
Solo developer, 5 tasks/month: DeepSeek R1. Budget $24, which is five sessions at $4.80 on the nose; a retry or two will push you over, but barely.
Small team (3–5), 20 tasks/month: Claude Opus 4 or Gemini 2.5 Pro, split across the team. Budget $1,200–$1,500/month. No per-seat licensing. One API key, shared usage, costs scale with demand.
Heavy user or agency, 40+ tasks/month: Pick Claude and lock in. Budget $2,700–$3,000/month. At that point, the o3 models stop making financial sense unless you’re optimizing for pure speed (which is real, but expensive).
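Those three budget lines are just the table numbers multiplied out. A quick sketch with the scenarios as we framed them above; add your own headroom for retries and rate-limit reruns (we pad by 10–15%).

```python
# The three budget scenarios above, as plain arithmetic from the per-session table.
# Retry/rate-limit headroom is left to you; we typically pad these by 10-15%.

scenarios = [
    ("Solo, 5 tasks/month on DeepSeek R1", 5, 4.80),
    ("Small team, 20 tasks/month on Claude Opus 4", 20, 68.63),
    ("Heavy user, 40 tasks/month on Claude Opus 4", 40, 68.63),
]

for label, sessions, per_session_cost in scenarios:
    print(f"{label:45s} ${sessions * per_session_cost:,.2f}/month")
# -> $24.00, $1,372.60 and $2,745.20: the raw numbers behind the budget ranges above
```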
The Call
Aider is the right tool if you’re comfortable picking your own model and owning the cost tradeoff. It’s the wrong tool if you want a single monthly bill and don’t want to think about token costs.
For teams, it’s a no-brainer—the per-seat tax of Cursor is a dealbreaker compared to Aider’s pure BYOK model. For solo devs on a tight budget, DeepSeek options unlock sub-$30/month territory. For shops running heavy refactor and feature work, Claude Opus is the sweet spot.
Aider isn’t free. But it’s the first pricing model in AI coding that doesn’t pretend to be.
What we don't know is documented at the end of this article. We update when we learn more.