OpenAI Codex Just Started Charging. Here's What You Actually Owe After the Free Tier Runs Out.
Workspace agents flipped to credit billing on May 6. We ran the math: at $0.005 per credit, here's what Pro, Plus, and Business teams actually pay once the included usage runs out.
On May 6, 2026, OpenAI flipped workspace agents to a credit-based billing system, ending the research preview grace period. The same morning, Anthropic announced it had doubled Claude Code's usage limits, effective immediately. The timing is no coincidence: it's the market tightening around code execution. We ran the numbers on what Codex's shift actually costs, and where the real gaps open up.
The Free Tier Is Over. Here’s the Math.
Codex pricing now runs on API tokens, not flat seat fees. OpenAI prices credits at roughly $0.005 each (meaning 1,000 credits ≈ $5), per the published rate card. Pro subscribers get ~250 local messages per 5-hour window on GPT-5.5. Once you exceed that, you’re buying credits from OpenAI’s pooled workspace bucket.
According to OpenAI's rate card, GPT-5.5 (the current standard in Codex) averages ~14 credits per local task message on the Plus tier and ~7 credits on Pro. Output-heavy sessions (refactoring, debugging, multi-file edits) can run 20+ credits per task. A cloud task, one that runs code against an external service, is priced to include execution time.
Here's a concrete example: 10 cloud tasks per week, compared across tiers.
| Tier | Tasks/Week | Credits/Task | Credits/Week | Credits/Month (4 weeks) | Cost/Month |
|---|---|---|---|---|---|
| Pro | 10 cloud | ~9 credits | 90 | 360 | ~$1.80 |
| Business | 10 cloud | ~9 credits | 90 | 360 | ~$1.80 |
| Plus | 10 cloud | ~12 credits | 120 | 480 | ~$2.40 |
The math is forgiving at low volume. But a team of three devs, each running 40 tasks monthly at ~10 credits apiece? That's roughly 1,200 credits per month, about $2 per dev or $6 team-wide in overage. The bill compounds if you run reasoning tasks (GPT-5.5 with extended reasoning), which burn 2–3x the standard rate. For a broader view of where Codex fits in the current API pricing landscape, see our 2026 LLM API price war breakdown.
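The overage math above fits in a back-of-the-envelope calculator. A minimal sketch, using the rough per-task credit averages cited in this article; the function and parameter names are ours, not OpenAI's API, and real burn varies with output length:

```python
CREDIT_PRICE_USD = 0.005  # published rate: 1,000 credits ≈ $5

def monthly_overage(tasks_per_month: int, credits_per_task: float,
                    reasoning_multiplier: float = 1.0) -> float:
    """Dollar cost of credits burned in a month, before any included usage.

    reasoning_multiplier models extended-reasoning tasks, which the
    rate card prices at roughly 2-3x the standard rate.
    """
    credits = tasks_per_month * credits_per_task * reasoning_multiplier
    return credits * CREDIT_PRICE_USD

# Three devs, 40 tasks each per month, ~10 credits per task:
team = 3 * monthly_overage(40, 10)
print(f"team overage: ${team:.2f}")  # 1,200 credits -> $6.00
```

Swap in the Plus-tier ~14 credits/task figure, or a 2–3x reasoning multiplier, and the same line shows how quickly "forgiving" math stops being forgiving.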
When the Limits Kick In
Pro subscribers get ~250 local messages per 5-hour window on GPT-5.5. That sounds generous until you're in the flow: a debugging session can inhale 30 messages in an hour, especially if you're iterating on a multi-file refactor.
Once you hit the ceiling, Codex pauses your session. You can either wait for the window to reset (5 hours) or buy credits to unlock more capacity immediately. OpenAI’s messaging calls this “flexible usage.” In practice, it’s a soft paywall.
The catch: credits expire 12 months after purchase, and unused included usage does not roll over month to month. If you’re on Plus at $20/month and burn 80 messages one week and 10 the next, you can’t bank the leftover allotment. That week’s quota vanishes. This credit-versus-token distinction matters: GitHub Copilot’s June 2026 billing shift uses a different model where token consumption converts to credits at a published rate, giving you real-time spend visibility before you hit a wall.
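The expiry rules reduce to a first-in-first-out ledger. Here is a toy model of the mechanics just described (purchased credits expire 12 months after purchase, spending draws from the oldest unexpired lot, expired value is simply lost); this is an illustration of the policy, not OpenAI's actual accounting, and every name in it is hypothetical:

```python
from collections import deque
from datetime import date, timedelta

class CreditLedger:
    """Toy FIFO model: purchased credit lots expire 12 months after purchase."""

    def __init__(self) -> None:
        self.lots = deque()  # [expiry_date, credits_remaining], oldest first

    def purchase(self, credits: int, on: date) -> None:
        self.lots.append([on + timedelta(days=365), credits])

    def spend(self, credits: int, on: date) -> int:
        """Spend from the oldest unexpired lot first; return any unmet remainder."""
        need = credits
        while need and self.lots:
            expiry, avail = self.lots[0]
            if expiry <= on:           # lot expired: the value is simply lost
                self.lots.popleft()
                continue
            used = min(avail, need)
            self.lots[0][1] -= used
            need -= used
            if self.lots[0][1] == 0:
                self.lots.popleft()
        return need

ledger = CreditLedger()
ledger.purchase(1000, date(2026, 5, 6))
assert ledger.spend(400, date(2026, 6, 1)) == 0    # 600 credits remain
assert ledger.spend(900, date(2027, 6, 1)) == 900  # lot expired; nothing to draw
```

Note what the model does not have: a carry-over method for included usage. That allotment resets with the window, which is exactly the asymmetry the article flags.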
Pro Tier Gets Doubled Capacity (Through May 31)
Through May 31, 2026, OpenAI is running a promotion that gives Pro subscribers ($100/month) 2x Codex usage: ~500 local messages per 5-hour window instead of the baseline 250.
This promo is doing real work: it’s a direct response to Anthropic’s May 6 announcement doubling Claude Code’s five-hour limits across Pro, Max, Team, and Enterprise. On the exact same day OpenAI started metering Codex, Anthropic removed peak-hour restrictions and hiked throughput. It’s not accidental timing. It’s a capacity race, and OpenAI’s solution is promotional headroom while Anthropic’s is permanent infrastructure.
If you lock in Pro during the promo window, you're effectively getting a free test drive of the expanded capacity. After May 31, limits revert to standard unless OpenAI extends the offer.
The Team Billing Difference
Business and Enterprise workspaces buy credits at volume rates through a shared pool. A Business workspace owner allocates credits via Settings > Billing, and all users in that workspace draw from the same bucket. There’s no per-user limit, only the total credits purchased.
A team of 10 devs on Business, each averaging 20 tasks per week, runs roughly 720 credits per dev per month, about $36 in pooled credits across the team. The same usage on individual Pro subscriptions costs the same in credits, but you’re also paying 10 × $100 per month in seat fees on top. The real Business advantage isn’t a credit discount; it’s avoiding seats that go underutilized. If every dev is heavy, the math is closer to a wash.
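The pooled-versus-seats comparison is a one-liner. The sketch below uses the ~9 credits per cloud task estimate from earlier in the article and assumes a 4-week month; the function name and structure are illustrative:

```python
CREDIT_PRICE_USD = 0.005  # $ per credit

def pooled_credit_cost(devs: int, tasks_per_week: int,
                       credits_per_task: float) -> float:
    """Monthly pooled credit spend for a shared workspace (~4 weeks/month)."""
    return devs * tasks_per_week * 4 * credits_per_task * CREDIT_PRICE_USD

# 10 devs, 20 tasks/week each, ~9 credits per cloud task:
credits_usd = pooled_credit_cost(10, 20, 9)
pro_seats_usd = 10 * 100  # the same usage on individual Pro seats adds $100/seat

print(f"pooled credits:            ${credits_usd:.2f}")  # $36.00
print(f"Pro seat fees if unpooled: ${pro_seats_usd}")    # $1000
```

The credit term is identical either way, which is the article's point: the Business advantage lives entirely in the seat-fee line, and it evaporates if every seat is fully utilized.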
Enterprise customers negotiate credit pricing through their account team as part of the contract; there is no self-service rate. The opacity is deliberate: OpenAI is selling tiered access to model and usage guarantees.
How Claude Code’s Limits Compare
Claude Code’s concurrent file limit doubled to 10 files per session, and the five-hour usage window capacity is now 2x across all paid tiers. Anthropic isn’t metering this in credits. It’s baking it into the subscription. Claude Pro remains $20/month. Max remains $200/month. For a full look at how Claude Code’s token architecture and rate limits actually behave in practice, see our Claude Code 2026 review.
The irony: on the same day Codex started charging per-credit overages, Claude Code’s margins got bigger without a price hike. Anthropic’s SpaceX compute deal gave them the infrastructure to absorb growth. OpenAI’s approach is margin-focused: control supply, price the overflow.
We don’t have Cursor’s equivalent credit rate for comparison (Cursor is opaque on costs), but the pattern is clear. Code execution tools are separating into subscription-flat (Anthropic) and metered (OpenAI). For light users, both work. For heavy users, Anthropic’s fixed cost becomes the better deal.
What You Actually Owe After Day One
If you're on Plus and use Codex casually (5–10 tasks per week), your usage never exceeds the included limit. Cost: $20/month, zero overage.
If you're on Pro and do 30–40 tasks per week, you'll hit the ceiling once or twice per month and buy maybe 100–200 credits in overage. Cost: $100/month plus $0.50–$1 in credits.
If you’re a Business workspace with three developers running heavy multi-file work (about 50 tasks per week each), you’re spending $30–60/month in shared credits, plus the Business seat fees.
The real cost isn’t the per-credit rate. It’s the complexity of tracking variable usage and the friction of hitting limits mid-session. OpenAI’s shift to credits is operationally clean for the API side, but it punishes workflow continuity.
What Happens Next
OpenAI’s workspace agents announcement signals a broader move toward metered consumption for agent-based coding. As agents take on more autonomous tasks (pulling from Slack, running code in CI/CD, iterating on files), flat pricing breaks down. Credits let OpenAI charge for actual resource burn: execution time, API calls, compute.
Anthropic’s play is the opposite: spend heavily on capacity, offer fixed pricing, win on simplicity. That’s only viable if you’ve locked down GPU allotment at scale, which they have through SpaceX.
For developers choosing between the two, the question is operational: do you want predictable monthly spend (Claude Code), or will you tolerate overages in exchange for deeper integration and more model choice (Codex)? The two moved in opposite directions this week, and both expect you to pick sides.
The free tier didn’t end because the models got cheaper to run. It ended because demand outpaced the promo. Credits are how OpenAI rations that excess. Knowing your team’s baseline usage before you max out the free window is the only way to avoid surprise invoices.
What we don't know is documented at the end of this article. We update when we learn more.