We cry about AI tools so you don't have to.

Review

We Used Cursor AI for Six Months. Here's What Actually Happened.

Cursor is genuinely good. It's also a subscription we've almost cancelled twice. This is the review nobody wrote because they only tested it for two weeks.

cursor · AI coding · IDE · developer tools

Six months ago, we cancelled GitHub Copilot, left VS Code, and moved our entire development workflow to Cursor. We’ve since considered switching back twice, shipped code we’re proud of using it, and hit walls that made us understand why some senior devs still refuse to adopt AI coding tools at all.

This isn’t a review written after a two-week trial or a sponsored hands-on. It’s what actually happens when you commit.

What Cursor is, without the hype

Cursor is a fork of VS Code with native LLM integration. The fork matters: it’s not a plugin but a separate editor, so its AI features aren’t constrained by VS Code’s extension API, while it still stays compatible with the VS Code extension ecosystem.

The core workflow is built around three interactions:

  • Tab completion — context-aware autocomplete that finishes blocks, not just lines
  • Cmd+K / Ctrl+K — inline edits with natural language prompts
  • Composer — multi-file changes driven by a conversation

The model under the hood is your choice: GPT-4o, Claude Sonnet, or Cursor’s own fine-tuned models, depending on the plan and the feature.

Pricing reality as of May 2026

  • Free: 2,000 completions/month, 50 slow premium requests
  • Pro ($20/month): 500 fast premium requests, unlimited slow, 10 usage-based requests
  • Business ($40/user/month): Team features, SSO, zero data retention

We’re on Pro. That quota of 500 fast premium requests sounds like a lot until you have a heavy refactoring week — we’ve burned through it by day 20 and spent the last 10 days of the month on slow requests, which are noticeably laggier on complex tasks.
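The quota math is easy to sketch. The 500/month figure comes from the pricing list above; the heavy-day burn rate is our own rough estimate, not anything Cursor publishes:

```python
# Back-of-envelope pacing for Cursor Pro's fast-request quota.
# 500/month is from the pricing list above; 25 requests/day is
# our rough estimate of a heavy refactoring day, not an official figure.
MONTHLY_FAST_REQUESTS = 500
DAYS_IN_MONTH = 30

# Spending the quota evenly leaves under 17 fast requests per day.
even_pace = MONTHLY_FAST_REQUESTS / DAYS_IN_MONTH

# During a heavy refactoring stretch, burn runs much higher.
heavy_burn_per_day = 25
days_until_empty = MONTHLY_FAST_REQUESTS // heavy_burn_per_day

print(f"Even pace: {even_pace:.1f} fast requests/day")
print(f"At {heavy_burn_per_day}/day, the quota is gone by day {days_until_empty}")
```

At that burn rate the quota dies on day 20, which matches our experience: the last third of the month runs on slow requests.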

This is the subscription friction point that has made us consider leaving twice. Cursor’s usage model penalizes power users with unpredictable monthly costs. If you’re using it lightly, $20 is fine. If you’re doing multi-file architectural work, you’ll either throttle yourself or reach for the credit card.

What we actually love

Tab completion is the real product. The marketing focuses on Composer, but we use Tab completion 200 times a day and it’s where Cursor earns its subscription. It doesn’t just finish what you started — it suggests the next logical block based on context, reads your function signatures, and proposes the completion you were about to type. The accuracy on typed codebases (TypeScript, Python with type hints) is noticeably better than on loosely typed JS.

Cmd+K inline edits change how you write first drafts. We stopped worrying about boilerplate on the first pass. “Add error handling to this function” or “extract this into a shared utility” works reliably enough that we draft leaner and refactor faster.

The codebase indexing actually works. Cursor indexes your project and uses that context in responses. When we ask “where does the auth token get refreshed?” it finds the right file. This sounds basic but it’s where plugin-based tools fail — they can’t see across files the way Cursor can.

Where it breaks

Composer hallucinates on larger tasks. Ask Cursor to restructure a module across five files and it’ll sometimes invent imports that don’t exist, delete code it shouldn’t touch, or produce a diff that breaks type-checks. We’ve learned to use Composer for small, well-scoped tasks and stay skeptical on anything touching more than two files.

Context window limits bite on large codebases. In a monorepo with 400k+ lines, Cursor’s context window fills up fast. It becomes amnesiac — it “knows” the codebase but specific context gets dropped. The workaround is to be explicit about what files to include, which defeats some of the magic.
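In practice, “being explicit” means pinning the relevant files yourself. Cursor’s chat supports @-mentions for files; a prompt like this (the paths are invented for illustration) keeps the context it needs in scope:

```
@src/auth/refreshToken.ts @src/api/client.ts
Where does the auth token get refreshed, and what breaks
if the refresh interval becomes configurable?
```

It works, but you’re doing the context selection the indexing was supposed to do for you.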

The privacy tradeoff is real. Code is sent to Anthropic or OpenAI servers unless you pay for the Business tier with zero data retention. Most devs working on commercial projects should read Cursor’s privacy FAQ before connecting their work repo. We use it on side projects and open-source work without concern. For client work, we’re more careful.

Compared to GitHub Copilot

We ran both for two months, head-to-head. Copilot’s integration is tighter (native VS Code, no fork required), its billing is more predictable ($10/month flat for individuals), and its enterprise story is more mature. Cursor is better at multi-file context and the Tab completion quality is noticeably higher.

Our call: Cursor for solo developers and small teams who can live with the fork and the usage-based billing ceiling. Copilot for larger teams where IT governance matters and predictable billing is non-negotiable. We’ve written more on this in our full Cursor vs Copilot comparison.

Sources used in this review

  1. Cursor pricing page — verified May 2026
  2. Cursor privacy FAQ
  3. Cursor changelog — for version history
  4. Hacker News: “Ask HN: Cursor vs Copilot after 6 months?” — community signal
  5. GitHub Copilot pricing — comparison data

What we don’t know / haven’t tested

  • We haven’t tested Cursor on large enterprise monorepos (500k+ lines) for extended periods. The context window behavior may differ.
  • We haven’t tested the Business tier’s zero data retention claim through independent audit.
  • We haven’t tried Cursor with local models (Ollama integration). If that works well, the privacy tradeoff changes completely — and we plan to test it.
  • We haven’t compared Tab completion quality against JetBrains AI Assistant in a controlled setting.

Bottom line: Cursor Pro at $20/month is worth it if you code daily and can tolerate usage-ceiling anxiety in heavy months. If you hit the fast-request ceiling regularly, the math gets worse. We’ve stayed because the Tab completion alone is better than anything we’ve used — but we’ve stayed with eyes open.

