Cursor: Best-in-Class Tab Complete, But the Cracks Are Showing

An honest review of Cursor after heavy daily use. The tab completion is unmatched, and they pioneered bringing AI into the code editor. Token economics remain the biggest challenge.

Nish Sitapara
AI · Developer Tools · Cursor · Code Editor · Productivity

Cursor has been my primary code editor for a while now. It pioneered the agentic coding workflow inside an IDE and popularized the idea that AI should live where you write code: not in a separate chat window, not in a terminal, but right there in the editor alongside your files. After months of heavy daily use, I have a clear picture of where Cursor shines and where the experience starts to break down.

Tab Complete Is Still King

The "tab tab tab" slogan that Cursor launched with? It still holds true. Tab completion remains the core of the Cursor experience and the single best reason to use it.

Since Cursor acquired Supermaven, the tab completion model has been in a league of its own. The latency is near-instant. The accuracy is remarkably high; it doesn't just predict the next line, it generates multi-line completions that actually match your intent and fit the patterns already established in your codebase.

Nothing else on the market replicates this feeling. GitHub Copilot gets close but is noticeably slower and less context-aware. Tools like Claude Code and OpenAI's Codex don't even play in this space; they're agentic tools built for autonomous task execution, not inline assistance while you type.

The experience of writing code with real-time AI assist directly in your editor is fundamentally different from chat-based or terminal-based AI. You stay in flow. There's no context switch. The AI feels like an extension of your hands rather than a separate collaborator you need to instruct. For the pure act of writing code, Cursor's tab completion is the gold standard.

Agent Mode: Decent, and They Got There First

Credit where it's due: Cursor pioneered the agentic workflow inside a code editor. Before Cursor's Composer and Agent mode, the idea of describing a task in natural language and having an AI plan and execute changes across multiple files within your IDE was largely theoretical. They shipped it, and every other tool in the space has been racing to catch up since.

Agent mode, driven through the chat interface, is a solid experience. You describe what you want, the agent reads your codebase for context, plans the changes, and applies edits across files. The inline diff review of those edits is genuinely nice. You see exactly what the agent changed, file by file, right in the editor you're already working in. There's no need to switch to a terminal or a browser to review AI-generated code.

For scoped tasks like refactoring a component, adding a new API endpoint, or updating types across a few files, Agent mode works well and feels productive. It's not perfect, but the workflow of "describe, review, accept" inside the editor is more ergonomic than copy-pasting from a chat window.

The Token Economy Problem

Here's where things get frustrating. The major drawback with Cursor is that the token allowance on every plan tier simply isn't enough for heavy agent usage.

If tab completion is your primary workflow, the Pro plan feels generous. But the moment you start leaning on Agent mode as your main way of working, using it for feature development, debugging, and refactoring, you burn through your fast request allocation quickly. The rate limits kick in, the model falls back to slower tiers, and the experience degrades.

The natural escape hatch is API mode: bring your own key and pay per token. This solves the rate limit problem but creates a cost problem. Agentic sessions are token-hungry by nature. Every file read, every edit, every verification step, every back-and-forth with the model consumes tokens. A single complex refactoring session can rack up meaningful charges. Over a week of heavy use, the API bill adds up fast and becomes unpredictable.
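To make that concrete, here's a back-of-the-envelope sketch of how a session's bill accumulates. The per-token prices and token counts below are illustrative assumptions, not Cursor's or any provider's actual rates:

```python
# Back-of-the-envelope cost estimate for one agentic session in API mode.
# Prices and token counts are illustrative assumptions, not real rates.

INPUT_PRICE_PER_MTOK = 3.00    # assumed $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # assumed $ per million output tokens

def session_cost(turns: int, input_per_turn: int, output_per_turn: int) -> float:
    """Estimate the dollar cost of one agentic session.

    Each turn re-sends accumulated context (file reads, prior edits,
    scaffolding), so input tokens dominate as the session grows.
    """
    input_cost = turns * input_per_turn * INPUT_PRICE_PER_MTOK / 1_000_000
    output_cost = turns * output_per_turn * OUTPUT_PRICE_PER_MTOK / 1_000_000
    return input_cost + output_cost

# A complex refactor: 30 turns, ~40k input tokens per turn, ~1.5k output.
print(f"${session_cost(30, 40_000, 1_500):.2f}")  # about $4.28 for the session
```

At numbers like these, a handful of sessions like this each day puts you past $100 a week, which matches the "adds up fast" experience.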

This creates an awkward middle ground for power users: the subscription plans feel throttled, but the API route is a blank check. Neither option feels quite right if you're using Agent mode as a core part of your daily workflow.

The Hidden Tax: Cursor's Own Token Overhead

What most users don't realize, and what becomes painfully obvious in API mode, is that Cursor itself consumes a significant chunk of your tokens before your actual prompt even reaches the model.

Every AI coding tool has internal scaffolding: system prompts that instruct the model on how to behave, context about your codebase, instructions for how to format edits. Cursor is no exception. On top of that, it compresses conversation history, manages context windows, and runs its own orchestration logic to coordinate multi-file edits. All of this costs tokens.
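To illustrate the shape of the problem, here's a toy breakdown of where the input tokens in a single agent request might go. The counts are invented for illustration; the real split varies by tool, codebase, and session:

```python
# Toy model of where input tokens go in a single agent request.
# The counts are invented for illustration; real overhead varies widely.

request_budget = {
    "system_prompt": 4_000,         # behavioral instructions, edit-format spec
    "codebase_context": 12_000,     # retrieved file snippets and symbols
    "conversation_history": 8_000,  # compressed prior turns
    "user_prompt": 300,             # what you actually typed
}

total = sum(request_budget.values())
for part, tokens in request_budget.items():
    print(f"{part:>20}: {tokens:>6,} tokens ({tokens / total:5.1%})")
# With a split like this, your own prompt is barely 1% of what you pay for.
```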

When you're on a subscription plan, this overhead is invisible; it just eats into your allocation behind the scenes. In API mode, you see it directly in the bill. Either way, it means the model is sometimes working with a compressed or truncated version of your context, and the output quality can suffer as a result.
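A minimal sketch of why truncation hurts, assuming the simplest possible strategy (drop the oldest messages first); this is not Cursor's actual compression logic, just an illustration of the failure mode:

```python
# Naive context-window management: keep the newest messages that fit,
# silently drop the rest. An assumed strategy for illustration only.

def fit_to_window(messages: list[str], max_tokens: int,
                  count_tokens=lambda s: len(s) // 4) -> list[str]:
    """Keep the most recent messages that fit; the model never sees the rest."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk newest to oldest
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break                   # everything older is lost
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["<early design decision>", "<file A contents>",
           "<file B contents>", "<your latest instruction>"]
print(fit_to_window(history, max_tokens=15))
# The early design decision is cut, so later edits may quietly contradict it.
```

When the dropped message happens to be the one that established a key constraint, you get exactly the "lost the plot" behavior described next.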

To be fair, this is subjective and varies case by case. Some sessions feel sharp; the agent understands the full picture and makes clean, consistent edits. Other sessions feel like the model lost the plot halfway through, repeating itself or making changes that ignore context it should have had. Whether that's due to token compression, context window limits, or just model variance is hard to say definitively. But the pattern is noticeable enough that I've learned to keep my agentic sessions short and focused rather than trying to chain together long, complex workflows.

The Bigger Tension: Code Editor Workflow vs AI Workflow

There's a deeper friction I keep running into with Cursor, and I think it points to an unsolved problem across the entire AI coding tool space.

Cursor is trying to fit an AI-native workflow into a code-editor-native interface. These two paradigms don't always align.

Take the diff review experience. Reviewing AI-generated edits file by file in the editor is nice in theory. But in practice, you're reviewing changes the same way you'd review a coworker's pull request, except it's happening inline in your active workspace, mixed in with your own in-progress work. There's no clean separation between "my code" and "the AI's proposed changes." You're context-switching between writing and reviewing in the same window, and the cognitive overhead adds up.

Agent mode has a similar tension. The agent wants to take autonomous, multi-step actions: read files, plan changes, apply edits, verify results. But the editor workflow expects you to be in control of each file, each change, each save. The result is a hybrid that's sometimes powerful and sometimes awkward. You end up babysitting an agent that was supposed to be autonomous, clicking through diffs one by one. Or you rubber-stamp a batch of changes because reviewing twenty inline diffs is tedious, which defeats the purpose of the review step entirely.

Nobody has solved this perfectly yet. Terminal-based tools like Claude Code sidestep the problem by not trying to be an editor, but they lose the inline experience. Codex pushes everything to the cloud and hands you a PR, but you lose real-time interaction. Cursor tries to give you both, and the seams show.

This isn't a knock on Cursor specifically. It's an industry-wide design challenge: the best AI workflow (autonomous, multi-file, iterative) might not map cleanly onto the best editor workflow (file-focused, manual control, incremental changes). Forcing them together creates friction that no amount of UI polish can fully hide.

The Verdict

Cursor's tab completion is genuinely the best on the market. If real-time, inline AI assistance while you write code is what you're after, nothing beats it. The Supermaven-powered model is fast, accurate, and keeps you in flow. The "tab tab tab" experience is the gold standard, full stop.

Agent mode is capable, and Cursor deserves recognition for pioneering the concept. For focused, scoped tasks it works well. But the token economics make it frustrating for heavy use; you're either hitting plan limits or watching API costs climb.

The internal token overhead and context compression are real costs that affect output quality in ways that are hard to predict session to session. And the fundamental tension between editor UX and AI UX is something the whole industry still needs to figure out.

For now, my take: Cursor is excellent for writing code with AI inline, decent for agentic tasks if you keep them focused, and expensive if you push it hard. The tab complete alone justifies using it. Everything else is promising, but go in with eyes open about the trade-offs.

TL;DR: Cursor's tab completion (powered by Supermaven) is best-in-class and unmatched. Agent mode is solid for scoped tasks but hits token limits fast. The real challenge is that editor workflows and AI workflows are two different paradigms sharing one window, and that tension hasn't been solved yet.