Written by Travis Montgomery
Published on February 11, 2026

Expensive and Slow for Small Changes: Why AI Coding Agents Can Be Overkill

A one-line typo fix cost thousands of tokens. Discover why AI coding agents can be overkill for small changes and how to reduce cost and overhead.


Automated workflow runs triggered by an AI coding agent (“Claude Code”) for a trivial spelling fix. The agent opened an issue, posted a checklist comment, and then created a pull request. The final “Fix spelling mistake” PR run took about 1 minute to handle a one-line change.

Recently, we experimented with using an AI-based coding agent to fix a tiny typo: changing "applicasion" to "application" in a README. What we found was a significant mismatch between problem and solution. The agent went through a whole workflow: listing issues, posting a checklist comment, creating a new branch, committing the change, opening a pull request, and even pondering whether a code review was needed. By the end, it had consumed over 21,000 input tokens (roughly 15,000 words of text) just to make that one-line edit.

It's a clear example of tool-task mismatch. The logs show how the process ballooned far beyond the scope of the task. This isn’t a knock on the AI’s capabilities; it’s a reality of how these AI coding agents work right now. They bring a lot of overhead even for the simplest changes. Let’s unpack why that happens, and what others have observed about this phenomenon.

Why so much overhead for a small change?

There are a few reasons an AI agent can end up being expensive and slow for small changes:

  • Context handling: Large language models don’t retain a continuous understanding of your whole codebase the way a human does, or even the way an IDE’s index does. Every new action (prompt) starts from a blank slate except for what you explicitly feed into its context window. Modern models can have big context windows (hundreds of thousands or millions of tokens) but “they don’t maintain a persistent mental model of your entire project”[3]. The agent has to decide what information to include each time it runs a task. In our typo-fix case, the agent loaded issue descriptions, repository info, and more, just to ensure it had enough context. All of that counts against the token budget. If your codebase or task involves multiple files or steps, the agent might be shuffling a lot of text in and out of context. One developer pointed out that if code is tightly coupled or spread across many files, the AI has to include more context in prompts, leading to hidden costs in both time and tokens[4][3]. Essentially, the less “aware” the AI is of the narrow scope, the more it over-prepares by pulling in extra details.
  • Tool invocation and multi-step workflows: Unlike a single human edit, an agent might break a task into many sub-tasks (as our agent did: commenting, branching, editing, etc.). Each sub-task might involve calling tools or making additional LLM prompts. A commenter on Hacker News summarized this well: “It is expensive and slow to have an LLM use tools all the time for solving the problem.”[5] Every step has overhead. The agent has to compose a prompt, possibly re-load context, wait for the response, and so on. In a normal trivial fix, a developer just types the change and that’s it. The AI, however, treated it like an issue to fully resolve through an automated workflow, incurring overhead at each step. If an agent makes five or six prompts or tool calls to accomplish a one-line change, that’s inherently slower and costlier than a single direct edit.
  • Lack of pragmatic skips: A seasoned developer or code reviewer has the intuition to skip formalities when appropriate. For example, we wouldn’t file a Jira ticket, create a full branch, and run CI for a one-word typo fix in documentation. We’d likely just commit it directly. The AI agent, lacking human judgment, dutifully followed its general playbook: it opened an issue, posted a checklist, and tried to run a code review process for the PR. It’s thorough, but not intelligent about what a given task actually warrants. In our run, the agent’s own logic eventually flagged that the change was trivial and no review was needed, but only after spending time and tokens to reach that conclusion. This highlights a current limitation: without specific instructions, agents won’t always short-circuit unnecessary steps.

You can see here just how much the bot is doing, simply to decide that a README change doesn’t require a code review.

  • Conservative reasoning: Large models often err on the side of caution and completeness. They may double-check work, generate verbose explanations, or handle edge cases that in practice don’t apply to a trivial change. In one community example, a developer noted that using an LLM for coding can lead to “overcomplicating changes” or adding needless steps[6]. The AI doesn’t inherently know the change is small or the stakes are low; it operates as if it should be as careful and comprehensive as possible. This can mean extra rounds of verification or documentation that a human wouldn’t bother with for a typo. All of that adds latency and token usage.

The result of these factors is that using an AI agent for something very small is deeply inefficient, and impractical at any real scale. It works, but our internal trial vividly demonstrated the cost: minutes of run time and tens of thousands of tokens to accomplish what one person could do in 30 seconds with an editor.
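To make that overhead concrete, here is a minimal sketch of the token accounting behind an agent-style workflow. The step names mirror what our agent actually did, but the per-step token counts are invented for illustration; only the ~21,000-token total matches what we observed in our logs.

```python
# Hypothetical illustration of how per-step prompts compound in an
# agent-style workflow. Step names mirror what our agent did; the
# per-step token counts are invented, chosen so only the ~21k total
# matches our logs.

AGENT_STEPS = [
    ("list open issues",         3_500),  # repo and issue context re-sent
    ("post checklist comment",   2_800),
    ("create branch",            1_900),
    ("edit file and commit",     4_200),  # file contents enter the prompt
    ("open pull request",        3_600),
    ("decide if review needed",  5_000),  # re-reads the diff and history
]

def agent_input_tokens(steps):
    """Each step is a fresh prompt, so shared context is re-sent every time."""
    return sum(tokens for _, tokens in steps)

total = agent_input_tokens(AGENT_STEPS)
print(f"agent workflow: {len(AGENT_STEPS)} prompts, ~{total:,} input tokens")
print("direct edit:    0 prompts, 0 tokens, ~30 seconds of human time")
```

The pattern to notice is that the cost is per prompt, not per change: every extra step re-sends much of the same context, so six small steps cost far more than one direct edit.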

What developers are learning about AI agent costs

Our experience isn’t an isolated case. Early adopters of AI coding assistants have noticed the same pattern. Many have shared candid feedback about when these tools shine and when they falter. A common theme: if not set up with the right tooling and scope, an AI can introduce more overhead than it removes, especially on trivial tasks.

For instance, one programmer recounts spending hours guiding an LLM through code with ultimately frustrating results, concluding that “the mental overhead of all this is worse than if I just sat down and wrote the code”[7]. That sentiment will sound familiar to anyone who’s wrestled with an AI that keeps missing the point on a minor fix. There’s a threshold where the time spent explaining, re-prompting, and verifying the AI’s output outweighs the time it would take to just do it manually. Small bug fixes or tweaks often fall below that threshold.

LLMs can also be surprisingly adept at certain things (explaining code, sketching out a solution) while struggling to produce correct small edits efficiently. Without good supporting tools, one Reddit user argued, “you frequently get stuck for extended periods on simple bugs… usually the mental gymnastics of prompting and checking are worse than the fix itself”. In our case, prompting the agent to fix a simple typo and then verifying all its procedural steps was far more convoluted than the direct fix.

Another perspective comes from those building and using multi-step coding agents professionally. Greg Ceccarelli, an early pioneer in AI-driven development, pointed out that “each agent turn might warrant commentary or a new commit. The overhead can be enormous if not carefully managed.”[8] In other words, every extra cycle the agent takes (adding a comment here, a commit there) is an opportunity for bloat. If we don’t streamline the agent’s workflow, it will do a lot of unnecessary work because it doesn’t intuitively know which steps are essential versus overkill.

The good news is that these pain points are leading to best practices. Developers are learning when not to use the heavy AI hammer. One illuminating write-up by Allen Pike described how the initial hype around the biggest, “smartest” models died down because they “were ultimately too expensive and slow to be worth the squeeze for day-to-day coding.”[9] Many found that for routine development, the cost in API calls and time didn't justify the convenience, particularly for straightforward tasks. Pike’s solution was to reserve the really powerful (and costly) models for truly complex problems, and use cheaper or no AI for the rest. He even experimented with spending $1000/month on an advanced model for a period, and found it could be justified for difficult, high-impact coding, but he still advises not to waste expensive model cycles on trivialities[10][11].

Crucially, he emphasizes using deterministic tools for deterministic problems: “Shifting errors from runtime → test-time → build-time makes everybody more productive. Even better, fix issues deterministically with a linter or formatter. Let your expensive LLMs and humans focus on the squishy parts.”[11] A typo in documentation is the definition of a non-squishy problem. It’s straightforward, deterministic, and easily caught by a spellchecker. That’s a task begging for an automated script or a quick manual edit, not a deluxe AI treatment.

Others have echoed this: if an LLM-driven agent finds itself doing the same kind of small fix over and over, you’re better off automating that pattern outside the LLM. In a discussion about coding agent loops, one commenter put it this way: if you notice an AI agent frequently invoking a series of tool calls to solve a recurring problem, you should pull that sequence out into a normal function or script and “bypass LLMs altogether” for that case[5]. Essentially, cache the solution so the AI doesn’t reinvent the wheel every time. This approach reduces redundant prompt calls, saving time and money.
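In that spirit, here is a minimal sketch of what “bypassing the LLM” can look like for a recurring class of fixes: a known-typo map applied deterministically by a plain script. The typo list and file glob are illustrative examples, not taken from our actual pipeline.

```python
# A deterministic "bypass the LLM" sketch: apply a known-typo map to
# Markdown files directly. The typo list and glob are examples only.
from pathlib import Path

KNOWN_TYPOS = {
    "applicasion": "application",
    "recieve": "receive",
    "seperate": "separate",
}

def fix_known_typos(path: Path) -> bool:
    """Apply the typo map to one file; return True if anything changed."""
    original = path.read_text(encoding="utf-8")
    text = original
    for typo, fix in KNOWN_TYPOS.items():
        text = text.replace(typo, fix)
    if text != original:
        path.write_text(text, encoding="utf-8")
        return True
    return False

if __name__ == "__main__":
    for md in Path(".").rglob("*.md"):
        if fix_known_typos(md):
            print(f"fixed: {md}")  # commit directly; no agent, no tokens
```

A script like this runs in milliseconds, costs nothing per invocation, and never invents a pull-request ceremony around a one-word change.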

All of these community insights underline a key point: AI coding assistants are not a silver bullet, and they can introduce significant overhead on trivial tasks. Being aware of that is the first step toward using them intelligently.

The cost factor: Tokens and dollars add up

There is, of course, a significant financial cost to consider when using AI for development. Most advanced models (OpenAI’s GPT-5.2, Anthropic’s Claude, etc.) charge per token or per request. A fraction of a cent per API call seems negligible, but those calls multiply quickly, especially when AI handles numerous small tasks in an automated pipeline.

Our spelling-fix experiment cost only a few dimes, but imagine that at scale or with a larger codebase. It’s not hard to see how cost can balloon when AI is misapplied. An extreme (but instructive) example: OpenAI’s pricing for GPT-3.5 at one point was about $0.002 per ~750 words. A DoorDash engineer calculated that if they applied that to 10 billion predictions a day (DoorDash scale), it would be $20 million per day in API fees[17]. That's an extreme scenario, but it highlights how per-call costs compound at volume. Even at a smaller scale, companies have reported significant expenses: one logistics firm replaced a GPT-4-powered solution with a smaller 7B-parameter model and saw their per-query cost drop from ~$0.008 to ~$0.0006, saving about $70,000 per month[18]. The key insight is that bigger models are more expensive (in API fees and in required infrastructure), so you want to avoid paying those rates for trivial work that a simpler method could handle.

A major contributor to cost is the token usage overhead we discussed. Longer prompts, more context, and multiple back-and-forth turns all consume tokens that you get billed for. If 80% of those tokens are “overhead” (as one report suggests is common), then 80% of your money is being spent on overhead too. For instance, our Claude agent used ~23,000 tokens to fix the typo. If we estimate Claude’s pricing as similar to OpenAI’s (~$0.016 per 1K tokens for input on a large model), that one typo fix cost around $0.37 just in token billing. We actually measured ~$0.23 for the fix and $0.09 for the review step, which aligns with that ballpark. Again, pennies in isolation, but it’s the principle: paying cloud compute to do something a human could do almost for free (and faster). Over many such micro-tasks, or in an agent that runs continuously, those pennies add up to real dollars.
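For transparency, here is the back-of-envelope arithmetic behind those figures. The $0.016-per-1K-token input rate is our own estimate rather than an official price sheet.

```python
# Reproducing the back-of-envelope cost math above. The per-1K-token
# rate is an estimate, not an official price sheet.

tokens_used = 23_000
est_rate_per_1k = 0.016            # assumed large-model input rate, USD
print(f"typo fix estimate: ~${tokens_used / 1000 * est_rate_per_1k:.2f}")  # ~$0.37

measured = 0.23 + 0.09             # fix step + review step, from our billing
print(f"measured:          ~${measured:.2f}")                              # ~$0.32

# And the DoorDash-scale example from earlier in this section:
calls_per_day = 10_000_000_000
cost_per_call = 0.002              # GPT-3.5-era price per ~750 words
print(f"at scale: ${calls_per_day * cost_per_call:,.0f} per day")          # $20,000,000
```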

There’s also the consideration of compute resources and energy. Running a big model for a small task is usually overkill. Smaller, more efficient models or solutions can often achieve the same result with a fraction of the compute. This is why AI practitioners advocate for techniques like model distillation and multi-model architectures. For example, one strategy is to route simple requests to a small model and only use the big model for complex queries[19]. By doing this, companies have reported being able to cut LLM costs by 80–90% while maintaining performance[20]. In the context of coding: you might use lightweight code analysis for straightforward linting or formatting changes, saving the heavy LLM for when you truly need deep reasoning about code. The bottom line is that efficiency matters. If you wouldn’t pay a senior engineer for 5 hours of work to fix a typo, you probably don’t want to pay for 5 minutes of a 500-billion-parameter model’s time either.
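A tiered router doesn’t have to be sophisticated to capture most of the savings. Here is a minimal sketch; the complexity heuristic is entirely hypothetical, and production routers typically use trained classifiers or richer signals.

```python
# A minimal sketch of tiered model routing. The complexity heuristic
# is hypothetical; real routers use classifiers or richer task signals.

def estimate_complexity(task: str, diff_lines: int) -> float:
    """Toy heuristic: tiny diffs score low; refactors and design work score high."""
    score = min(diff_lines / 50, 1.0)
    if "refactor" in task or "design" in task:
        score = max(score, 0.8)
    return score

def route(task: str, diff_lines: int) -> str:
    c = estimate_complexity(task, diff_lines)
    if c < 0.1:
        return "deterministic script"   # linter, formatter, spellchecker
    if c < 0.6:
        return "small/cheap model"
    return "large frontier model"       # reserve for genuinely hard work

print(route("fix typo in README", diff_lines=1))      # deterministic script
print(route("refactor auth module", diff_lines=400))  # large frontier model
```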

Strategies for smarter AI tool selection

So how can we get the best of both worlds? A few strategies emerge from our research and experience:

  • Use the right tool for the task: This principle is often overlooked amid the hype. Not every coding problem needs an AI solution. For truly trivial fixes (like renaming a variable, correcting spelling, small refactors), it might be fastest to just do it manually or rely on basic IDE features. Save the ChatGPT consultations for when you’re genuinely stuck or dealing with complexity. As one expert succinctly said, “Knowing when to use a tool is part of the learning process”[14]. The flip side is also true: when you do face a gnarly problem, don’t be afraid to use AI where it can save you time. In short, apply human intuition to decide if AI is overkill or a boon for each task.
  • Favor simpler/smaller models for simple jobs: If you have an automated pipeline that must handle small changes regularly (say, an AI ops bot managing minor code hygiene tasks), consider using smaller specialized models or rule-based automation for those. Research shows that carefully fine-tuned smaller models can handle niche tasks nearly as well as giant models, at a fraction of the cost[21][18]. For example, a 3.8 billion-parameter code model almost matched OpenAI’s 12B Codex on a bug-fixing benchmark[22], meaning you don’t always need GPT-4-level power for bug fixes. By adopting a tiered approach (small models or traditional scripts for small tasks, big models for big tasks), you keep the heavy compute reserved for where it truly pays off[19].
  • Optimize your AI prompts and workflows: When you do use a large model, minimize the overhead. Techniques like prompt compression, semantic chunking of context, and caching of results can dramatically reduce token usage[23][20]. In a practical sense, this means: don’t feed the entire codebase to the AI if you only need one function’s fix; don’t ask it to redo work it’s already done (cache intermediate answers); and keep prompts focused and concise. Many teams report that through such optimizations, they cut their LLM costs by over 80% without losing quality[20]. This not only saves money but also speeds up responses (since shorter prompts = faster processing). Essentially, trim the fat around the AI’s involvement.
  • Iterative, human-in-the-loop processes: One pitfall we saw with the “one-shot” approach (give AI a task, it disappears and comes back with a full solution) is that it can be inefficient and sometimes produce bloated results. A better approach for code, as suggested by experts, is an iterative loop. For instance, rather than asking an AI to apply 10 changes across a codebase in one go (which might take a long time and come back with errors), step through changes one by one or in small batches, checking as you go. Austin Henley, after testing Copilot Workspace, noted that an iterative, interactive flow would be ideal, one where the AI and developer go back and forth making incremental changes. This way, the AI doesn’t waste effort on parts that might not be needed, and the human can course-correct quickly. Think of it like pair programming: you wouldn’t let a junior dev disappear for a day to rewrite your app for a one-line fix; you’d work closely and guide them. The same should apply with AI assistants. Keeping a human in the loop for validation and guidance on small changes can prevent the AI from “overdoing” the solution.
  • Monitor and measure: Finally, to avoid AI overkill you need to notice when it’s happening. Keep an eye on how long AI tasks take and how much they cost. If you see an automated job taking 5 minutes to do something trivial, investigate why. Many times, it could be an inefficiency in prompt design or an unnecessary step in the process. By measuring token usage, latency, and success rates, you can identify where a lighter approach might substitute. In our case study, the metrics made it obvious that we should not use the full AI pipeline for simple typos. Implementing some basic conditional logic (e.g. “if diff is one line, maybe just apply it without invoking the whole agent”) is a straightforward fix once you realize the issue; a minimal version is sketched after this list. In essence, treat your AI usage like any other part of your system: profile it, optimize it, and don’t assume more AI is always better.
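Here is what that one-line-diff guard might look like. The pipeline entry points are hypothetical; the only real dependency is git, and the point is that a cheap deterministic check runs before any model is ever invoked.

```python
# Sketch of a "one-line diff" guard: run a cheap deterministic check
# before invoking any agent. Pipeline entry points are hypothetical.
import subprocess

def diff_line_count() -> int:
    """Count changed lines in the working tree via `git diff --numstat`."""
    out = subprocess.run(
        ["git", "diff", "--numstat"], capture_output=True, text=True, check=True
    ).stdout
    return sum(
        int(added) + int(deleted)
        for added, deleted, _ in (line.split("\t") for line in out.splitlines())
        if added.isdigit() and deleted.isdigit()  # skip binary-file entries
    )

def handle_change():
    if diff_line_count() <= 1:
        print("trivial diff: commit directly, skip the agent")
        # subprocess.run(["git", "commit", "-am", "Fix typo"], check=True)
    else:
        print("non-trivial diff: hand off to the full agent pipeline")
        # run_agent_workflow()  # hypothetical entry point

if __name__ == "__main__":
    handle_change()
```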

Conclusion

AI coding tools offer substantial productivity benefits for the right problems. They can draft entire modules, refactor legacy code, and boost our productivity in ways we couldn’t imagine a few years ago. However, as we’ve seen, throwing a big AI model at a small problem can be counterproductive, leading to slower results, wasted compute, and higher costs for negligible benefit. The good news is that awareness is growing in the developer community. We’re learning to be more nuanced in how we use these tools, combining human judgment with AI assistance in smarter ways.

In the end, the goal is to make our workflow both efficient and effective. That means using AI where it truly adds value and not out of habit or hype. If a task takes a minute to code manually, you're likely better off doing it directly than spending five minutes in an AI workflow. On the other hand, if you’re facing a complex algorithm or an unfamiliar domain, that might be the perfect time to enlist your AI pair programmer. By balancing these approaches, teams can avoid the trap of "AI for AI's sake" and maximize the return on their tooling investments.

Building secure development workflows

At Cyfrin, we evaluate development tools, including AI assistants, through a security-first lens. For teams building in web3, understanding the strengths and limitations of your tooling is essential for maintaining code quality and security standards.

Whether you're integrating AI into your development workflow or evaluating its impact on your security practices, our team can help you make informed decisions.

Contact Cyfrin to discuss your development and security needs with a member of the team.

Sources:

  1. ThamizhElango Natarajan, “Why Tightly Coupled Code Makes AI Coding Assistants Expensive and Slow,” Medium (Dec 30, 2025) – Discusses how large context windows and code architecture influence the efficiency of AI code assistants[4][3].
  2. Allen Pike, “Spending Too Much Money on a Coding Agent,” allenpike.com (June 30, 2025) – Describes real-world costs of using advanced coding models and best practices to maximize value (e.g. use linters for trivial fixes)[9][11].
  3. Robin Linacre, “The emerging impact of LLMs on my productivity,” robinlinacre.com (Oct 2024) – Predicts that agentic AI behaviors (like writing PRs from issues) will initially be “relatively expensive and slow” due to the many prompts involved[14].
  4. Greg Ceccarelli, “Beyond Code-Centric: Agents Code but the Problem of Clear Specification Remains,” gregceccarelli.com (2025) – Notes that without structure, “each agent turn... might warrant a new commit” and overhead can become “enormous if not carefully managed.”[8]
  5. Reddit – r/ChatGPTCoding thread “Without good tooling, LLMs are abysmal for pure code generation” – Developers discuss frustration with LLMs on coding tasks. One comment notes “the mental overhead of all this is worse than if I just wrote the code” when dealing with simple issues[7].
  6. Hacker News – “The unreasonable effectiveness of an LLM agent loop…” (comment by user deadbabe) – Argues that constantly using an LLM for repetitive tool-based tasks is “expensive and slow,” suggesting to replace frequent multi-step patterns with direct code solutions[5].
  7. Internal Experiment Logs (2026): Claude Code agent run on a trivial text fix – Demonstrated ~21k token usage and multi-step workflow for a one-word change. The agent itself concluded the change was a “straightforward, non-functional” one-liner not requiring review, underscoring the disproportionate effort.

[3] [4] Why Tightly Coupled Code Makes AI Coding Assistants Expensive and Slow | by ThamizhElango Natarajan | Dec, 2025 | Medium

https://thamizhelango.medium.com/why-tightly-coupled-code-makes-ai-coding-assistants-expensive-and-slow-06ab5399169c

[5] The unreasonable effectiveness of an LLM agent loop with tool use | Hacker News

https://news.ycombinator.com/item?id=43998472

[6] [9] [10] [11]  Spending Too Much Money on a Coding Agent - Allen Pike 

https://allenpike.com/2025/coding-agents/

[7] Without good tooling around them, LLMs are utterly abysmal for pure code generation and I'm not sure why we keep pretending otherwise : r/ChatGPTCoding

https://www.reddit.com/r/ChatGPTCoding/comments/1dysrye/without_good_tooling_around_them_llms_are_utterly/

[8] Beyond Code-Centric: Agents Code but the Problem of Clear Specification Remains

https://www.gregceccarelli.com/writing/beyond-code-centric

[14] The emerging impact of LLMs on my productivity

https://www.robinlinacre.com/two_years_of_llms/

[17] [19] [20] [23] 7 Proven Strategies to Cut Your LLM Costs (Without Killing Performance) | by Rohit Pandey | Medium

https://medium.com/@rohitworks777/7-proven-strategies-to-cut-your-llm-costs-without-killing-performance-9ba86e5377e6

[18] [21] [22] Is your LLM overkill? - by Ben Lorica 罗瑞卡 - Gradient Flow

https://gradientflow.substack.com/p/is-your-llm-overkill

