
And why your most valuable engineer might be the next one to quit
There’s a metric your engineering dashboard isn’t showing you.
Not velocity. Not deployment frequency. Not even code quality.
It’s the number of decisions your senior developer made today.
Because that number — quiet, invisible, never reported in a sprint retrospective — might be the most important signal in your entire organization right now.
We’ve spent the last two years obsessing over what AI can do for software teams. How many lines of code it generates. How much faster features ship. How many junior developers can now punch above their weight.
What we haven’t talked about is what happens to the human being sitting in the middle of all that acceleration.
That’s what this post is about.
The Productivity Story We Keep Telling Ourselves
The narrative around AI coding tools has been overwhelmingly positive — and for good reason. The numbers are real.
GitHub reports that developers using Copilot complete tasks up to 55% faster. Teams using Claude Code or Cursor are shipping features in days that used to take weeks. At some AI-native companies, the majority of production code is now AI-generated.
These are not made-up statistics. They represent a genuine shift in what a small, well-equipped team can produce.
But here’s what the productivity reports don’t capture: what happens after the code is generated.
Someone has to read it. Evaluate it. Decide if it’s correct, safe, maintainable, and actually solving the right problem. Someone has to catch the edge cases the model didn’t consider. Someone has to understand how this new module fits into a system that’s been evolving for three years.
That someone is your senior developer. And they’re doing it at a pace that has no historical precedent.
What AI Fatigue Actually Is
AI Fatigue is not burnout in the traditional sense.
Classic burnout is a volume problem. Too much work, not enough time, sustained over months until the person breaks. It’s visible. It builds slowly. Teams usually see it coming.
AI Fatigue is a density problem. It’s what happens when the number of decisions per hour increases dramatically, even as the physical output of typing or meetings stays the same or decreases.
Here’s a useful way to think about it:
Before AI tools: A senior developer made approximately 20 meaningful technical decisions per day. Architecture choices, code review judgments, debugging calls, design trade-offs.
After AI tools: That same developer is making 60 to 80 decisions per day — because every piece of AI-generated output requires evaluation. Accept or reject? Correct or leave? Trust or verify? Is this the right approach for our system specifically?
Each individual micro-decision feels small. But the cognitive load compounds. And the human brain doesn’t distinguish between one large decision and forty small ones. Depletion is depletion.
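The before/after comparison above can be made concrete with a bit of arithmetic. A minimal sketch, using the hypothetical figures from this post (20 decisions per day before, 60 to 80 after) rather than measured data:

```python
# Illustrative decision-density arithmetic. The inputs are the
# hypothetical figures from this post, not measurements.

def decision_density(decisions_per_day: float, focus_hours: float = 6.0) -> float:
    """Meaningful technical decisions per focused hour."""
    return decisions_per_day / focus_hours

before = decision_density(20)   # pre-AI baseline
after = decision_density(70)    # midpoint of the 60-80 range

print(f"before: {before:.1f}/h, after: {after:.1f}/h, "
      f"load multiplier: {after / before:.1f}x")
```

Even with generous assumptions, the evaluation load per focused hour roughly triples, which is the "density problem" in numeric form.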
The insidious part is that this doesn’t look like fatigue from the outside. Velocity is up. Features are shipping. The dashboard is green.
The fatigue is happening underneath.
The Four Symptoms to Watch For
1. Accept Fatigue
This is the most dangerous one.
When a developer first starts using AI coding tools, they read every suggestion carefully. They understand what the model produced, why, and whether it fits. They’re an active collaborator.
After weeks or months of high-intensity AI-assisted work, that vigilance erodes — not because the developer stopped caring, but because their evaluation capacity is depleted. They start accepting suggestions they didn’t fully read. They approve pull requests they didn’t fully review.
This is exactly what Amazon discovered the hard way in March 2026, when a six-hour outage on their main ecommerce site was traced back to AI-assisted code deployments that hadn’t received adequate human review. Their internal briefing described a pattern of junior and mid-level engineers shipping AI-generated code without the scrutiny it required. The fix was to mandate senior sign-off on all AI-assisted deployments.
But here’s the thing: even senior engineers are susceptible to accept fatigue. A tired senior dev reviewing AI code at 6pm on a Friday is more dangerous than no review at all — because they’ll approve things they shouldn’t, with full confidence.
2. Extreme Context Switching
AI tools work fast. That’s the point. But “fast” in practice means a developer can have six workstreams running in parallel — each one being advanced by AI suggestions that require evaluation and steering.
The human context-switching cost doesn’t disappear because AI is doing more of the execution. If anything, it multiplies. The developer is now the bottleneck across six concurrent threads instead of two or three.
Context switching at this frequency and intensity isn’t just inefficient. It’s cognitively exhausting, and the exhaustion accumulates across a workweek in ways that are hard to articulate in a retrospective.
3. The “Never Done” Feeling
Here’s a subtle psychological effect that’s difficult to measure but easy to recognize once you’ve felt it:
When a human developer writes code at human speed, there’s a natural sense of completion. A feature is done. A bug is fixed. The list gets shorter.
When AI is generating output at 10x human pace, the backlog never shrinks at a satisfying rate. There’s always more to review. Always another suggestion to evaluate. Always another module that could be improved now that generation is cheap.
The goalposts move at AI speed. The human’s sense of progress doesn’t.
This creates a low-grade but persistent sense of inadequacy — a feeling of always being behind, even when the team is technically ahead of schedule. Over time, this erodes motivation and satisfaction in ways that look like disengagement before they ever look like fatigue.
4. Loss of Authorship
This one is harder to quantify but perhaps the most humanly significant.
Software development — at its best — is a craft. Senior developers derive genuine satisfaction from building things they understand deeply, that reflect their judgment and experience. The work has their fingerprints on it.
With AI-generated code, that relationship changes. A developer might deploy a module they largely didn’t write, didn’t fully read, and couldn’t reconstruct from memory. Technically, it works. But there’s a nagging question:
“Did I build this, or did I approve it?”
That distinction matters for motivation, for learning, and for the kind of deep expertise that makes senior developers irreplaceable. When developers stop feeling like authors and start feeling like reviewers of machine output, something important slowly erodes.
Why This Lands Hardest on Your Best People
Here’s the cruel irony of AI Fatigue: it disproportionately affects the people you can least afford to lose.
Junior developers using AI tools largely experience the productivity gains without bearing the full review burden. They generate code, it looks reasonable, they ship it. The downstream consequences of that code — the maintenance burden, the architectural debt, the edge cases that will surface in six months — don’t land on them yet.
Senior developers bear all of it. They’re the ones doing the deep review. They’re the ones catching what the model missed. They’re the ones maintaining the mental model of the system that no AI tool has access to. They’re the ones making the judgment calls at midnight when something breaks.
A July 2025 Fastly survey found that senior engineers produce nearly 2.5x more AI-generated code than junior ones — because they’re better at evaluating and directing model output. But almost 30% of those seniors reported that fixing and reviewing AI output consumed most of the time they’d saved.
They’re getting faster. And they’re getting more depleted. At the same time.
What Nobody Is Measuring
Pull up your engineering metrics dashboard. You’ll likely see:
- Deployment frequency ✓
- Cycle time ✓
- Sprint velocity ✓
- Code coverage ✓
Now tell me: what’s your senior developer’s decision density at 4pm on Thursday?
What’s the quality of their code review on Friday afternoon compared to Monday morning?
How many consecutive weeks have they been operating at peak AI-assisted intensity without a genuine cognitive recovery period?
Nobody is measuring these things. They’re invisible to every productivity framework we’ve built, because those frameworks were designed for a world where the bottleneck was execution speed — not decision quality.
We’ve moved the bottleneck. We haven’t updated our instruments.
The Organizational Risk Nobody Is Pricing In
Here’s where this becomes a business problem, not just a wellness problem.
In the AI-optimized team — smaller headcount, higher individual leverage, fewer redundancies — the senior developer is no longer one important person among several. They’re the critical path through everything.
They hold the architectural knowledge. They perform the quality gate. They make the calls that AI can’t make. They’re the human context that gives the AI’s output coherence.
When that person burns out — and “burn out” here means anything from declining performance to resignation — the impact is not linear. You don’t lose 20% of your team’s capacity. You lose the person who was providing judgment to the other 80%.
The team doesn’t slow down. It stops.
And the warning signs are invisible until they’re not. Retention risk from AI Fatigue doesn’t show up in your sprint metrics. It shows up when someone puts in their notice, and three months later you realize you can’t reconstruct the architectural decisions they were carrying in their head.
What You Can Actually Do About It
Measure what matters. Start tracking review quality over time, not just review completion. A senior dev who approved 40 PRs in a week without pushing back on anything is not a productivity hero — they’re a warning sign.
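One way to approximate that signal is to track what fraction of a reviewer’s approvals carried zero pushback. A minimal sketch over hypothetical review records — the field names, the example data, and the idea of a “rubber-stamp rate” are illustrative assumptions, not an established metric or a real API:

```python
# Sketch: per-reviewer fraction of approvals with no comments and no
# requested changes. Record fields are hypothetical, not a real API.
from collections import defaultdict

def rubber_stamp_rate(reviews: list[dict]) -> dict[str, float]:
    """Fraction of each reviewer's approvals that had zero pushback."""
    approvals = defaultdict(int)
    silent = defaultdict(int)
    for r in reviews:
        if r["state"] == "approved":
            approvals[r["reviewer"]] += 1
            if r["comment_count"] == 0 and not r["requested_changes"]:
                silent[r["reviewer"]] += 1
    return {who: silent[who] / n for who, n in approvals.items()}

# Hypothetical week of review activity
reviews = [
    {"reviewer": "ana", "state": "approved", "comment_count": 0, "requested_changes": False},
    {"reviewer": "ana", "state": "approved", "comment_count": 3, "requested_changes": False},
    {"reviewer": "bo", "state": "approved", "comment_count": 0, "requested_changes": False},
    {"reviewer": "bo", "state": "approved", "comment_count": 0, "requested_changes": False},
]

rates = rubber_stamp_rate(reviews)
```

A reviewer sitting near 1.0 week after week is the warning sign described above — not proof of fatigue on its own, but a prompt for a conversation.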
Design recovery into the sprint. Not just “no meetings Friday.” Actual cognitive recovery: unstructured problem-solving time, deep work on a single problem, time away from AI-generated output entirely. The brain needs to operate in non-evaluation mode regularly.
Redistribute the review load intentionally. In a lean team, all review pressure defaulting to one senior is a structural failure, not a sign that person is great at their job. Build redundancy into the review layer even when it feels inefficient.
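Even a trivial rotation beats “whoever is free” quietly defaulting to the same senior. A minimal sketch of round-robin assignment — the PR identifiers and reviewer names are placeholders:

```python
# Sketch: spread review load evenly with a round-robin rotation.
# PR ids and reviewer names are placeholders.
from itertools import cycle

def assign_reviewers(prs: list[str], reviewers: list[str]) -> dict[str, str]:
    """Assign each PR a reviewer in rotation so load spreads evenly."""
    rotation = cycle(reviewers)
    return {pr: next(rotation) for pr in prs}

assignments = assign_reviewers(
    ["pr-101", "pr-102", "pr-103", "pr-104"],
    ["senior_a", "senior_b", "mid_c"],
)
```

In practice you would weight the rotation by expertise and PR risk, but the structural point stands: the distribution should be a deliberate policy, not an emergent property of one person’s conscientiousness.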
Make authorship visible. Create space for developers to build things they fully own — from conception to deployment, with AI as a tool but their judgment driving every decision. Don’t let every engagement become a review task.
Have the honest conversation. Ask your senior engineers directly: how many decisions did you make today? When was the last time you felt genuinely recovered? What’s the quality of your attention at 5pm compared to 10am? Most of them already know the answer. They’ve just never been asked.
The Operator Problem
We’ve spent two years optimizing the machine.
The AI tools are faster. The code generation is better. The automation is more capable. The team is leaner. The velocity numbers look great.
We forgot about the operator.
Every complex system has operators — human beings who maintain situational awareness, make judgment calls, and intervene when the automation produces something that looks right but isn’t. Aviation learned this. Nuclear power learned this. Surgery is learning this.
Software development is next.
AI Fatigue is the occupational hazard that comes with being the human in the loop when the loop is running at machine speed. It doesn’t have an OSHA standard. It doesn’t have an established treatment protocol. Most HR systems don’t have a category for it.
But it’s real. And if you’re leading an engineering team right now, you’re probably watching it happen — in the form of a senior developer who’s technically performing but seems somehow less present, less curious, less sharp than they were eighteen months ago.
That’s not a people problem. It’s a structural one.
And the fix isn’t to slow down the AI. It’s to redesign how humans operate alongside it.
Diego Fiorentin is the founder of NextTo.ai, an AI consulting firm that helps SMBs implement AI operationally — business-first, not tool-first. This article was adapted from a talk delivered at the 1950Labs tech meetup in Montevideo, March 2026.
If your team is navigating the transition to AI-assisted development and you’re seeing signals like the ones described here, book a conversation →
