The Productivity Panic Is Your Problem Now
Your engineering org is caught between executives demanding AI ROI and engineers terrified their own improvements are arguments for their elimination. Here’s the playbook for leading through it.
In one of my coaching sessions, a fellow EM mentioned that several engineers on their team — independently, in separate 1:1s — brought up the same thing. They were feeling crushed. Not by the workload itself, but by the volume of open threads they were pulling simultaneously.
Before Claude Code and the other AI tools, they’d have one, maybe two tracks they were actively working on. That’s how software engineering has always operated — you take a problem, you think about it deeply, you ship it, you move on. Now, because the tools make it possible to generate code across multiple contexts at once, they were running three, four, five workstreams simultaneously. And it was breaking their brains.
The overwhelm wasn’t about the hours. It was about the loss of focus and the flood of second-guessing. When you can do more, you start asking: should I be doing more? Are these the right things? What else could I be working on? Every open thread becomes a low-grade anxiety signal — not because the work is hard, but because the optionality itself is paralyzing.
Then they told me it came up in a team retro. One engineer said it in front of everyone: they were experiencing an existential crisis. Real stress. And they specifically named AI tooling as the accelerator — not because the tools were bad, but because the tools made everything feel simultaneously possible and overwhelming.
Interestingly, every conversation I’m having with EMs and PMs in my network circles back to the same thing — different teams, different companies, loads of excitement accompanied by the same anxiety. The best engineers and hands-on PMs are the most stressed, because they can see just how much AI could do and feel guilty about every hour they spend thinking instead of generating. The newest hires, meanwhile, are shipping AI-generated code they can’t fully explain when asked.
That retro moment is what made me dig into the data. What I found wasn’t reassuring.
The Panic Sandwich
Engineering leaders are caught between two panics. I call it The Panic Sandwich.
The top slice — board and exec panic:
“Why aren’t we moving faster with AI? What’s our adoption rate? Jack Dorsey just cut half his company and the stock surged 24%. Our investors are asking questions. Show me the ROI.”
At Series A–B companies, every board meeting now includes a “what’s your AI strategy?” slide. 81% of IT leaders say the C-suite is now the main driver of AI initiatives — up from 53% just a year earlier — and 51% expect measurable AI ROI within 12 months, despite no consensus on how to measure it (Flexential, 2025 State of AI Infrastructure Report).
The bottom slice — team panic:
“Am I about to be replaced? Should I be using AI more? Is my job safe? I can do five things at once now but I can’t tell if any of them are the right things.”
Your engineers read Hacker News. They see the Bloomberg headlines. They’re doing the math on their own replaceability. And if you’ve hired new ones, they’re doing that math during onboarding.
As an engineering leader, you’re the filling.
You absorb the pressure from above. You absorb the anxiety from below. You translate between two groups that are panicking about the same thing from opposite directions — leadership worried AI isn’t delivering enough, engineers worried it’s delivering them out of a job.
Here’s the thing that makes this particularly dangerous during a scaling push: AI amplifies whatever culture you already have. If your engineers understand the user and the product, AI makes them faster at building the right things. If they don’t — if they’re technically strong but product-blind — AI makes them faster at building the wrong things. More wrong features, faster, with more code to review and more bugs to find.
You can either be a passive panic sponge — absorbing anxiety from both sides until it erodes your judgment — or you can be a clarity driver who turns ambient dread into organizational decisions.
The Numbers That Are Feeding the Panic
The headline data your team is reading:
Block cut 50% of its workforce on February 26, 2026 (TechCrunch). Gross profit was up 24% YoY. It’s a profitable company deciding it needed fewer people. Stock surged 24%.
Oracle cut 30,000 employees on March 31 — 18% of its workforce — to free up $8–10 billion for AI data centers (CNBC). Net income was up 95% the previous quarter.
Dorsey’s prediction — “most companies will do the same within a year” — landed the same day Andrej Karpathy posted that programming has become “unrecognizable” since December 2025.
But the data that matters most isn’t about layoffs at 150,000-person companies. It’s about what’s happening inside teams that are still growing.
UC Berkeley found developers using AI are working more hours, not fewer (Bloomberg, February 2026). HBR reported that AI augments what employees can do, then delivers “fatigue, burnout, and a growing sense that work is harder to step away from” (HBR/Gartner, February 2026). One engineering leader told TechCrunch: “expectations have tripled, stress has tripled, actual productivity only up maybe 10%” (TechCrunch, February 2026).
And if you’re in the middle of scaling from 20 to 50 engineers, this is the environment your new hires are walking into.
The Velocity Illusion (And Why It’s Lethal During Scaling)
Your senior engineer who used to review four PRs a day is now staring at twelve — and half of them are twice the size they used to be. That’s the velocity illusion in practice.
Faros AI analyzed 10,000+ developers across 1,255 teams (February 2026). Teams with high AI adoption merge 98% more PRs. Leadership sees that and thinks the investment is paying off. But the full picture:
PR review time increased 91%
Bugs per developer rose 9%
Duplicated code increased 8x
Refactoring activity dropped ~60% (GitClear, cited in the Faros AI Productivity Paradox report)
Despite individual output spikes, they observed no significant correlation between AI adoption and improvements at the company level. The delivery pipeline isn’t moving faster. It’s clogging. The bottleneck didn’t disappear — it moved from writing code to reviewing code. Nobody updated the scoreboard.
The 2025 DORA Report — nearly 5,000 respondents, 100+ qualitative interviews, the most rigorous annual study in engineering productivity — now says it plainly: “AI does not automatically improve software delivery performance.” It amplifies existing conditions. Teams with mature platform engineering, small-batch workflows, and strong engineering hygiene pull further ahead. Teams that are fragmented or lacking product discipline accelerate their decline. AI is a multiplier, not a corrective — and most scaling orgs don’t have the foundations in place to multiply something good.
The quality data is getting worse, not better. METR found that roughly half of test-passing AI-authored pull requests would be rejected by the repository’s own maintainers — agents optimize for green CI, not for code quality, architectural fit, or readability. Your pipeline says “all checks pass.” Your codebase says otherwise. The Cortex 2026 Benchmark Report fills in the rest: incidents per pull request up 23.5%, change failure rates up ~30% for teams that adopted AI without upgrading their quality practices. And CircleCI’s 2026 State of Software Delivery — 28 million workflows across 22,000 organizations — shows main branch success rates at a five-year low of 70.8%. AI throughput is outrunning validation capacity across the industry.
Waydev captured the punchline: AI drove a 59% surge in engineering throughput — but releases are not keeping pace. More code. Same number of releases. The work is going somewhere, but it’s not reaching users.
Meanwhile, 90% of CEOs report AI has had no measurable impact on productivity (NBER/Fortune, February 2026). Only 1 in 50 AI investments delivers transformational value (Gartner/HBR, February 2026). But 96% of executives expect AI to increase productivity (AskFlux, 2025). That gap between expectation and measurement is where the panic lives.
Now layer on Anthropic’s study from January 2026 — a randomized controlled trial with software developers: those using AI scored 17% lower on comprehension tests, roughly two letter grades worse than those who coded by hand. The biggest gap was in debugging — exactly the skill you need most when a growing portion of your codebase is AI-generated.
Think about what this means at 40 engineers. You just brought on 15 new people in two quarters. They’re using AI tools from day one, shipping code faster than any cohort you’ve onboarded before. But if they’re fully delegating without understanding the output, you’re building a team that’s fast but increasingly unable to validate what it ships.
At 15 engineers, your seniors could absorb this in review. At 40, they’re buried.
From Panic Sponge to Clarity Driver: The Playbook
To be clear: AI does deliver real value. On my team it’s accelerated everything from boilerplate and test generation to higher-leverage work — prototyping POCs, answering support tickets, digging into implicit product decisions buried in the codebase, feature flag cleanups, etc.
The problem isn’t AI. It’s adopting AI without the organizational infrastructure to channel it well. Every tactic below assumes you’re keeping AI and using it aggressively — just with guardrails.
1. Protect Product Sense First
This is the most important tactic: AI can generate code, but it can’t generate product judgment.
An engineer who doesn’t understand why users need a feature will produce technically correct but product-wrong code — and AI will help them produce it three times faster. More wrong code, faster, all passing CI. If the culture your team had at 15 engineers — deep product understanding, engineers who could explain *why* they were building something — is eroding at 40, AI will accelerate that erosion.
Build product understanding into your AI practices:
Require PR descriptions to articulate the user problem, not just the technical change. “Adds pagination to the API” tells you nothing. “Users processing more than 500 transactions per month hit timeout errors — this adds cursor-based pagination to keep response times under 2 seconds” tells you the engineer understands what they’re building and why (a CI sketch for enforcing this follows this list)
For newer engineers: use the “Generation-Then-Comprehension” pattern from the Anthropic study — generate with AI, then understand the output and its product implications before moving on. Those who did this retained 86% comprehension. Those who fully delegated retained 39%. If they can’t explain how the code connects to a user need, they’re not ready to ship it
Run product walkthroughs alongside code reviews. When reviewing an AI-generated feature, check whether the code solves the right problem, not only its correctness. This is the review that AI can’t do for you.
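If you want the PR description requirement to be a gate and not just a norm, it’s cheap to enforce mechanically. Below is a minimal sketch of a CI check in Python, assuming your pipeline can hand the PR body to a script via a `PR_BODY` environment variable; the `## User problem` heading and the word-count floor are conventions invented here for illustration, not a standard.

```python
import os
import re
import sys

# Hypothetical CI gate: fail the build when a PR description doesn't
# articulate the user problem. Assumes the pipeline exposes the PR body
# via a PR_BODY environment variable (easy to wire up in most CI systems).
REQUIRED_SECTION = "## User problem"
MIN_WORDS = 15  # arbitrary floor so "fixes stuff" can't pass

def check_pr_body(body: str) -> list[str]:
    """Return a list of problems with the PR description (empty = OK)."""
    if REQUIRED_SECTION not in body:
        return [f"Missing a '{REQUIRED_SECTION}' section."]
    # Take the text between the required heading and the next heading.
    section = body.split(REQUIRED_SECTION, 1)[1]
    section = re.split(r"^#{1,6} ", section, maxsplit=1, flags=re.M)[0]
    if len(section.split()) < MIN_WORDS:
        return [f"'{REQUIRED_SECTION}' is under {MIN_WORDS} words. "
                "Say who hits the problem and why it matters."]
    return []

if __name__ == "__main__":
    problems = check_pr_body(os.environ.get("PR_BODY", ""))
    for p in problems:
        print(f"ERROR: {p}")
    sys.exit(1 if problems else 0)
```

The check can’t judge whether the problem statement is true (that’s still the reviewer’s job), but it turns “no product context” into a build failure instead of a nag.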
The engineers who’ll be most valuable in two years are the ones who generate the right code — because they understand the product deeply enough to direct AI toward the work that actually matters.
2. Name the Elephant — Then Bound It
Stop waiting for your team to bring up the anxiety. They might not — not because they aren’t worried, but because they don’t know if it’s safe to be worried at work. Name it explicitly.
In 1:1s, ask directly:
“How is the AI tooling actually affecting your day-to-day? Not the productivity stats — your experience.”
“I know the industry headlines are out there. What questions do they raise for you about our team?”
“Are you finding it harder to focus with more things in flight?”
Then bound it.
Help your team separate what’s actually changing on this team from what’s happening in the news cycle. “Oracle cut 30,000 people” is a headline about a 160,000-person company betting $156 billion on data centers. “We just hired four engineers and our scope is expanding” is your team’s reality. Those are two different stories. Your job is to make sure your people are living in the right one.
The patterns across your 1:1s will tell you where the anxiety is concentrated — and whether it’s about AI specifically or about a broader lack of clarity on what the team is building and why. Often it’s the latter, and AI is just the accelerator.
3. Coach for Leverage, Not Volume
This one came directly from my team’s experience. AI tooling didn’t just speed up their work — it fragmented it. When you can generate code across five contexts simultaneously, you end up context-switching across five work streams. And the instinct becomes: I can do more things, so I should do more things.
That instinct is wrong. It always has been.
An engineer’s leverage isn’t in solving a huge volume of problems. It’s in solving a specific, well-chosen problem that affects a large volume of users, revenue, or other business outcomes. AI didn’t change that equation; it just made the temptation to scatter harder to resist.
Your job as a leader is to coach engineers back toward leverage. Help them ask: “Of the five things I could start right now, which one matters most to the user? Which one moves the metric we actually care about?” Then go deep on that one thing. AI makes the execution faster — great. Use the time saved to understand the problem better, not for AI-powered snacking.
This seems counterintuitive when leadership is pushing for throughput. But an engineer deeply focused on one high-leverage problem will ship more real impact than an engineer scattered across five threads of AI-generated drafts that all need review and none of which were the most important thing to build.
Frame it this way to leadership: “We’re optimizing for impact per engineer, not tasks per engineer.”
4. Re-frame Measurement Upward
Your board is probably tracking the wrong metrics. Here’s what that conversation sounds like — and how to redirect it.
What your board says: “We’re investing in AI tools across engineering. What’s the ROI? How many PRs are AI-assisted? What’s our adoption rate?”
What you say back: “Adoption is high — our engineers use AI daily. But let me show you the full picture. We’re merging 98% more PRs. That sounds great. But review time is up 91%, and our DORA metrics — the ones that actually measure delivery performance — are flat. PR count measures activity, not outcomes. What I want to show you instead is product throughput: features that solve user problems, shipped to production, validated with data. That’s the number that matters for product-market fit.”
This re-framing works because you’re not pushing back on AI — you’re pushing back on bad measurement of AI. That positions you as the person thinking clearly about what actually drives the business. Anchor on Gartner’s finding that only 1 in 50 AI investments delivers transformational value. The name of the game is results, not adoption.
5. Define AI Delegation Zones
The Anthropic study showed that how people use AI matters far more than whether they use it. Remove ambiguity by giving your team explicit zones:
🟢 Green zone — high delegation, low risk: Boilerplate code, test scaffolding, documentation drafts, simple refactors or cleanups. AI accelerates these and verification cost is low.
🟡 Yellow zone — AI-assisted, human-verified: Feature implementation, API integrations, data transformations. AI builds, humans review carefully. Every AI-generated change here gets the same scrutiny as any other PR. Even though this zone is “yellow”, it’s where the majority of your team’s PRs will fall.
🔴 Red zone — human-led, AI follows: Security-critical code, architecture decisions, complex business logic, anything touching money or user data. Humans lead. AI can suggest, but a human must understand and own every line.
Across all zones, one thing remains true: the human is accountable for each line of code, whether AI wrote it or not.
The zones do double duty. They give engineers clarity on where AI belongs — reducing the “should I be doing more?” anxiety. And they protect your product quality by concentrating human review where it matters most. At a scaling company, these zones also become part of your onboarding — new engineers know the expectations from week one.
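To make the zones operational from week one, encode them as a reviewable policy file instead of tribal knowledge. Here’s a minimal sketch in Python, with hypothetical path patterns standing in for your repo’s real layout:

```python
from enum import Enum
from fnmatch import fnmatch

class Zone(Enum):
    GREEN = "high delegation, low risk"
    YELLOW = "AI-assisted, human-verified"
    RED = "human-led, AI follows"

# Path patterns are hypothetical; substitute your repo's actual layout.
# First match wins, so the riskiest rules come first.
ZONE_RULES = [
    ("*/payments/*", Zone.RED),
    ("*/auth/*", Zone.RED),
    ("*/migrations/*", Zone.RED),
    ("tests/*", Zone.GREEN),
    ("docs/*", Zone.GREEN),
    ("*", Zone.YELLOW),  # default: AI builds, humans review carefully
]

ORDER = [Zone.GREEN, Zone.YELLOW, Zone.RED]  # least to most strict

def classify(path: str) -> Zone:
    """Map one changed file to its delegation zone."""
    return next(zone for pattern, zone in ZONE_RULES if fnmatch(path, pattern))

def pr_zone(changed_files: list[str]) -> Zone:
    """A PR inherits the strictest zone of any file it touches."""
    return max((classify(f) for f in changed_files), key=ORDER.index)

# A PR touching a test file and a payments module is red overall:
print(pr_zone(["tests/test_api.py", "src/payments/refund.py"]))  # Zone.RED
```

From there it’s a short hop to auto-labeling PRs or routing red-zone changes to senior reviewers. The design choice that matters is that the policy lives in one place anyone can read and amend.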
6. Budget for the Verification Tax
AI-generated code isn’t free. It arrives fast but carries a verification cost that nobody budgets for. This is the operational reality behind the velocity illusion.
Make it explicit in sprint planning:
Add “AI review” as a line item in capacity planning, not an afterthought
Track your review-to-generation ratio as a health metric. If generation is outpacing review, you’re accumulating unverified code — which is just tech debt with a different name
As a rough model: if AI cuts generation time by 60%, add 30% to your review estimates per story point. The net is still faster — but the review budget is now visible, not hidden
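To see the arithmetic of that rough model, split a story’s effort into generation and review, speed one up, and tax the other. A sketch, where the 70/30 split between writing and reviewing is an illustrative assumption of mine and the 60%/30% numbers are the heuristics above:

```python
def adjusted_points(base_points: float,
                    gen_share: float = 0.7,     # assumed: 70% of effort is writing code
                    review_share: float = 0.3,  # assumed: 30% is review
                    gen_speedup: float = 0.6,   # heuristic: AI cuts generation time 60%
                    review_tax: float = 0.3) -> float:
    """Re-estimate a story under AI-assisted development."""
    generation = base_points * gen_share * (1 - gen_speedup)
    review = base_points * review_share * (1 + review_tax)
    return generation + review

# A 5-point story: 5 * 0.7 * 0.4 + 5 * 0.3 * 1.3 = 1.4 + 1.95 = 3.35 points.
# Still a net win, but review now dominates the estimate instead of
# hiding inside it, which is exactly the visibility you want.
print(adjusted_points(5.0))  # ≈ 3.35

def review_to_generation_ratio(review_hours: float, gen_hours: float) -> float:
    """The health metric from above: if this keeps falling while output
    climbs, unverified code is piling up somewhere."""
    return review_hours / gen_hours
```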
During rapid scaling, this matters even more. Every new engineer adds to your generation capacity. Your review capacity doesn’t scale at the same rate. If you don’t budget for the verification tax, you’ll hit a point where you’re shipping code faster than anyone can validate it — and the bugs will find your users before your reviewers do.
The Clarity Your Organization Actually Needs
After that team retro, we worked on a set of specific decisions. We capped WIP at two tracks. We started requiring product context in PR descriptions. We made the review queue visible as a team metric, not an invisible tax on seniors. We named the anxiety out loud in a team meeting and separated what was actually changing from what was headlines.
None of it was revolutionary. All of it was clarity.
You can’t control the Bloomberg headlines. You can’t promise layoffs will never happen. You can’t predict how AI will reshape engineering roles.
But you can control the clarity of the environment you build — where AI belongs, where human judgment is non-negotiable, how you budget for the real cost of AI-assisted development, and whether your engineering practices build product understanding or let AI bypass it.
The productivity panic is real. The pressure from above is real. Your team’s anxiety is real. But the engineering leader who converts panic into clarity — who takes the ambient dread and turns it into organizational decisions, specific practices, and honest conversations — that person isn’t a panic sponge. That person is the reason the team holds together during the hardest scaling phase of the company’s life.
The next time an engineer tells you in a retro that AI tooling is giving them an existential crisis, you’ll have an answer. Not a reassuring platitude — a specific set of organizational decisions that convert their anxiety into something they can actually work with.
If you’re scaling an engineering team through the AI productivity panic and want practical frameworks for maintaining product culture while growing fast, [subscribe to Stratechgist](https://stratechgist.com). I write about building product-minded engineering teams at scale — lessons from the trenches. You can also find me on LinkedIn where I share shorter takes on what’s actually working in engineering leadership right now.
If your team is growing past 20 engineers and you’re seeing velocity drop despite AI adoption, drop a reply or a DM and let’s compare notes. I’m happy to help.
I might put this on a t-shirt – anyone interested in merch? Hit me up.