The New Senior Engineer Skill: Knowing When NOT to Use AI
Everyone in tech right now is trying to prove how much AI they’re using. I want to talk about the opposite.
AI writes 30% of Microsoft’s code and a quarter of Google’s [1]. Meta is tying performance reviews to “AI-driven impact” [2]. Companies that announced AI strategies in 2023-2024 saw their stock prices jump over 6% compared to those that didn’t [3]. The incentives are loud and clear: more AI, faster, everywhere.
I get it. I’m not naive about why this is happening. Investors want to see AI adoption metrics. Executives want to show them. When the market rewards you for saying “30% of our code is AI-generated,” you’re going to find ways to hit that number.
But I’ve been building software for 15 years, and I think someone with actual engineering experience needs to say this: the percentage of AI-generated code in your codebase is a terrible metric. Lines of code have never measured anything useful. Wrapping them in AI doesn’t change that.
Cortex’s 2026 benchmark report [4] found that teams using AI ship 20% more pull requests per author — but incidents per PR went up 23.5% and change failure rates climbed ~30%. IEEE Spectrum reported in January [5] that some experienced developers are actually slower with AI tools, because verifying AI-generated code takes longer than writing it yourself when you know the domain. We’re not shipping faster. We’re shipping more. That’s not the same thing.
A developer told the SF Standard [6] he used to feel like a craftsman and now feels like a factory manager at IKEA. That stuck with me. You stop reasoning about why the code should work a certain way and start evaluating whether what the AI gave you is close enough. You trade understanding for throughput, and when the incidents start, nobody can explain why.
I want to be clear: I’m not anti-AI. I use it every day, and it’s great for almost anything, with one condition: I already know what correct looks like. That condition is the whole point, so it bears repeating:
AI is perfect when you already know what “correct” looks like.
I’m saying this as someone with 15 years in production codebases, not as someone trying to be contrarian for attention: there are real situations where the right engineering decision is to not use AI. When you’re learning a new system. When the problem is organizational, not technical. When you’re debugging something subtle that the model will quietly paper over instead of solving [5].
What worries me most is the junior developer pipeline. Open source maintainers at VLC and Blender are drowning in AI-generated contributions — VLC’s CEO called the quality from people junior to the codebase “abysmal” [7]. We’re building an industry where juniors skip the struggle that creates seniors, and then we’ll wonder why nobody can think about systems.
The engineers I’ve always respected most write less code, not more. They kill features that shouldn’t be built. They say “we don’t need that” in planning meetings and save the team months. AI can only add; the discipline to subtract is still yours.
[1] MIT Technology Review — Generative Coding: 2026 Breakthrough Technologies
[2] eWeek — Meta Makes AI Adoption a Formal Part of Performance Reviews
[3] CodeRabbit — What Percentage of Your Code Should Be AI Generated?
[4] Cortex — Engineering in the Age of AI: 2026 Benchmark Report
[5] IEEE Spectrum — AI Coding Degrades: Silent Failures Emerge
[6] SF Standard — AI Writes the Code Now. What’s Left for Software Engineers?
[7] TechCrunch — For Open Source Programs, AI Coding Tools Are a Mixed Blessing
Lucas Pinto is a Staff Software Engineer with 15 years of experience across startups and enterprise. He writes about building software, leading teams, and the judgment that tools can’t replace.