AI Makes Code Cheaper. Your Experienced Developers Just Became More Valuable, Not Less.
Over the past year, I've watched a pattern repeat itself. A founder reads about AI replacing developers. They see headlines about companies cutting engineering teams by 30%. They start wondering if they're overstaffed. Maybe they could trim down, let AI handle more of the work, redirect that salary budget somewhere else.
Then six months later, they're quietly rehiring. Or scrambling to fix what broke. Or watching their best remaining developer hand in notice because they're exhausted from covering the gaps.
The data is clear now. 55% of companies that laid off people for AI regret it (Forrester, via HR Executive). Forrester predicts half will rehire those roles, often offshore at lower pay, which tells you everything about whether the AI actually delivered (The Register). MIT found that 95% of generative AI projects failed to deliver measurable returns. Only 12% of CEOs report that AI accomplished both revenue growth and cost reduction (MIT and PwC, via WhatJobs).
Microsoft laid off 15,000 people (Final Round AI). They still have over 200,000 left. They kept their best. If you have five developers, the math is different. You don't have 200,000 people to absorb a bad decision.
Microsoft laid off 15,000 and kept 200,000.
You have 5 developers. Do the math.
The Hype Comes From People Who Benefit From It
AI hype is loudest in certain circles. Content creators get views from hot takes about developers being obsolete. Model providers need you to believe their tool can replace your team. Platform vendors want you to buy seats. Some of this hype is deserved. The models have gotten significantly better, especially in the last few months. But much of it is motivated by people who profit when you believe the replacement narrative.
Small companies feel this as FOMO. You don't want to fall behind. You want to use the latest technology. You want to optimize. These are good instincts. But firing capable people to replace them with AI is not optimization. It's a category error.
Big companies use technology shifts as cover for other agendas. They cut non-performers and call it AI transformation. They adjust valuations and call it efficiency. They offshore work and call it innovation. You see the headline about AI-driven layoffs, but you don't see the spreadsheet that shows they were planning workforce reductions anyway.
Big Companies (200k staff)
- Use tech shifts as cover for workforce reductions
- Adjust valuations, call it efficiency
- Can absorb mistakes with thousands of employees left
Small Companies (5 people)
- Don't have bureaucracy for slackers to hide
- Already trust their small team
- Can't afford to lose one good developer
Small companies are different. You don't have bureaucracy for people to hide behind. If someone on your five-person team is a slacker, you already know. You've already dealt with it or you're about to. The people you have left are the ones you trust, the ones who know your product, the ones who make decisions when things break at 2am. Firing them because a blog post said AI can write code is not a plan.
More Code Does Not Mean More Progress
Developers using AI tools report feeling more productive. They write more code faster. They claim 20-30% time savings. Some of this is real. But when you measure at the company level, the throughput doesn't increase proportionally.
Here's why. A study from early 2025 tracked experienced developers on real projects. Before starting, they predicted AI would make them 24% faster. After finishing, they estimated it made them 20% faster. The actual measurement showed they were 19% slower (Becker et al., arXiv).
The perception gap matters. Developers believe they're moving faster because they're generating more code. But someone still needs to review that code. Someone needs to ensure it follows your architecture. Someone needs to catch the bugs, and AI-generated code has 1.7 times more bugs than human-written code (ByteIota). Teams with high AI adoption merge 98% more pull requests, but their review time increases 91% (Faros AI). The bottleneck just moved.
Before AI → With AI
- AI-generated code has 1.7x more bugs
- Teams merge 98% more PRs
- But review time increases 91%
The bottleneck just moved.
I'm not saying AI is useless. I use it daily. My team uses it. The progress in model capabilities over the last few months has been real. Tools like Cursor and Claude Code with good context can be genuine productivity multipliers for small teams. But the gains come from augmentation, not replacement.
OpenAI's own development manager was on a podcast recently. He said almost everyone at OpenAI uses Codex for coding. The AI writes the initial version. Then most of that code gets rewritten, redirected, refined. Sometimes manually, sometimes through more AI iteration. But he didn't say the AI does everything end-to-end. And this is someone with unlimited access to the best models, the most tokens, and every incentive to claim it works perfectly.
If OpenAI can't run fully automated with unlimited tokens and direct access to the best models, you definitely can't.
What Actually Works
I've spent 23 years in software development. I've conducted over 700 technical interviews. I've seen every wave of "this will replace developers" hype, from offshore outsourcing to no-code platforms and now AI. The pattern is always the same. The technology is real. The capabilities improve. But the human judgment layer doesn't disappear. It changes shape.
Here's what I tell founders when they ask about AI and their team.
Use AI with structure
- Standardize across similar roles (backend, frontend, etc.)
- Share prompts and learnings in your repo
- Document what works, treat it like any shared practice
- Nudge everyone in the same direction, but allow tool preferences
Match oversight to criticality
- MVP stage: Let AI run, optimize for speed to validation
- Paying customers: Ensure acceleration doesn't create unpredictable failures
- Mission-critical paths: Maintain human oversight
Plan for where the industry is going
- Developers will orchestrate AI agents, not just type code
- Context extraction is the key unlock
- Your experienced team knows what matters and what doesn't
- Make their knowledge accessible to the models
First, use AI, but use it with structure. Check how your developers are using it. Maybe they're already using it but you don't know how. Maybe they're not using it at all. Either way, standardization helps. If your backend developers are all using AI differently, they can't share learnings. They can't build up a shared set of prompts and context documents. When you hire someone new, or when a current team member who's less AI-savvy wants to catch up, there's no clear path.
You don't need to force everyone onto the exact same tool. Some people work better with Cursor, some with GitHub Copilot, some with Claude directly. That's fine. But if you can nudge them in the same direction, they benefit from each other's experience. Put the prompts in the repo. Document what works. Treat it like any other shared practice.
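In practice this can be a handful of checked-in files. Here's a minimal sketch of one way to lay it out; the `docs/ai/` folder is a convention I'm assuming for illustration, while `.cursorrules`, `CLAUDE.md`, and `.github/copilot-instructions.md` are the instruction files Cursor, Claude Code, and Copilot respectively look for (check each tool's current docs, since these conventions keep changing):

```
repo/
├── .cursorrules                    # shared instructions for Cursor users
├── CLAUDE.md                       # shared instructions for Claude Code users
├── .github/
│   └── copilot-instructions.md    # shared instructions for Copilot users
└── docs/ai/                        # tool-agnostic, reviewed like any other doc
    ├── prompts/
    │   ├── api-endpoint.md         # the "add an endpoint" prompt that works here
    │   └── migration.md            # how we prompt for schema migrations
    └── context/
        ├── architecture.md         # the context every tool should be given
        └── conventions.md          # naming, error handling, test style
```

The exact layout doesn't matter. What matters is that the prompts and context live in version control, get reviewed in pull requests, and are there on day one for the next hire.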
Second, understand which tasks AI handles well and which it doesn't. The models are great at some things, good at others, and below mediocre at the rest. Boilerplate code, test generation, documentation, and refactoring well-defined sections work well. Complex architectural decisions, understanding business context, and evaluating tradeoffs between technical debt and delivery speed don't.
Match your oversight to the criticality. If you're in MVP stage and just trying to get something in front of users, let AI run. Your goal is speed to validation, not perfect code. But if you have paying customers and contractual obligations, faster cannot mean unpredictable. You need to ensure that acceleration doesn't introduce failure modes you can't afford.
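One cheap way to encode "match oversight to criticality" is a CODEOWNERS file plus branch protection: with "require review from code owners" enabled, GitHub enforces a human sign-off only on the paths you mark. A sketch, with hypothetical paths and team names:

```
# .github/CODEOWNERS
# With branch protection set to "require review from Code Owners",
# these rules are binding. Everything unlisted merges under your
# normal, lighter review process.

# Mission-critical paths: a senior human signs off, AI-written or not
/src/payments/   @acme/senior-engineers
/src/billing/    @acme/senior-engineers

# MVP / experimental surface: no entry here, so no extra gate,
# and AI-assisted changes can move at full speed
```

The same idea works in any review tooling: the gate lives on the path, not on whether a human or an AI wrote the diff.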
Third, plan for where the industry is going, not where it is. No one seriously believes software development won't change more than it already has. This is the nature of the career. We've moved from punch cards to C to higher-level languages to scripting languages to languages that run everywhere from edge devices to the cloud. Now we're adding another abstraction layer. Developers will spend more time directing AI agents and less time typing syntax. That's coming.
But good stewards of your product will be valuable whether they're typing code themselves or orchestrating a swarm of agents. The knowledge they have about your customers, your codebase, your technical compromises, your deployment constraints — that doesn't transfer to an AI by default. It needs to be extracted, structured, made accessible. Most teams use AI in chat mode. Some use it in agentic mode through their IDEs, which is better. But even then, the developer's context is still mostly in their head.
If you can get that context out and make it available to the models, even today's models perform significantly better. Your experienced developers are the ones who can do that extraction. They're the ones who know what matters and what doesn't. They're the ones who can evaluate whether the AI's suggestion makes sense in your specific situation.
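Here's what that extraction can look like. This is a hypothetical excerpt of the kind of file I mean (the specifics are invented for illustration); the point is that the "why" knowledge in your seniors' heads becomes something every model, and every new hire, can read:

```markdown
# docs/ai/context/architecture.md (hypothetical excerpt)

## Payments
- The retry workaround in `charge.ts` exists because our payment
  provider double-fires webhooks under load. Do NOT "simplify" it away.

## Caching
- The cache layer in front of /search is load-bearing: removing it
  takes the database down at peak. Any change here needs a load test.

## Deployment
- We deploy Tuesdays only. Three Friday deploys failed on the warm-up
  step; read the postmortems before touching the pipeline.
```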
Trust Your People, Then Help Them Transition
When big companies lay off thousands, they still have thousands left. They kept their best people. You probably already have the people you can trust. The ones who've been with you through production incidents and architecture rewrites and customer escalations. The ones who know why that weird workaround exists in the payment flow and what happens if you remove it.
What AI Can't Access Without Your Team
- Why that payment workaround exists
- What happens if you remove the cache layer
- Which customers need manual handling
- The three times deployment failed and why
- Why we chose this architecture over that one
- The compromises we made to ship on time
Your experienced developers are the bridge.
Help them be better with AI, but also trust them about how to apply it. Your tech lead knows your codebase better than any blog post author. Your senior developer understands your customers better than any AI model. Sometimes applying AI to a codebase, even in a smart way, calls for more caution and slower progress than the online hype suggests.
The market is changing. Lower-complexity SaaS products might see churn as AI makes alternatives easier to build. Startups that thought they had a moat because their product was complex might get unexpected competition. These are real challenges. But they're navigated by people with strong analytical minds who understand your specific context, not by wholesale replacement with tools that are still learning how to not delete your production database.
There have been multiple instances of startups claiming amazing AI capabilities, only for it to turn out they were secretly outsourcing to low-paid contractors. No AI involved. That's an embarrassment to the industry, but it tells you something about the gap between hype and capability.
I'm not saying AI is useless. I'm saying keep your best people. Help those who might be struggling to adapt. Put processes in place for this transition period. No one knows what will happen in two years, but for now, you need people. You need the leads who can make judgment calls under pressure. You need the developers who have context about why your architecture looks the way it does. You need the team members who can evaluate whether that AI-generated code actually solves the problem or just looks like it does.
AI makes code cheaper.
Decisions about what code to write just became more expensive.
Your developers make those decisions.
They just became more valuable, not less.