AI Has Broken Software Engineering Interviews. Now What?
Introduction
Here's something most hiring managers haven't caught up with yet: the way we interview software engineers is fundamentally broken. Not slightly outdated. Broken.
For years, the standard has been some combination of leetcode-style problems, take-home projects, and whiteboard sessions. The assumption was always that you're testing a candidate's ability to write code. But in 2026, every working engineer uses AI tools daily: Copilot, Claude, and ChatGPT are as standard as a text editor. So when you put someone in a room and ask them to reverse a binary tree from memory, what exactly are you measuring?
I've been on both sides of this: as a contractor being interviewed and as someone involved in hiring decisions. And I think the industry needs to rethink this from scratch.
The Leetcode Problem
Leetcode-style interviews were always a bit suspect. They test a narrow skill: algorithmic problem-solving under pressure. This has limited overlap with actual day-to-day engineering work. Most of us spend our time reading existing code, designing systems, debugging production issues, and communicating with stakeholders. Not implementing quicksort on a whiteboard.
But at least there was an argument for it: if someone can solve a hard algorithm problem, they probably have solid fundamentals. It was a noisy signal, but it was a signal.
AI has killed even that argument. Any candidate with access to an AI assistant can solve most leetcode problems in minutes. And banning AI tools during an interview just creates an artificial environment that doesn't reflect how anyone actually works. You're testing their ability to perform in conditions they'll never face on the job.
Three Approaches I've Seen
From my contracting experience across different companies, I've seen three emerging approaches to this problem. Each has trade-offs.
Let Them Use AI
The most progressive companies I've worked with now explicitly allow AI tools in technical interviews. They provide the candidate with a problem (usually something closer to a real task than a leetcode puzzle) and say "use whatever tools you normally use."
What they're watching for changes completely. Instead of "can you write this algorithm?", it becomes "how do you break down a problem? What prompts do you write? How do you evaluate and modify the AI's output? Do you spot the bugs it introduces?"
This approach tests the skill that actually matters in 2026: the ability to direct AI tools effectively and critically evaluate their output. The engineers who are brilliant at this are incredibly productive. The ones who can't are just copying and pasting code they don't understand.
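To make that concrete, here's the kind of subtle bug a candidate should be able to catch. This is a contrived Python sketch, not a real interview problem: a hypothetical AI-generated helper that looks right and works for typical inputs, but mishandles an edge case.

```python
# Task: return the last `n` lines of a log, newest first.

def last_lines(log: str, n: int) -> list[str]:
    """Hypothetical AI-generated version: looks fine, usually works."""
    lines = log.splitlines()
    # Bug: when n == 0, lines[-0:] is lines[0:], i.e. the WHOLE list,
    # so asking for zero lines returns everything.
    return lines[-n:][::-1]

def last_lines_fixed(log: str, n: int) -> list[str]:
    """Reviewed version: compute the start index explicitly."""
    lines = log.splitlines()
    # max() handles both n == 0 (empty result) and n > len(lines)
    # (return all lines) without relying on negative-index slicing.
    start = max(len(lines) - n, 0)
    return lines[start:][::-1]

log = "error: disk full\nretrying\nok"
print(last_lines(log, 0))        # → ['ok', 'retrying', 'error: disk full'] (wrong!)
print(last_lines_fixed(log, 0))  # → []
print(last_lines_fixed(log, 2))  # → ['ok', 'retrying']
```

The point isn't this particular bug; it's that the valuable candidate is the one who reads the generated slice expression, asks "what happens at n = 0?", and tests it, rather than shipping whatever the model produced.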
The Trial Day
Some companies have moved to paid trial days. You come in, join the team for a day, and work on a real (or realistic) problem. You use your normal tools, your normal workflow, and interact with the actual team.
This is probably the most accurate way to assess someone. You see how they communicate, how they handle ambiguity, how they debug, and how they collaborate. It's expensive in terms of everyone's time, but the signal quality is miles ahead of a 45-minute coding challenge.
The downside is scalability. You can't give a trial day to 50 candidates. This works best as a final stage after you've already narrowed the field. There's also the question of payment: how do you compensate candidates fairly for a full day of their time?
System Design and Discussion
The third approach leans heavily on system design interviews and technical discussions: no coding at all, just conversations about architecture, trade-offs, and decision-making.
"How would you design a notification system for a million users? What database would you choose and why? Walk me through a time you had to refactor a critical system."
AI can't fake this. You either have the experience and judgement or you don't. These conversations also reveal communication skills, which matter enormously in practice but are invisible in a coding test.
Soft Skills Matter More Than Ever
What's notable about all three approaches is what they reveal that a coding test never could: how someone thinks, communicates, and makes decisions under real conditions. And here's where the real shift happens.
The uncomfortable truth: if AI handles more of the routine coding, the differentiating skills for engineers are increasingly non-technical. Communication, project management, stakeholder alignment, the ability to understand a business problem and translate it into technical requirements. These are the things that separate great engineers from average ones.
Yet most interview processes spend 90% of their time on technical assessment and maybe 10% on "culture fit" (which usually means "would I enjoy having a beer with this person?").
I think the split needs to flip. Spend more time understanding how a candidate thinks, communicates, and makes decisions. Spend less time watching them write code that an AI could generate in seconds.
What This Means for Candidates
If you're interviewing right now, here's my honest advice: get good at working with AI, not despite it. Practice using AI tools to solve real problems, then practice explaining your approach clearly.
The candidates who stand out aren't the ones who memorised every sorting algorithm. They're the ones who can take a vague requirement, break it down, use AI to accelerate the implementation, spot the issues in what the AI produces, and explain the whole thing clearly to a non-technical stakeholder. In an interview, that might mean saying something like "I'd ask the AI for a first pass at the solution, but I'd verify this part because..." or walking the interviewer through your reasoning about what you changed and why.
That's the job in 2026. Interview prep should reflect that.
Conclusion
The engineering interview is overdue for a rethink. Leetcode was always an imperfect proxy, and AI has made it almost meaningless. The companies that adapt by testing real-world skills with real-world tools will hire better engineers. The ones that cling to whiteboard puzzles will keep selecting for a skill that matters less every year.
If you're involved in hiring, ask yourself: does our interview process test how people actually work? If the answer is no, start small. Try replacing one coding round with a 30-minute system design conversation. See what you learn about how candidates actually think. You might be surprised.