1. Use recruiters and network: Wading through the sheer volume of applications was nasty even before COVID; I don't even want to imagine what it's like now. A good recruiter or a recommendation can save a lot of time.
2. Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.
3. Put the candidate at ease - nervous people don't interview well, which is another problem with non-trivial tasks in technical interviews. I rarely do any live coding; if I do, it's pairing, and for management roles, e.g. to probe how they handle disagreement and such. Developers mostly shine when not under pressure, so I try to see that side of them.
4. Talk through past and current challenges, technical and otherwise. This is by far the most powerful part of the interview, IMHO. Had a bad manager? Cool, what did you do about it? I'm not looking for them to have resolved whatever issue we talk about; I'm trying to understand who they are and how they'd fit into the team.
I've been using this process for almost a decade now, and currently don't think I need to change anything about it with respect to LLMs.
I kinda wish it was more merit-based, but I haven't found a way to do that well yet. Maybe it's me, or maybe it's just not feasible. The work I tend to be involved in seems way too multifaceted for a single standard test to seriously predict how well a candidate will do on the job. My workaround is to rely on intuition for the most part.
So far, everyone who elected to use GPT did much worse. They did not know what to ask or how to ask it, and did not "collaborate" with the AI. So far my opinion is that if you have a good interview process, you can clearly see who the good candidates are, with or without AI.
As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing and then watch their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.
- share your screen
- download/open the coding challenge
- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare
My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
Once you get to the interview process, it's very clear if someone thinks they can use AI to help with it. I'm not going to sit here while you type my question into OpenAI and try to BS a meaningful response 30 seconds later.
AI-proof interviewing is easy if you know what you're talking about. Look at the candidate's resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS, whether AI is behind it or not.
AI doesn't just change the interviewing game by making it easy to cheat on these interviews; it should be changing your hiring strategy altogether. If you're still thinking in terms of optimizing for cogs, you're missing the boat. Unless you're hiring for a very short-term gig, what you need now is someone with high creative potential and great teamwork skills.
And as far as I know there is no reliable template interview for recognizing someone who's good at thinking outside the box and who understands people. You just have to talk to them: talk about their past projects, their past teams, how they learn, how they collaborate. And then you have to get good at understanding what kinds of answers you need for the specific role you're trying to fill, which will likely be different from role to role.
The days of the interchangeable cog are over, and with them easy answers for interviewing.
I think every interviewer and hiring manager ought to know or be trained on these tools; your intuition about a candidate's behaviour isn't enough. Otherwise, we will soon reach a tipping point where honest candidates are at a severe disadvantage.
This works especially well if I don't know the area they're strongest in, because then they get to explain it to me. If I don't understand it then it's a pretty clear signal that they either don't understand it well enough or are a poor communicator. Both are dealbreakers.
Otherwise, for me, the most important thing is gauging: Aptitude, Motivation and Trustworthiness. If you have these three attributes then I could not possibly give a shit that you don't know how kubernetes operators work, or if you can't invert a binary tree.
You'll learn when you need it; it's not like the knowledge is somehow esoteric or hidden.
I want to see how the candidate reasons about code. So I try to ask practical questions and treat them like pairing sessions.
- Given a broken piece of code, can you find the bug and get it working?
- Implement a basic password generator, similar to 1Password (with optional characters and symbols); see the sketch below.
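For a sense of scale, here's a minimal sketch of the kind of answer the password-generator exercise is after. This is my own illustration, not the actual exercise; the class name and option flags are invented:

```java
import java.security.SecureRandom;

// Hypothetical sketch: a basic generator with optional digits and symbols.
public class PasswordGenerator {
    private static final String LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    private static final String DIGITS  = "0123456789";
    private static final String SYMBOLS = "!@#$%^&*()-_=+";

    private final SecureRandom random = new SecureRandom();

    // Build the candidate alphabet from the enabled options, then sample from it.
    public String generate(int length, boolean useDigits, boolean useSymbols) {
        StringBuilder alphabet = new StringBuilder(LETTERS);
        if (useDigits)  alphabet.append(DIGITS);
        if (useSymbols) alphabet.append(SYMBOLS);

        StringBuilder password = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            password.append(alphabet.charAt(random.nextInt(alphabet.length())));
        }
        return password.toString();
    }

    public static void main(String[] args) {
        System.out.println(new PasswordGenerator().generate(16, true, true));
    }
}
```

The exact code matters less than whether the candidate can explain the choices, e.g. why the alphabet is built conditionally, or what SecureRandom buys over Math.random().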
If you can reason about code without an LLM, then you’ll do even better with an LLM. At least, that’s my theory.
I never ask trick questions. I never pull from Leetcode. I hardly care about time complexity. Just show me you can reason about code. And if you make some mistakes, I won’t judge you.
I’m trying to be as fair as possible.
I do understand that LLMs are part of our lives now. So I’m trying to explore ways to integrate them into the interview. But I need more time to ponder.
The candidate’s first response? “Memory updated”. That led to some laughs internally and then a clear rejection email.
To then select a good developer, I'd test communication skills. Have them communicate the pros/cons of several presented solutions, and have them critique their own solution. To ensure they don't have canned answers, I might just swap the problem/solutions for the in-person bit. The problem they actually solved and how they did it isn't very important. It's whether they could read and understand the problem, formulate multiple solutions, and describe why one would be chosen over another. Being presented with a novel problem and asked on the spot to analyze it is a good exercise for developers (assuming software development is the job we're discussing here).
Just take the time to talk to people. The job is about reading and writing human language more than computer programming, especially with the arrival of AI, when every junior developer is now micromanaging even more junior AI colleagues.
The biggest thing I've noticed is that take-home challenges have lost all value, since GPT can plausibly solve almost anything you throw at it, and the result doesn't give you any indication of how the candidate thinks.
And to be fair, I want a candidate that uses GPT / Cursor / whatever tools get the job done. But reading the same AI solution to a coding challenge doesn't tell me anything about how they think or approach problems.
Not being able to code is by far the easiest failure mode to deal with, and I can deal with it more quickly by looking at resumes and swiftly firing people who outright lied about their abilities.
What is much harder to detect is the person who gives up at the first sign of trouble. Or someone who likes to over abstract everything. Or someone who likes to spend all day nitpicking PRs.
The absolute most damaging employee is the technical tornado midlevel who has prolific output and is good at figuring out how to get PRs through the code review process.
I can only begin to imagine the kind of damage a person like that could do with an LLM, an inattentive manager, and a buddy willing to rubber-stamp PRs.
And I had made it clear that they should use their own words.
It happened twice that the candidate on the other side was clearly typing during the interview, pausing for a second or two, and then reading from the screen. That's very obvious as of today, but I can see it becoming a problem as AI responses get faster and voice recognition improves (so no typing is needed).
The interview is a chance to see how a candidate performs in a work-like environment. Let them use the tools they will use on the job and see how well they can perform.
Even for verbal interviews, if they are using ChatGPT on the side and can manage the conversation satisfactorily then more power to them.
And generally, the more junior people are just completely lost without it. They've become so dependent on it, they can't even google anything anymore. Their search queries are very weirdly conversational questions and the idea of reading the docs for whatever language or library they're using is totally foreign to them.
I think it's really hampering the growth of junior devs - their reasoning and thought processes are just totally tuned to this conversational form of copy and paste programming, and the code is really bad. I think the bottom half of programmers may just LLM themselves out of any sort of job because they lose the ability to think and problem solve... Kinda sad imo.
Ask even the shallowest question and they are lost, and just start regurgitating what feels like very bad prompt-based responses.
At that point it's just about closing down the interview without being unprofessional.
This is borne out by results downstream with clients. No client we've sent more than a couple of people to has ever had concerns about quality, so we're fairly confident that we are in fact detecting the cheating that does happen with reasonable consistency.
I actually just looked at our data a few days ago to see how candidates who listed LLMs or related terms on their resume did on our interview. On average, they did much worse (about half the pass rate, and double the hard-fail rate). I suspect this is a general "corporate BS factor" and not anything about LLMs specifically, but it's certainly relevant.
- adapted Java's Regex engine to work on streams of characters
- wrote a basic parametric geometry engine
- wrote a debugger for an async framework
- did innovative work with respect to whole-codebase transformation using macros
Among other things.
As for ChatGPT in the context of an interview, I'd only use it if I were asked to do changes on a codebase I don't know in limited time.
It was a really great format and I think one that creates better separation between good candidates and great candidates because it is more open-ended and collaborative. One of my favorite technical interviews I've engaged in.
I think that now, with AI coding assistants even more integral than they were in those early days 18 months ago, this approach is more relevant than ever, since it gives insight into how efficiently a candidate can review AI-generated code for correctness, defects, performance issues, and other gaps in quality.
I liked this format so much that I ended up creating a small open-source tool for it to make it easier to manage this process: https://coderev.app (https://github.com/CharlieDigital/coderev)
(The interviewer had to create a private GH repo and the repo itself didn't support commenting inline; I took my notes in a text file and reviewed it interactively with the interviewer).
If the AI is so good at it, why are we still hiring humans to do the job? It just shows that the interview process is not measuring the right thing to begin with.
And by questions, I don't mean "is it better to use a list or a set?", but something like: "you have an application like this; how can you improve it to perform X?"
This project is designed to evaluate your ability to:
- Deconstruct complex problems into actionable steps.
- Quickly explore and adopt new frameworks.
- Implement a small but impactful proof of concept (PoC).
- Demonstrate coding craftsmanship through clean, well-architected code.
Feel free to use any tools, libraries, frameworks, or LLMs during this exercise. Also, you're welcome to reach out to us at any time with questions or for clarification. We estimate this project will take approximately 5–7 hours. If you find that it requires more time, let us know so we can adjust the scope.
I used LLM-as-a-junior-dev to generate 95+% of the code and documentation. I'm just an average programmer, but I tried to set a bar such that, if I were on the other side of the table, I'd hire anyone who demonstrated the quality of the output I submitted.
- The 5-7 hour estimate was exceeded (however, I was the first one through this exercise).
- IMHO the quality of the submission could NOT have been met in less time.
- They had 3 tasks/projects:
- a data science project,
- a CLI based project and
- a web app
- They wanted each to be done in a different language.
- I submitted my solution within 38 hours of receiving the assignment.
- In any other world, the intensity of this exercise would cause a panic-attack/burn-out.
- I slept well (2 nights of sleep), took care of family responsibilities and felt good enough to attack the next work-day.
I've been on both sides of the table in many interviews. This was by far the most fun, and one I'll replicate every chance I get.
[EDITS]: Formatting and typos.
Honestly, if I could trust that companies won't run my conversation through 20 different ridiculous filters, I would probably argue that my buddy is out of line. As it stands, however, he is merely leveling the playing field. But, just like with WFH, the management class does not like that imposition one bit.
The job you are therefore hiring for is now trivial. If it weren't, no amount of AI could pass your interview process.
If I was incompetent, I could've shoved the problem into o1 on ChatGPT and probably solved the problems, but I wouldn't have been able to provide insight into why I made the design choices I made and how I optimized my solutions, and that would've ultimately gotten me thrown out of the candidate pool.
I had a few coding challenges, all were preinterview and submitted online or shared in a private repo. One company had an online quiz that was actually really interesting to take, the questions were all multiple choice but done really well to tease out someone's experience in a few key areas.
For what it's worth, I don't use LLMs, and the interview loop went about as I'd expect in a tough job market.
What I usually do is a case study whose answer I also don't know at the start of the interview. The case studies don't imagine spherical cows and aren't usually leetcode-style; it's a matter of role-playing a problem. We brainstorm it together and I determine whether I can work with them.
I wouldn't say that if it weren't a pattern. So let's not pretend they're not cheaters. Call them out.
Non-tech roles are just a sea of prompt-generated answers; they don't tie into the applicants' experience and are usually about 200 words of waffle. If you're applying for something, type out a few sentences and then give them to a prompt to refine. Don't just paste the question in.
For tech roles we focus on the combination of soft skills and technical skills, so we've gone back to a 'pair' programming exercise and a whiteboard architecture exercise. In reality, it's mostly a spectator sport; we nudge them forward if needed.
In the Java exercise we're looking for them to cover the basics of debugging a problem, writing a test, and mocking out some services (a sketch of what that looks like follows below). I would say 50% are unable or unwilling to write a test to prove the error, or don't know how to mock out a service.
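To make that concrete, here's a minimal, hypothetical sketch of the "write a test and mock out a service" part, assuming JUnit 5 and Mockito; the RateService and PriceCalculator names are invented for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

// Invented example: a calculator depending on a service we mock out.
class PriceCalculatorTest {

    interface RateService {
        double taxRateFor(String region);
    }

    static class PriceCalculator {
        private final RateService rates;
        PriceCalculator(RateService rates) { this.rates = rates; }
        double total(double net, String region) {
            return net * (1 + rates.taxRateFor(region));
        }
    }

    @Test
    void appliesTheMockedTaxRate() {
        RateService rates = mock(RateService.class);
        when(rates.taxRateFor("PL")).thenReturn(0.23); // stub the dependency

        double total = new PriceCalculator(rates).total(100.0, "PL");

        assertEquals(123.0, total, 1e-9);
        verify(rates).taxRateFor("PL"); // prove the collaborator was actually used
    }
}
```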
In the whiteboard exercise we're looking for them to explain a system they've worked on in the past. We question some of their decisions and see how they handle themselves. Not many get defensive (though some do).
What I liked about that process is that it relied less on their ability to suss out a solution to some problem they'll never have to solve on the job and focused more on average activities. Sometimes I'd get a candidate who would go "wait, is this a trap?" and start asking a lot of questions - good! Now I got to see them refine requirements.
Having them review a PR is a good exercise too, you can see how they are at feedback.
I've seen a few things change.
The interviews themselves are for the most part unchanged. Occasionally I see someone who seems to be using AI during the interview. It's sort of obvious: they first give a vague answer, as if they're repeating the question. Then they answer while not maintaining eye contact. When asked a follow-up or a "why" question, they fall apart. They do poorly.
I find more people pass the written screen nowadays and then bomb the interview. I'm guessing they use AI. C'est la vie.
It seems that a lot more people are looking for work now than 3+ years ago, yet for roles that require top-notch coding ability, there are relatively few people on the market. I'm guessing that great people are staying at their jobs longer now. Makes sense.
After a candidate gives an answer, probe. Ask why they think that. Ask what alternatives they might consider. Watch their body language and how quickly they answer.
For context, I hire very experienced developers in Poland for very demanding remote roles in the US.
The story is completely different for airgapped dark room jobs, but if you know you know.
When I was part of interviews on the other side for my former employer, I encountered multiple candidates who appeared to be using AI assistance without notifying the interviewers ahead of time or at all.
Only the trivial problems. We don't use AI during interviews, but many candidates try, and it's always obvious: a delay after any question; a textbook-perfect initial answer; absolutely nothing when asked to go deeper on a specific dimension.
It's nice because interviews that are scheduled for an hour are only lasting ~20 minutes in these situations, and we can cut them short.
Everyone hired after that is more suspect, and if they screw up too much or don’t perform well we just fire them quickly during the probation period, whereas previously it was rare for people to get fired during the probation period.
In the last couple of years I have seen a lot more people ace the test and then not do very well during the actual interview. Take-home exams feel like they would always be ineffective now.
I think it ultimately comes back to impact (like always) which has remained largely unchanged.
TBH it's very hard, in general, to assess someone's skills in ~1 hour, given the diverse set of problems we face every day. People and companies often focus too much on "previous relevant experience" (do you know this?) instead of thought process and depth of understanding (how much have you understood of what you have done). The latter gives more insight into someone's personality, attitude, and ability to learn new things, and it's harder to cheat on: you can memorize "Cracking the Coding Interview," be a hero on leetcode, and still have no idea how to write good software, make decisions, or work in a team :)
If they can talk through the technology and code fluently, I honestly don't care how they do the work. I feel like the ability to communicate is a far more important skill than the precise technology.
This of course presumes you have a clue about the technology you're hiring for.
I feel that for smaller things like syntax it makes perfect sense, but for larger things with slightly higher complexity it becomes a bit grey. I liken it personally to writing: when I write things down as I'm trying to work things out, or even trying to learn something, I find I retain it so much better and have a clearer picture in my mind of what's going on. That might just be a personal preference for learning, but if I copy straight from Claude, I know 100% that I'm not going to remember anything about it the next day.
You can see things in the emails like:
"I provided a concise, polite response from a candidate to a job rejection, expressing gratitude, a desire for feedback, and interest in future opportunities."
The main takeaway is that if you design your interview questions to match the actual skill you're looking for, AI won't be an issue, because it doesn't have those skills yet. In short: ask questions that are straightforward on the surface but deep beneath, with trade-offs that must be weighed by asking the interviewer questions.
It's better to know when to use a linked list than how to make one (because I'd just use the one in the library).
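To make the "when" concrete, here's a small illustrative sketch using the library's structure instead of a hand-rolled one (the JobQueue name is invented):

```java
import java.util.Deque;
import java.util.LinkedList;

// Reach for the library's LinkedList when you need cheap insertion/removal
// at both ends; no need to implement the node-juggling yourself.
public class JobQueue {
    public static void main(String[] args) {
        Deque<String> jobs = new LinkedList<>(); // "the one in the library"
        jobs.addLast("build");
        jobs.addLast("test");
        jobs.addFirst("hotfix"); // O(1) at the head; an ArrayList would shift everything

        while (!jobs.isEmpty()) {
            System.out.println("running " + jobs.removeFirst());
        }
    }
}
```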
So the candidate can prompt well; good. But how much of the knowledge can they apply to a problem, or are they just masters of HackerRank?
But more often than not, interviewers are lazy and just use canned HackerRank-style questions; or if it's not laziness, it's being too overworked to craft a really good interview.
Even remotely, normally a coding interview isn't a candidate typing things for 45 min on a screen. There are interactions, follow-up questions, discussions about trade-offs and so on... I suppose it's possible for a good candidate to cheat and get some extra points, but the interview isn't broken yet.
You could also let the candidate use AI, and still gather all the relevant signals.
I had this one interview where I had to remove nodes from a linked list. Pretty trivial, right? It is, but as I was writing out the solution, probably for the 200th time, I thought, "I've never used a linked list, ever."
And I thought, I just can't do it anymore. I can't reverse one single binary tree more. I can't listen to some CTO tech bro in his early 30s explain to me why I need to be in an office he'll never set foot in for the "company culture".
I've had so many interviews just like this. I've done 7-hour on-sites where you break down algorithms, system design, app development. It's brutal out there; no one seems to have any idea what they're looking for, so they just put you through the wringer.
So now I'm not interviewing. I'm happy and content in my bank programming job. I'm probably not a very good programmer; that's okay. I'm a wonderful partner, family member, musician. I'll get by, hopefully, for another decade until everything collapses.
The core conclusion is that companies need to do in-person interviews now. There's no other way to prevent cheating.
Even if you're using ChatGPT heavily it's your job to ensure it's right. And you need to know what to ask it. So you still need all the same skills and conceptual understanding as before.
I mean, I didn't observe interviews change after powerful IDEs replaced basic text editors.
My company at the time had been using the same coding exercise for years, and many candidates inevitably published their code to GitHub, so Copilot was well trained on the exercise.
I had a candidate who used Copilot and still flubbed the interview, ignoring perfectly good suggestions from the LLM.
Screen share to avoid cheating via AI, same as we were doing before AI when people could get friends or Google to help them cheat.
In my unfortunate experience, candidates who covertly rely on LLMs also tend to have embellished resumes, so I try to root them out before asking technical questions.
Last time I interviewed I spent about half of it standing at a whiteboard.
Which is the case for >90% of the companies I interviewed with (big and small)
That seemed to thwart AI use, or at least one-shotting, and to require understanding of and experience with working in an organization.
I liked that.
It is really important to watch people code.
Anyone can fake an abstract overview.
Two main things I've seen:
1. Recommendations are a heck of a lot more important
2. Internal applicants are suddenly at a big advantage
The gap between interview and actual on-the-job duties is very wide at many (delusional) companies.
It's a sad world out there these days.
As a result, several of my friends who assist in hiring at their companies have already returned to "on-site" interviews. The funny thing about this is that these are 100% remote jobs - but the interviews are being conducted at shared workspaces. This is what happens when the level of trust goes down to zero due to bad actors.
How good one is at understanding a problem (its scope and source), and how good they are at designing a solution that is simple yet will tackle other "upcoming" problems (configurable vs. hard-coded, etc.).
This is what one should care about.
* Stack Overflow raised the same question in its early days, and so did Google. Trust me, I am old enough to remember this.
Online, you don't know if the person you're interviewing is an AI or not.
They could have an AI resume, AI persona, AI profile; then someone who looks like that shows up, and it could be a deepfake. They do the coding challenges, you hire the person remotely, and they continue to do great work with the AI, but actually this person doesn't exist; they just created 100 such people, and they're all AI agents.
It sucks if you want to hire humans. And it sucks for the humans. But the work gets done for the same price and more reliably. So dunno
How LLMs will evaluate a skill they are making obsolete is a question I am not sure I understand.
I don't know, but I'll always remember the funniest thing I noticed during my career in England...
A company called TripAdvisor, based in a very, very small town where I was at the time a senior dev working on my own things, had never reached out. Yet I saw their ads and eventually an actual article in a newspaper where they were bragging about their latest hire. Let's call him Pablo. He had apparently aced every single technical interview question, so they hired him after interviewing tens of people. They were so happy with the hire that the article was built around him and the "curiosity" that they were the first company to hire him after he had failed something like 50 interviews.
Obviously they couldn't believe how lucky they were to have finally found someone who completed all the technical tests perfectly.
Now I have nothing against Pablo, and I rooted for him when I read the article. But I found it hilarious, and still do almost a decade later, that this top-tier company, based in a university town with perhaps the most famous university, had not realised they had simply overfitted for someone who could answer their leetcode selection perfectly. Not only did they not realise this, they then commissioned the article.
Eventually they reached out for an interview with me, where the recruiter promised there would be no test; then I was "surprised by one" in a room with a manager who hadn't had time to read my exemption (which is fine), but when he walked in and saw I hadn't done it, I was "walked out". The whole interview took less than about 10 minutes, when I was the most qualified senior developer for hundreds of miles who was available at the time. No, I'm honestly not trying to brag; I'm just saying the town was so small there couldn't have been more during that short period I was available.
I know this reads bitter (my life is great now); I just remember it because, at the time, I was at my peak and would have accepted the job if offered, but I was walked out within 10 minutes.
Honestly, I'm just sharing this insight. The moral of the story, for me, is that companies never were great at hiring, and if anything the advent of LLMs might actually improve things as LLMs start to assess people based on their profiles and work? One can hope. I don't; I want an edge in this market with my company.
Just ask them to add a small feature or make a small change after they present the initial code (without AI help). That makes it really easy to see who doesn't understand the code they are presenting.
I don’t care how much AI you use if you understand the code that it writes and can build on top of it.
In-person dialogue with multiple team members and a sufficiently complex take-home assignment remains a pretty good method. LLMs are excellent API docs and should not be avoided.
Who is everyone?