HACKER Q&A
📣 ramesh31

What is interviewing like now with everyone using AI?


Have you gone back to in-person whiteboards? More focus on practical problems? I really have no idea how the traditional tech interview is supposed to work now when problems are trivially solvable by GPT.


  👤 fhd2 Accepted Answer ✓
The last time I used a leetcode-style interview was in 2012, and it resulted in a bad hire (who just happened to have trained on the questions we used). I've hired something like 150 developers so far; here's what I ended up with after a few years of trial and error:

1. Use recruiters and your network: Wading through the sheer volume of applications was nasty even before COVID; I don't even want to imagine what it's like now. A good recruiter or a recommendation can save a lot of time.

2. Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.

3. Put the candidate at ease - nervous people don't interview well, which is another problem with non-trivial tasks in technical interviews. I rarely do any live coding; when I do, it's pairing, and for management roles, e.g. to probe how they handle disagreement and such. Developers mostly shine when not under pressure, so I try to see that side of them.

4. Talk through past and current challenges, technical and otherwise. This is by far the most powerful part of the interview IMHO. Had a bad manager? Cool, what did you do about it? I'm not looking for them having resolved whatever issue we talk about, I'm trying to understand who they are and how they'd fit into the team.

I've been using this process for almost a decade now, and currently don't think I need to change anything about it with respect to LLMs.

I kinda wish it was more merit-based, but I haven't found a way to do that well yet. Maybe it's me, or maybe it's just not feasible. The work I tend to be involved in seems way too multi-faceted for a single standard test to seriously predict how well a candidate will do on the job. My workaround is to rely on intuition for the most part.


👤 sergioisidoro
I've let people use GPT in coding interviews, provided that they show me how they use it. In the end, I'm interested in how a person solves a problem and thinks about it. Do they just accept whatever crap the GPT gives them, or can they take a critical approach to it?

So far, everyone who elected to use GPT did much worse. They did not know what to ask or how to ask it, and did not "collaborate" with the AI. So far my opinion is that if you have a good interview process, you can clearly see who the good candidates are, with or without AI.


👤 twoparachute45
My company, a very very large company, is transitioning back to in-person-only interviews due to rampant cheating during interviews.

As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing and then watch their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.


👤 ryan-duve
My startup got acquired last year so I haven't interviewed anyone in a while, but my technical interview has always been:

- share your screen

- download/open the coding challenge

- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare

My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.


👤 explorigin
Part of my resume review process is trying to decide if I can trust the person. If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate.

Once you get to the interview process, it's very clear if someone thinks they can use AI to help with it. I'm not going to sit here while you type my question into OpenAI and try to BS a meaningful response 30 seconds later.

AI-proof interviewing is easy if you know what you're talking about. Look at the candidate's resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.


👤 lolinder
The traditional tech interview was always designed to optimize for reliably finding someone who was willing to do what they were told even if it feels like busywork. As a rule someone who has the time and the motivation to brush up on an essentially useless skill in order to pass your job interview will likely fit nicely as a cog in your machine.

AI doesn't just change the interviewing game by making it easy to cheat on these interviews; it should be changing your hiring strategy altogether. If you're still thinking in terms of optimizing for cogs, you're missing the boat. Unless you're hiring for a very short-term gig, what you need now is someone with high creative potential and great teamwork skills.

And as far as I know there is no reliable template interview for recognizing someone who's good at thinking outside the box and who understands people. You just have to talk to them: talk about their past projects, their past teams, how they learn, how they collaborate. And then you have to get good at understanding what kinds of answers you need for the specific role you're trying to fill, which will likely be different from role to role.

The days of the interchangeable cog are over, and with them easy answers for interviewing.


👤 shihab
To get an idea of just how advanced cheating tools have become, take a look here:

https://leetcodewizard.io/

I think every interviewer and hiring manager ought to know about or be trained on these tools; your intuition about a candidate's behaviour isn't enough. Otherwise, we will soon reach a tipping point where honest candidates are at a severe disadvantage.


👤 ktallett
The key is having interviewers who know what they are talking about, so in-depth, meandering discussions can be had about personal and work projects, which usually makes it clear whether the applicant knows what they are talking about. Leetcode was only ever a temporary interview technique, and this 'AI' prominence in the public domain has simply sped up its demise.

👤 dijit
I've always just tried to hold a conversation with the candidate about what they think their strengths and weaknesses are, with a little probing.

This works especially well if I don't know the area they're strongest in, because then they get to explain it to me. If I don't understand it then it's a pretty clear signal that they either don't understand it well enough or are a poor communicator. Both are dealbreakers.

Otherwise, for me, the most important thing is gauging: Aptitude, Motivation and Trustworthiness. If you have these three attributes then I could not possibly give a shit that you don't know how kubernetes operators work, or if you can't invert a binary tree.

You'll learn when you need it; it's not like the knowledge is somehow esoteric or hidden.


👤 vrosas
As someone currently job searching it hasn’t changed much, besides companies adding DO NOT USE AI warnings before every section. Even Anthropic forces you to write a little “why do you want to work here DO NOT USE AI” paragraph. The irony.

👤 meter
For the time being, I’ve banned LLMs in my interviews.

I want to see how the candidate reasons about code. So I try to ask practical questions and treat them like pairing sessions.

- Given a broken piece of code, can you find the bug and get it working?

- Implement a basic password generator, similar to 1Password (with optional characters and symbols); a rough sketch of what I mean follows
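
Something like this, as a minimal sketch (the function name and option flags are my own illustrative choices, not the exact exercise):

  import secrets
  import string

  def generate_password(length=16, use_digits=True, use_symbols=True):
      """Build a random password from letters plus optional digits and symbols."""
      alphabet = string.ascii_letters
      if use_digits:
          alphabet += string.digits
      if use_symbols:
          alphabet += "!@#$%^&*-_=+"
      # secrets.choice draws from a cryptographically secure RNG, unlike random.choice
      return "".join(secrets.choice(alphabet) for _ in range(length))

  if __name__ == "__main__":
      print(generate_password())                   # letters, digits, and symbols
      print(generate_password(use_symbols=False))  # letters and digits only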

If you can reason about code without an LLM, then you’ll do even better with an LLM. At least, that’s my theory.

I never ask trick questions. I never pull from Leetcode. I hardly care about time complexity. Just show me you can reason about code. And if you make some mistakes, I won’t judge you.

I’m trying to be as fair as possible.

I do understand that LLMs are part of our lives now. So I’m trying to explore ways to integrate them into the interview. But I need more time to ponder.


👤 MacsHeadroom
Changed enormously. Both resumes and interviews are effectively useless now. If our AI agents can't find a portfolio of original work that is nearly exactly what we want to hire you for, then you aren't ever going to hear from us. If you are one of the 1 in 4,000 applicants who gets an interview, then you're already 70% likely to get an offer and the interview is mostly a formality.

👤 acwan93
I don’t know the answer, but I’d like to share that I asked a simple question about scheduling a phone interview to learn more about a candidate.

The candidate’s first response? “Memory updated”. That led to some laughs internally and then a clear rejection email.


👤 alkonaut
It's not the solution itself that is interesting to me, it's first finding out whether the person can go through the motions of solving it. Like reading instructions, submitting solutions etc. It filters out those who can't code at all or who can't read instructions. A surprisingly large chunk. If the person also pipes the problem through an LLM, good.

To then select a good developer I'd test communication skills. Have them communicate what the pros/cons of several presented solutions are. And have them critique their own solution. To ensure they don't have canned answers, I might just swap the problem/solutions for the in-person bit. The problem they actually solved and how they did it isn't very important. It's whether they could read and understand the problem, formulate multiple solutions, describe why one would be chosen over another. Being presented with a novel problem and being asked on the spot to analyze it is a good exercise for developers (Assuming software development is the job we're discussing here).

Just take the time to talk to people. The job is about reading and writing human language more than computer programming. Especially with the arrival of AI when every junior developer is now micro managing even more junior AI colleagues.


👤 iExploder
Name of the game now is not to get fired at all costs and weather the storm until the dust settles...

👤 themanmaran
On our side we've transitioned to only in person interviews.

The biggest thing I've noticed is that take-home challenges have lost all value, since GPT can plausibly solve almost anything you throw at it, and the result doesn't give you any indication of how the candidate thinks.

And to be fair, I want a candidate that uses GPT / Cursor / whatever tools get the job done. But reading the same AI solution to a coding challenge doesn't tell me anything about how they think or approach problems.


👤 sarchertech
I do not understand the obsession with proving beyond a shadow of a doubt that an interviewee can code.

Not being able to code is by far the easiest failure mode to deal with, and I can deal with it by looking at resumes and quickly firing people who outright lied about their abilities.

What is much harder to detect is the person who gives up at the first sign of trouble. Or someone who likes to over abstract everything. Or someone who likes to spend all day nitpicking PRs.

The absolute most damaging employee is the technical tornado midlevel who has prolific output and is good at figuring out how to get PRs through the code review process.

I can only begin to imagine the kind of damage a person like that could do with an LLM, an inattentive manager, and a buddy willing to rubber-stamp PRs.


👤 delduca
I recently reviewed a medium-complexity assignment—just questions, no coding—and out of six candidates, I only approved one. The others were disqualified because their answers were filled with easily identifiable ChatGPT-generated fluff.

And I had made it clear that they should use their own words.


👤 n0rdy
Here where I live, home assignments with the follow-up tech discussion are way more common than the leetcode-like interviews. Therefore, I can't say that the process has already changed dramatically. Yes, I did review a couple of home assignments that had all the signs of being completely AI-generated. But it all became clear during the presentation of the solution, "why"-like questions, and extending the scope of the task to discuss the improvements. If the candidate could answer that, then AI was just a supplementary tool, which is great, as we also use Copilot, ChatGPT, and friends at work. If not, well, it's an obvious rejection.

It happened twice that the candidate on the other side was clearly typing during the interview, taking a pause for a second or two, and then reading from the screen. That's very obvious as of today, but I can see how it will become a problem one day as AI develops in terms of response speed and better voice recognition (so no typing needed).


👤 screaminghawk
I don't understand why an interviewer would ban the use of AI if they are allowed to use AI in the role.

The interview is a chance to see how a candidate performs in a work-like environment. Let them use the tools they will use on the job and see how well they can perform.

Even for verbal interviews, if they are using ChatGPT on the side and can manage the conversation satisfactorily then more power to them.


👤 mattbillenstein
I've only done a few interviews the past couple years, but I've asked people to turn off coding assistants and not use an LLM on my coding screen. I want to know how _they_ think and solve problems, not how the LLM does.

And generally, the more junior people are just completely lost without it. They've become so dependent on it, they can't even google anything anymore. Their search queries are very weirdly conversational questions and the idea of reading the docs for whatever language or library they're using is totally foreign to them.

I think it's really hampering the growth of junior devs - their reasoning and thought processes are just totally tuned to this conversational form of copy and paste programming, and the code is really bad. I think the bottom half of programmers may just LLM themselves out of any sort of job because they lose the ability to think and problem solve... Kinda sad imo.


👤 NomDePlum
It can be weird. I've seen some decent resumes where, in the actual interview, the candidate obviously has zero demonstrable knowledge of what's on them.

Ask even the shallowest question and they are lost and just start regurgitating what feels like very bad prompt based responses.

At that point it's just about closing down the interview without being unprofessional.


👤 rachofsunshine
We haven't seen major issues with AI with candidates on camera. The couple that have tried to cheat have done so rather obviously, and the problem we use is more about problem-solving than it is about reverse-a-linked-list.

This is borne out by results downstream with clients. No client we've sent more than a couple of people to has ever had concerns about quality, so we're fairly confident that we are in fact detecting the cheating that is happening with reasonable consistency.

I actually just looked at our data a few days ago to see how candidates who listed LLMs or related terms on their resume did on our interview. On average, they did much worse (about half the pass rate, and double the hard-fail rate). I suspect this is a general "corporate BS factor" and not anything about LLMs specifically, but it's certainly relevant.


👤 Xmd5a
Shouldn't a portfolio of personal projects be enough ? In the past couple years I:

- adapted Java's Regex engine to work on streams of characters

- wrote a basic parametric geometry engine

- wrote a debugger for an async framework

- did innovative work with respect to whole-codebase transformation using macros

Among other things.

As for ChatGPT in the context of an interview, I'd only use it if I were asked to do changes on a codebase I don't know in limited time.


👤 CharlieDigital
About 18 months ago, I interviewed with a YC startup and the first round was actually a code review. The material included an API with some SQL commands as well as a SQL schema. I pointed out missing validations, missing error handling, lack of indices, alternate data types, possible XSS vectors, etc.
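
To make the format concrete, here's a hypothetical review target in that spirit (my own invented endpoint and schema, not the actual exercise), with the kinds of findings noted inline:

  # Assumes a pre-existing app.db with a comments(user_id, text) table.
  import sqlite3
  from flask import Flask, request, jsonify

  app = Flask(__name__)

  @app.route("/users/<user_id>/comments", methods=["POST"])
  def add_comment(user_id):
      body = request.get_json()   # review: no check that a JSON body was actually sent
      text = body["text"]         # review: KeyError if "text" is missing; no length/content validation
      conn = sqlite3.connect("app.db")
      # review: string interpolation into SQL is an injection vector; use parameterized queries
      conn.execute(
          f"INSERT INTO comments (user_id, text) VALUES ('{user_id}', '{text}')"
      )
      conn.commit()               # review: no error handling, and the connection is never closed
      # review: echoing raw user input back unescaped is a potential stored-XSS vector,
      # and the accompanying schema would also want an index on comments.user_id
      return jsonify({"user_id": user_id, "text": text})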

It was a really great format and I think one that creates better separation between good candidates and great candidates because it is more open-ended and collaborative. One of my favorite technical interviews I've engaged in.

I think that now, with AI coding assistants becoming even more integral than those early days 18 months ago, this approach is more relevant than ever since it gives insight into how efficiently a candidate can quickly review AI generated code for correctness, defects, performance issues, and other gaps in quality.

I liked this format so much that I ended up creating a small open-source tool for it to make it easier to manage this process: https://coderev.app (https://github.com/CharlieDigital/coderev)

(The interviewer had to create a private GH repo and the repo itself didn't support commenting inline; I took my notes in a text file and reviewed it interactively with the interviewer).


👤 _sword
Even before LLMs were popularized, the shift to remote work made hiring awful in my experience. In finance roles, I had candidates who aced their tests and projects but then showed up to the job unable to competently use excel or write coherent sentences in English. Phone / zoom interviews all went fine, but clearly there was rampant cheating during remote projects.

👤 low_tech_punk
I'm actually glad AI use is revealing misalignment:

If the AI is so good at it, why are we still hiring a human to do the job? It just shows that the interview process wasn't measuring the right thing to start with.


👤 madduci
Do people still insist on coding tasks? Why not simply formulate questions that call for broader and deeper knowledge, which shows how far the candidate can go?

And by questions, I don't mean "is a list or a set better?", but something like: "you have an application like this, how can you improve it to perform X?"


👤 sramam
I recently completed a take-home assignment with the following instructions:

This project is designed to evaluate your ability to:

  - Deconstruct complex problems into actionable steps.
  - Quickly explore and adopt new frameworks.
  - Implement a small but impactful proof of concept (PoC).
  - Demonstrate coding craftsmanship through clean, well-architected code.
We estimate this project will take approximately 5–7 hours. If you find that it requires more time, let us know so we can adjust the scope.

Feel free to use any tools, libraries, frameworks, or LLMs during this exercise. Also, you’re welcome to reach out to us at any time with questions or for clarification.

I used LLM-as-a-junior-dev to generate 95+% of the code and documentation. I'm just an average programmer, but tried to set a bar that if I was on the other side of the table, I'd hire anyone who demonstrated the quality of output submitted.

  - The 5-7 hour estimate was exceeded (however, I was the first one through this exercise). 
  - IMHO the quality of the submission could NOT have been met in less time.
  - They had 3 tasks/projects:
     - a data science project, 
     - a CLI based project and
     - a web app
  - They wanted each to be done in a different language. 
  - I submitted my solution <38 hours after receipt of the assignment.
  - In any other world, the intensity of this exercise would cause a panic-attack/burn-out. 
  - I slept well (2 nights of sleep), took care of family responsibilities and felt good enough to attack the next work-day.
I've been on both sides of the table of many interviews.

This was by far the most fun and one to replicate every chance I get.

[EDITS]: Formatting and typos.


👤 elzbardico
I have a colleague who uses AI to comment on RFCs. It is so clearly machine-generated that I wonder if I am the only one who sees it. He is a good colleague, but as he is a bit junior, it is still not clear to me whether AI is helping him improve faster or hindering his deep learning of the material.

👤 trustinmenowpls
I've been on both sides recently, and it hasn't really changed significantly. If you're hemming and hawing, you're not getting the job.

👤 A4ET8a8uTh0_v2
A buddy of mine recently got a position with the help of a custom-built model that listened to the call and printed answers on another screen. The arms race is here, and frankly, given that a lot of people are already using it at work, there is no way to stop it short of minute-by-minute supervision, and even the biggest micromanagers won't be able to deal with that.

Honestly, if I could trust that companies won't run my conversation through 20 different ridiculous filters, I would probably argue that my buddy is out of line. As it stands, however, he is merely leveling the playing field. But, just like with WFH, the management class does not like that imposition one bit.


👤 nimish
If your interview process is susceptible to AI then you don't need to hire for the job. Just use an AI and prompt it.

The job you are therefore hiring for is now trivial. If it weren't, no amount of AI could pass your interview process.


👤 sweca
My company actually encourages the use of AI. My interview process was one relatively complex take home, an explanation of my solutions and thinking, then a live "onsite" (via zoom) where I had to code in front of a senior engineer while thinking aloud.

If I was incompetent, I could've shoved the problem into o1 on ChatGPT and probably solved the problems, but I wouldn't have been able to provide insight into why I made the design choices I made and how I optimized my solutions, and that would've ultimately gotten me thrown out of the candidate pool.


👤 _heimdall
I went through a round of interviews the second half of last year. Interviewing felt the same as it had over the last 5 or 10 years honestly.

I had a few coding challenges, all were preinterview and submitted online or shared in a private repo. One company had an online quiz that was actually really interesting to take, the questions were all multiple choice but done really well to tease out someone's experience in a few key areas.

For what it's worth, I don't use LLMs, and the interview loop went about as I'd expect in a tough job market.


👤 z3phyr
Who do I want to work with? A person who can work with me, get stuff done, and somewhat match vibes with their coworkers. Whether they are good at spelling bees, swimming, chess, marathons, or leetcode is on them. I do not get much useful information out of it.

What I usually do is a case study that I also do not know the answer to at the start of the interview. The case study does not imagine spherical cows and is not usually leetcode-style. It's a case of role-playing a problem. We brainstorm it together and I determine if I can work with them.


👤 RomanPushkin
One must not forget that cheaters are everywhere now, and it's likely you're going to be interviewed by a cheater. I've seen this multiple times already; the most recent was a Meta interview ~8 months ago. Very low-quality interviewer: complained about a bug in the code when there was no bug, and couldn't keep the conversation going. The same for system design - the poor guy didn't even want to listen, and was pretty much rude.

I wouldn't mention it if it weren't a pattern. So let's not pretend they're not cheaters. Call them out.


👤 andyish
For tech and nontech, it's terrible.

Nontech roles are just a sea of prompt-generated answers, they don't tie into the applicants' experience and are usually about 200 words of waffle. If you're applying for something, type out a few sentences then give it to a prompt to refine. Don't just paste the question in.

For tech roles we focus on the combination of soft skills and technical skills, so we've gone back to a 'pair' programming exercise and a whiteboard architecture exercise. In reality, it's just a spectator sport where we nudge them forward if needed.

In Java, we're looking for them to cover the basics of debugging a problem, writing a test, and mocking out some services. I would say 50% are unable or unwilling to write a test to prove the error, or don't know how to mock out a service.

In the whiteboard exercise we're looking for them to explain a system they've worked on in the past. We question some of their decisions and see how they handle themselves. Not many get defensive (though some do).


👤 yesiamyourdad
About 20 years ago, I used to do interviews that were "write a program to add two numbers together" (I think specifically I asked for a web application). It's trivial, right? There's actually a lot going on. They have to parse input, and sometimes you get strange things like "well, if I make it use doubles then I'll cover all scenarios". You have opportunities to talk about error handling (bad characters in the input, int overflow, etc.). You can talk about refactoring (now make it handle -, *, and /). You can ask them about writing tests. Ask about how they'd handle arbitrarily large numbers. There are a bunch of ways you could take the conversation and really just talk about average developer activities.
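
To illustrate how even a trivial adder surfaces these talking points, here's a minimal sketch of my own (the function name and error-handling choices are just illustrative):

  def add_numbers(a_raw: str, b_raw: str) -> int:
      """Parse two user-supplied strings as whole numbers and return their sum."""
      try:
          a, b = int(a_raw), int(b_raw)   # talking point: why int rather than float/Decimal?
      except ValueError as exc:
          # talking point: what should bad input (letters, empty strings) do?
          raise ValueError(f"inputs must be whole numbers, got {a_raw!r} and {b_raw!r}") from exc
      return a + b                        # talking point: overflow in fixed-width languages

  if __name__ == "__main__":
      print(add_numbers("2", "40"))       # 42
      try:
          add_numbers("2", "forty")
      except ValueError as err:
          print(err)                      # the error-handling discussion starts here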

What I liked about that process is that it relied less on their ability to suss out a solution to some problem they'll never have to solve on the job and focused more on average activities. Sometimes I'd get a candidate who would go "wait, is this a trap?" and start asking a lot of questions - good! Now I got to see them refine requirements.

Having them review a PR is a good exercise too, you can see how they are at feedback.


👤 koliber
I've been recruiting a lot over the past 8 years. First as a manager, later as an engineering department leader, and most recently in the role of a specialized recruiter.

I've seen a few things change.

The interviews themselves are for the most part unchanged. Occasionally I see someone who seems to be using AI during the interview. It's sort of obvious. They first give a vague answer, as if they are repeating the question. Then they answer while not maintaining eye contact. When asked a follow-up or a "why" question, they fall apart. They do poorly.

I find more people pass the written screen nowadays and then bomb the interview. I'm guessing they use AI. C'est la vie.

It seems that a lot more people are looking for work now than 3+ years ago. It also seems that for roles that require top-notch coding ability, there are relatively few people on the market. I'm guessing that great people are staying at their jobs longer now. Makes sense.

After a candidate gives an answer, probe. Ask why they think that. Ask what alternatives they might consider. Watch their body language and how quickly they answer.

For context, I hire very experienced developers in Poland for very demanding remote roles in the US.


👤 nvarsj
I'm pretty sure the majority of people I interview (in big tech) are now cheating with some kind of assistance. Usually it's obvious, but not always. I am just plodding along though, and hoping my company moves back to on-site interviews. Alternatively, interviews need to be designed for AI assistance (e.g. much more complex problems).

👤 kittikitti
I tell everyone to share their entire screen, have their video on, and start coding. It's not that different. Even as an interviewer, I experimented with the usual cheating techniques so I know what to look out for. The best are the AI teleprompters. If you can do the work with your own AI then I see no need to care as the business will not care either.

The story is completely different for airgapped dark room jobs, but if you know you know.


👤 j-scott
In my most recent cycle, I didn’t ask to use AI and I was only warned once about using AI when I had the official language plugin for an IDE annotate some struct fields with json tags. I explained the plugin functionality and we moved on.

When I was part of interviews on the other side for my former employer, I encountered multiple candidates who appeared to be using AI assistance without notifying the interviewers ahead of time or at all.


👤 skeeter2020
>> now when problems are trivially solvable by GPT.

Only the trivial problems. We don't use AI during interviews, but many try, and it's always obvious: a delay after any question, a textbook-perfect initial answer, then absolutely nothing when asked to go deeper on a specific dimension.

It's nice because interviews that are scheduled for an hour are only lasting ~20 minutes in these situations, and we can cut them short.


👤 deadbabe
We now consider the employees that we hired prior to the arrival of AI to be the equivalent of “low background steel”. They have much stronger job security.

Everyone hired after that is more suspect, and if they screw up too much or don’t perform well we just fire them quickly during the probation period, whereas previously it was rare for people to get fired during the probation period.


👤 VPenkov
My employer sends a take-home test. It is relatively easy and not very time-consuming. Its main purpose is to act as a basic filter and to provide some material to base an interview on.

In the recent couple of years I have seen a lot more people ace the test and not do very well during the actual interview. Take-home exams feel like they would always be ineffective now.


👤 jppope
I've been very curious about this and about how we should modify our hiring. It's obvious that an individual should be able to use AI companions to build better, faster, higher-quality things... But the skillsets are so uneven now that it's unfair to compare those with and without.

I think it ultimately comes back to impact (like always) which has remained largely unchanged.


👤 bl4ckm0r3
I wrote an article about this a year ago. https://medium.com/fever-engineering/on-hiring-great-talent-...

tbh it's very hard, in general, to assess someone's skills in ~1 hour, given the diverse set of problems we face every day. People/companies often focus too much on "previous relevant experience" (do you know this?) instead of thought process and depth of understanding (how well have you understood what you have done), which, to me, gives more insight into someone's personality, attitude, and ability to learn new things, and is harder to cheat on (you can memorize "Cracking the Coding Interview" and be a hero on leetcode and still have no idea how to write good software, make decisions, or work in a team :)


👤 bdcravens
I haven't done any hiring in a while, but my feelings on the matter:

If they can talk through the technology and code fluently, I honestly don't care how they do the work. I feel like the ability to communicate is a far more important skill than the precise technology.

This of course presumes you have a clue about the technology you're hiring for.


👤 bilekas
I've run a few interviews with small take-home screening projects that I could identify very quickly as majority-generated. I usually don't mind that too much, but when I asked a week later why they chose certain patterns or went with their approach, they couldn't give any "true" answer.

I feel that smaller things like syntax etc. make perfect sense. But for larger things that involve slightly higher complexity, it becomes a bit grey. I liken it personally to writing: when I write things down as I'm trying to work something out, or even trying to learn something, I find I retain the material so much better and have a better picture in my mind of what's going on. That might just be my personal preference for learning, but if I copy straight from Claude I know 100% I'm not going to remember anything about it the next day.


👤 bagels
People are sending us emails that are not just written with chatgpt, but I think they've automated the process as well, as parts of the prompt slip in.

You can see things in the emails like:

"I provided a concise, polite response from a candidate to a job rejection, expressing gratitude, a desire for feedback, and interest in future opportunities."


👤 dinkumthinkum
I don't really see why it's so hard to interview someone remotely because of AI or speech-to-text. I'm not aware of any system so advanced or fast that a video call, decent conversation skills, and carefully listening to the candidate can't identify 99.99% of people using AI. Even without video, it's not hard to have a conversation, gauge their skill level, ask questions, and listen to the answers carefully. You can tell when the conversation is not fluid or the answer given is at a level that mismatches their apparent sophistication. You can ask an unreasonably obscure or difficult question and see if you get an LLM answer. You can suss it out. You don't have to do in-person interviews.

👤 fergie
In my experience the real objective of the take home exercise is to gauge how compliant the candidate is, and how many obligations they have outside of work. Otherwise it always makes more sense to conduct simple, in-person assessments.

👤 littlestymaar
The coding interview I designed back in 2022 is still out of reach of all LLMs on the market, while still being good at selecting people (I hired 6 people with it, all of them great fits, out of 30-ish applications, so it doesn't seem to have a high false-negative rate either).

The main takeaway is that if you design your interview questions to match the actual skill you're looking for, AI won't be an issue, because it doesn't have those skills yet. In short: ask questions that are straightforward on the surface but deep beneath, with trade-offs that must be weighed by asking the interviewer questions.


👤 __loam
There's still plenty of engineers that can't code their way out of a paper bag

👤 ciwchris
This question came up on Soft Skills Engineering[1]. To cut through the noise it's more important to have a blog or a project you contribute to which you can point to. And then yes, whiteboard type of questions which unfortunately are not reflective of day to day work.

[1]: https://softskills.audio/2025/02/03/episode-446-wading-throu...


👤 lordhexd
Unless it’s for a senior role, why not just find the candidate you and your team can work best with? Skim through CVs and drop the first 20 an email asking them about their last team/boss or their day or to tell you the last thing they learnt. Something personal, the goal is to note their communication style. Pick 10 you feel matches the energy you want. From those 10 get 2 in the office every day for 2 weeks to see how they work with peers/competitors and your team. By the end of two weeks you should have a relatively good hire which you can work with to fit the team.

👤 gibbonsrcool
Panel interviews seem to be more common. Curious if others have seen the same? I personally feel very uncomfortable coding in front of a group. First one of these I tried had like 5 people watching and I lost my nerve and bailed. :|

👤 gigatexal
It has always been more about the application of facts or knowledge than the retrieval of facts that makes for a really good interview.

It’s better to know when to use a Linked list than how to make one (because I’d just use the one in the library).

So if the candidate can prompt well, good. But how much of that knowledge can they apply to a problem, or are they just masters of HackerRank (sic)?

But more often than not most interviewers are lazy and just use canned hacker rank style questions or if it’s not laziness it’s being too overworked to craft a really good interview.


👤 yodsanklai
You can always interview in person. This has often been the norm, after some initial screening. I think it's the best option.

Even remotely, normally a coding interview isn't a candidate typing things for 45 min on a screen. There are interactions, follow-up questions, discussions about trade-offs and so on... I suppose it's possible for a good candidate to cheat and get some extra points, but the interview isn't broken yet.

You could also let the candidate use AI, and still gather all the relevant signals.


👤 code_for_monkey
Ive been interviewing at start ups to get out of my contractor job, but recently I gave up.

I had this one interview where I had to remove nodes from a linked list. Pretty trivial, right? It is, but as I was writing out the solution, probably for the 200th time, I thought, "I've never used a linked list, ever."

And I thought, I just can't do it anymore. I can't reverse one more binary tree. I can't listen to some CTO tech bro in his early 30s explain to me why I need to be in an office he'll never set foot in for the 'company culture'.

I've had so many interviews just like this. I've done 7-hour on-sites where you break down algorithms, system design, and app development. It's brutal out there; no one seems to have any idea what they're looking for, so they just put you through the wringer.

So now I'm not interviewing; I'm happy and content in my bank programming job. I'm probably not a very good programmer, and that's okay; I'm a wonderful partner, family member, and musician. I'll get by, hopefully, for another decade until everything collapses.


👤 zitterbewegung
I listened in on someone interviewing people, since many candidates used AI. It's the same as with googling the answer: it's very obvious when someone is taking too long to get to the answer, and/or you can't see a separate screen. Mitigation is literally looking at the text window and seeing whether they are not typing, or taking too long to even make a bad implementation. There is now a problem if you allow Google, since Google will auto-generate a Gemini answer to the query.

👤 ungreased0675
I’ve had people using AI in interviews to answer simple “get to know you” questions. People just disconnect their brain and read whatever the machine says.

👤 prakhar897
I interviewed a while back and wrote about it here: https://www.softwaredesign.ing/blog/ai-is-the-reason-intervi...

The core conclusion is that companies need to do in-person interviews now. There's no other way to prevent cheating.


👤 crazygringo
Not sure why interviews would change.

Even if you're using ChatGPT heavily it's your job to ensure it's right. And you need to know what to ask it. So you still need all the same skills and conceptual understanding as before.

I mean, I didn't observe interviews change after powerful IDE's replaced basic text editors.


👤 acheong08
Run competitions. If you're hiring fresh grads, this is probably the best way to filter by skill. If you can use AI to beat all the other candidates that's a skill by itself. In practice, those that use AI rarely ever make it into the top 10. Add a presentation/demo as part of the criteria to filter out those with bad communication skills.

👤 WatchDog
The last interview I ran was back when copilot came out.

My company at the time had been using the same coding exercise for years, and many candidates inevitably published their code to GitHub, so Copilot was well trained on the exercise.

I had a candidate who used Copilot and still flubbed the interview, ignoring perfectly good suggestions from the LLM.


👤 qq66
Why do you have to evaluate the person without AI? Unless they won't be able to use AI on the job (for security reasons or whatever) it seems like it makes more sense to have them pull up their favorite AI and use it to solve a problem. Give them some buggy code and ask them to fix it with any tools they want.

👤 hot_gril
It's still remote. I don't get how you could pass an interview using ChatGPT unless it's purely leetcode.

👤 DustinBrett
"when problems are trivially solvable by GPT" only the fake leet code ones that were never really what you'd do at work anyway. Sadly that is what people are giving devs in interviews.

Screen share to avoid cheating via AI, same as we were doing before AI when people could get friends or Google to help them cheat.


👤 fastily
I expect in person interviews are going to be the norm soon, assuming they’re not already. For now, the challenge I give candidates causes ChatGPT to produce convoluted code that a human never would. I then ask the person to explain the code line-by-line and they’re almost never able to give a satisfactory answer

👤 miah_
I feel like I'm one of the few people who won't touch the tech for anything work related. I've used it to generate character backstories in Dungeons and Dragons, but I wouldn't trust it for anything reality based. I don't know what everybody is smoking.

👤 esafak
These days I ask real-world debugging questions where the root cause is not the error on the screen. I allow LLM use as long as I see how.
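
A toy example of the shape I mean (my own illustration, not an actual question I use): the traceback blames one function, but the real bug is a silent fallback upstream.

  # The visible error is a KeyError in parse_report(), but the root cause is
  # load_config() swallowing a missing-file error and returning an empty dict.
  def load_config(path):
      try:
          with open(path) as fh:
              return dict(line.strip().split("=", 1) for line in fh if line.strip())
      except OSError:
          return {}  # bug: silently hides the missing config file

  def parse_report(config):
      return int(config["batch_size"])  # KeyError surfaces here, far from the cause

  if __name__ == "__main__":
      parse_report(load_config("missing.conf"))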

In my unfortunate experience, candidates who covertly rely on LLMs also tend to have embellished resumes, so I try to root them out before asking technical questions.


👤 topkai22
I'm trying something new in my next interview: I've had an LLM solve a coding problem and uploaded the output to GitHub. I'm going to ask the candidate to evaluate the generated code and adapt it to new technical and non-functional requirements.

👤 donatj
Oh man, I have kind of hunkered down and not interviewed anywhere since the start of COVID. I had not even thought about how AI might affect things let alone people not being in office.

Last time I interviewed I spent about half of it standing at a whiteboard.


👤 matrix87
I just got done interviewing with a big tech company, with a successful outcome, and I really don't understand how someone could even pull that shit off in an interview. It sounds harder than just doing it the honest way.

👤 aristofun
If your interview process can be solved by AI - it only means your interview process sucks.

Which is the case for >90% of the companies I interviewed with (big and small)


👤 yieldcrv
I recently did an interview that involved making pull requests for code updates across a time period.

That seemed to thwart AI use, or at least one-shotting, and required understanding of and experience with working in an organization.

I liked that


👤 syrusakbary
Pro tip: ask about their previous experience and go deep there. It will be impossible for the AI to generate fake experience for a candidate, and very easy for the interviewer to catch.

👤 khazhoux
With AI making traditional coding problems trivial, tech interviews are shifting toward practical, real-world challenges, system design, and debugging exercises rather than pure algorithm puzzles. Some companies are revisiting in-person whiteboarding to assess thought processes, while others embrace AI, evaluating how candidates integrate it into their workflow. There's also a greater focus on explaining decisions, trade-offs, and collaboration. Instead of banning AI, many employers now test how effectively candidates use it while ensuring they have foundational skills. The trend favors assessing problem-solving in real work scenarios rather than just coding ability under artificial constraints.

👤 codr7
About time, testing coding skills that way was always a bad idea.

👤 sshine
I can't speak for job interviewing, but having recently completed 3rd-semester trade-school oral exams in Java programming:

It is really important to watch people code.

Anyone can fake an abstract overview.


👤 epolanski
Nothing; we don't do technical interviews, they are silly.

👤 tharne
> What is interviewing like now with everyone using AI?

Two main things I've seen:

1. Recommendations are a heck of a lot more important

2. Internal applicants are suddenly at a big advantage


👤 xyst
If a problem can be “trivially” solved by GPT, the problem is with your interview process, not the tool. It's wild to me that interviewers still ask candidates for senior positions leetcode-type questions when the actual job is some front-end or devops position.

The gap between the interview and actual on-the-job duties is very wide at many (delusional) companies.


👤 shayarma
They still find a way to make it horrible and degrading.

👤 shafkathullah
It makes sense: interview questions often lack novelty, they're usually very repetitive, and AI is very good at all of these. I think asking candidates to switch off AI and do live coding or think out loud is the way forward. Or else just give them a sample story point and see how well they respond; if they can do it well, it means they can work well in the company.

👤 mr90210
Some recruiters have tried to record our initial interview with one of those services that automatically captures notes. I REFUSED.

👤 skeptrune
I have been asking people questions about git a lot. It's useful to figure out whether or not they care about the craft.

👤 fragmede
Here's the question, here's the code ChatGPT produced, what's wrong with it, and why?

👤 anothernewdude
I make it clear on my resume that I won't work for companies that use AI in their hiring processes.

👤 CoolCold
Not a problem yet - we still start by asking the hard question first: the differences between TCP and UDP, and which one is better and why - a very deep rabbit hole, I must say. Then simple questions: how do you manage k8s clusters, and what do you suggest for multi-DC setups?

👤 acedTrex
We are doing in-person interviews. I just interviewed someone a few days ago who was VERY CLEARLY using ChatGPT audio to cheat. You could see his eyes jumping from word to word, and the things he was saying were clearly GPT output.

It's a sad world out there these days.


👤 vunderba
I mentioned this in a different related post but there seems to be a pretty sad lack of basic integrity in the tech world where it's become a point of pride to develop and use apps which allow an applicant to blatantly cheat during interviews using LLMs.

As a result, several of my friends who assist in hiring at their companies have already returned to "on-site" interviews. The funny thing about this is that these are 100% remote jobs - but the interviews are being conducted at shared workspaces. This is what happens when the level of trust goes down to zero due to bad actors.


👤 thih9
Not everyone is using AI; and speaking as one such developer, interviewing is not fun.

👤 michelb
I think this mainly highlights how bad interviews were in general before AI.

👤 tzury
Google. StackOverflow. ChatGPT/Claude et al. Those are all tools of the craft, so a professional candidate should be using them proficiently.

How good is someone at understanding a problem (its scope and source), and how good are they at designing a solution that is simple, yet will tackle other "upcoming" problems (configurable vs. hard-coded, etc.)?

This is what one should care about.

* StackOverflow's early days raised the same question, and so did Google's. Trust me, I am old enough to remember this.


👤 Perenti
Can I hallucinate answers in an interview, and have them rated as acceptable?

👤 roland35
There are some tools that read your screen and can provide hints and solutions for coding type questions. I honestly don't trust myself to not mess it up, plus the whole ethics side of it, but I'm sure that will always be a problem for online assessments

👤 nathias
The biggest change I've seen as a remote dev is considerably fewer replies to my non-AI-made CV; I think the competition for cheap labor has gotten much harsher.

👤 1270018080
You have to depend on your professional network to skip the BS of the modern day interview process (as an applicant and interviewer).

👤 hyperkinetic
Asking experienced engineers to do a coding test is stupid and will only give you candidates that can't think for themselves.

👤 EGreg
I mean, it could be really bad

Online, you don't know if the person you're interviewing is an AI or not.

They could have an AI resume, AI persona, and AI profile; then someone shows up who looks like that but could be a deepfake; they do the coding challenges; you hire the person remotely; and they continue to do great work with the AI. But actually this person doesn't exist; they just created 100 such people and they're just AI agents.

It sucks if you want to hire humans. And it sucks for the humans. But the work gets done for the same price and more reliably. So dunno


👤 cjkdkfktmymymyy
This conversation feels bizarrely tone deaf. The skill of being able to recall specific knowledge on demand is going away.

How LLMs will evaluate a skill they are making obsolete is a question I am not sure I understand.


👤 ionwake
PREFACE: LONG RANT

I dont know but Ill always remember the funniest thing I noticed once during my career in England..

A company called Tripadvisor, based in a very, very small town where I was at the time a senior dev working on my own things, had never reached out. Yet I saw their ads, and finally an actual article in a newspaper, basically bragging about their latest hire. Let's call him Pablo; he had apparently aced every single technical interview question, and so they had hired him after interviewing tens of people. They were so happy with their hire that the article was based on him and the "curiosity" that they were the first company to have hired him after he had failed, I think, something like 50 interviews.

Obviously they couldn't believe how lucky they were to have finally found someone who could have completed all the technical tests perfectly.

Now, I have nothing against Pablo, and I rooted for him when I read the article. But I found it hilarious, and still do almost a decade later, that this top-tier company, based in a university town with perhaps the most famous university, had not realised they had simply overfitted for someone who could answer their leetcode selection perfectly. Not only had they not realised this, they then commissioned the article.

Eventually they reached out for an interview with me, where the recruiter promised there was no test for me; then I was "surprised by one" in a room with a manager who hadn't had time to read about my experience (which is fine), but when he walked in and saw I hadn't done it, I was "walked out". The whole interview took less than about 10 minutes, when I was the most qualified senior developer for hundreds of miles who was available at the time. No, I'm honestly not trying to brag; I'm just saying the town was so small there just couldn't have been many others during that short period I was available.

I know this reads as bitter (my life is great now); I just remember it because, at the time, I was at my peak and would have accepted the job if offered, but I was walked out within 10 minutes.

Honestly, I'm just sharing this insight. The moral of the story, for me, is that companies were never great at hiring, and if anything the advent of LLMs might actually improve things as LLMs start to assess people based on their profiles and work? One can hope. I don't; I want an edge in this market with my company.


👤 joshstrange
It doesn’t affect our process, we even let people use AI/LLMs on the coding test.

We just ask them to add a small feature or make a small change after they present the initial code (without AI help). That makes it really easy to see who doesn't understand the code they are presenting.

I don’t care how much AI you use if you understand the code that it writes and can build on top of it.


👤 fsloth
Trick problems and whiteboarding were always “jump through the hoops” BS, IMHO.

In-person dialogue with multiple team members and a sufficiently complex take-home assignment remain a pretty good method. LLMs are excellent API docs and should not be avoided.


👤 andoveragain
> everyone

Who is everyone?


👤 sebazzz
I think it is actually more of a potential disappointment for the candidate than for the employer, because when the candidate becomes an employee, the employer will notice that this person never stands out and will only be handed basic tasks: implementing basic product features. Anything more difficult won't be entrusted to them.