HACKER Q&A
📣 gtirloni

How are you dealing with AI-assisted interview cheaters?


Ethical questions aside, if you interview engineers remotely, how are you dealing with the proliferation of AI-assisted interview-cheating software?

Have you had someone pass an interview and then later they can barely perform?


  👤 austin-cheney Accepted Answer ✓
This is only a problem if you are really, really bad at conducting interviews. I have had interviews in the past that treated me like a child, asking basic code-literacy questions. These kinds of interviews aren't helpful to anybody.

Instead, the way to get past this foolishness is to ask open-ended questions that expect precise answers even though the questions themselves are not precise. That presents too much variance for an AI assistant to handle. For example: talk to me about the methods of your favorite Node code library. The candidate has to pick, on the fly, from any of the libraries that ship with Node and immediately start talking about what they like about certain methods and how they would use them.

Another example: tell me about full-duplex communication on the web. AI will immediately point you to WebSockets, but it won't explain what full duplex means in three words or fewer, or why WebSockets are full duplex while other solutions aren't.

Another example: given a single-page application, what would you recommend to get full state restoration within half a second of the page request? AI barfs on that one. It starts explaining what state restoration is, which doesn't answer the question.
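A real answer would name concrete tactics: serialize state to web storage on change, rehydrate before first paint, cache API responses. A minimal sketch of the snapshot-and-rehydrate part, with a Map standing in for `localStorage` (all names here are illustrative):

```javascript
// Snapshot-based state restoration for an SPA, in miniature.
const storage = new Map(); // stand-in for window.localStorage

function saveSnapshot(key, state) {
  // Persist a serialized snapshot on each meaningful state change,
  // so the next page load can rehydrate instantly instead of refetching.
  storage.set(key, JSON.stringify(state));
}

function restoreSnapshot(key, fallback) {
  const raw = storage.get(key);
  return raw === undefined ? fallback : JSON.parse(raw);
}

saveSnapshot('app', { route: '/cart', items: [101, 102], scrollY: 480 });
const restored = restoreSnapshot('app', { route: '/', items: [], scrollY: 0 });
console.log(restored.route, restored.items.length);
```

The half-second budget is really about doing this read synchronously at boot, before any network round trip.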

In other words, AI is not a problem so long as you don't suck as an interviewer.


👤 muzani
Leetcode style questions no longer work. If it's solvable with a few functions within 1 hour, AI will solve it in 5 minutes.

If the job is cutting trees, you can't measure someone by how long they take to cut one tree, but by whether they have the stamina to cut through multiple trees.

Take-home assignments work, and the good news is they can be shorter now. One day or four hours of work is enough of a benchmark. Something like a Wordle clone is about the right level of complexity.

Things we look for:

1. Do they use a library? Some people are a bit too proud to do things the easy way. GenAI will generate a list of words, which is both wasteful and incomplete, when they could find an existing dictionary. Do they cut the dictionary down to the right size? It should contain only the words, not the definitions.

2. Architecture? What do they normally use? How do the parts link to one another? How do they handle errors?

3. Do they bring in something new? AI will usually use a five-year-old tech stack unless you give it a specific one, because that's around the average age of the code it was trained on. If they're experienced enough to tell the AI to use newer tech, they're probably experienced enough.

4. Require a starting commit (probably a .gitignore) and ask them to add reasonably sized commits. Classic coding should look a bit like painting. Vibe coding looks like sculpting, where you chip bits off. This will also catch more critical cheating, like someone else doing the work on their behalf: the commits may be from the wrong emails, or you'll see massive commits where nothing gets chipped off.

5. There are going to be people who think AI is a nuisance. Tests like this will help you benchmark the different factions. But don't give them so much toil that it puts the AI users at a large advantage and don't give overly complex "solved" questions that the AI can just pull out from training.
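The dictionary trimming in point 1 can be sketched in a few lines (the sample words are illustrative; a real list would be loaded from a dictionary file):

```javascript
// Trim a raw dictionary down to a Wordle-sized word list:
// lowercase, exactly five alphabetic letters, no duplicates, no definitions.
const rawDictionary = ['Crane', 'slate', 'slate', 'a', 'audio', "doesn't", 'banana'];

const wordList = [...new Set(
  rawDictionary
    .map((word) => word.toLowerCase())
    .filter((word) => /^[a-z]{5}$/.test(word))
)];

console.log(wordList); // ['crane', 'slate', 'audio']
```

A candidate who ships the 200 MB raw file, definitions and all, has answered the "do they think about the easy way" question too.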


👤 scarface_74
If an AI can pass your interview but an AI can’t do the job you are hiring for, there is, by definition, something wrong with your interview process.

👤 ActorNightly
Personally, if I'm hiring junior engineers and they have the ability to use an LLM to solve the problem and explain it, I see no problem with that. When I worked at Amazon, knowing the kind of development that happens there, someone who could code with an assistant could do the necessary work faster than someone who codes without it.

This is the same type of test as the take-home tests a lot of companies give, where you could previously use Google or Stack Overflow to do research.

If I were hiring for my own project and needed people who can problem-solve, I would ask more involved questions that LLMs could not solve, because LLMs will give you just the standard, most commonly used solution.

For example, as an analogy to another industry: LLMs can't tell you how to design a mountain bike correctly at a level of detail that matters, because there is no guide online that tells you how to do it.


👤 oleks_j
Hey! Creator of stealthinterview.ai here.

After working for 15 years at FAANG companies, startups, and no-name companies, I have conducted more than 100 interviews at all levels, mostly in engineering.

In my experience with technical interviews specifically, there are 4 types of candidates:

- the memorizer
- the mathematical brain
- the project builder
- the coding enthusiast who knows a language well but can't do algorithms

Most of the time, I have encountered candidates in buckets 3 and 4. Many showed debugging and communication skills but lacked the right answer to Trapping Rain Water.

I was told to reject candidates who couldn't pass these problems, mainly on the grounds of solved or not solved, even if they communicated clearly.

The only reason I got into FAANG companies myself is that I was a good memorizer. I couldn't solve most of these problems today without months of prep.

In the end, I left my FAANG job recently because I realized it's going to be the same or worse at any other company; internally it's all the same B.S. once you make it in. Sure, you'll get a fat salary, but it's a slow grind.

Instead, I chose to build things.

Is the interview process today bad? I think so.

What can candidates do about it? Take it in their own hands and play the game or get played.

What is the alternative? It depends. There are many companies that don't do crazy algorithm interviews and pay really well. In my opinion, if you are hiring hundreds of engineers per year, reviewing take-home assessments is literally a job on its own. Making candidates build apps from scratch and deploy, test, and present them? Maybe, but that's certainly opinionated, not binary.

New grads don’t have experience, but experienced engineers do. You shouldn’t need to ask people with 10 years of experience about Trapping Rain Water from LeetCode. There are so many other things to ask and discuss to gauge experience, scope, and depth.


👤 blainm
I strongly believe technical interviews should try to mirror a pair-programming session on a problem, as if it were real work, rather than a quiz or interrogation format.

If someone asked during such a session (with cameras and screen sharing on) whether they could Google some documentation, I wouldn't see that as a problem. Obviously it's a problem if someone just Googles for a solution and pastes it in.

I see LLMs the same way. I have no issue with a candidate using one to, say, take the pseudocode they wrote in front of me and turn it into an implementation, especially if they can talk through that code and suggest further changes, clearly showing they understand what's going on.

The real concern is when sophisticated agents can impersonate a candidate convincingly (cloned voice and video), see the screen, type as if they were a real person, and respond to you in real time.

If the software is based on models made by large companies, those models will happily give you recipes, but they would refuse if the coding request mentioned something about cracking passwords or dumping credit cards. And all of them will have a meltdown if you ask them to say something politically incorrect (what a bizarre world it would be if that became the new CAPTCHA for humans trying to figure out whether they're wasting their time with a fake human).

That said, this is going to be a cat-and-mouse game. There will be nothing to stop people from fine-tuning models so they can't be jailbroken into revealing themselves as LLMs. Perhaps the best defense is taking the time to research problems that cause "vibe coding" to completely fall down, which likely means things that are novel and haven't been littered all over the internet. That has the knock-on effect of making such interviews a bit more interesting for the people conducting them, too.


👤 paulcole
I hire for a variety of knowledge work roles (albeit not software engineering).

If somebody can figure out a way to pass the interview with AI, they can probably figure out how to do the job. If they can’t, they get fired. Some people who pass the interview without AI end up getting fired, too.

I don’t think there’s anything unethical about using AI to pass an interview.


👤 Dementor430
Well, if an AI cracked your interview: don't do interviews. You honestly don't possess the required skills.

👤 ggwp99
I can tell whether someone is good or bad in an interview from how they talk and reason more than from their coding skills.

👤 dzonga
there are plenty of ways to gauge someone's technical competency without asking quiz-style or leetcode questions in the interview.

your classic whiteboard - the manual way, in physical form or with whiteboarding web tools.

you can discuss design patterns and how they would solve issues you've run into in production, and compare approaches.

it seems as if software 'engineers' are the only snowflakes who have to go through this ritual.


👤 davidajackson
Honestly, though... let's take Copilot as an example. If they're good with Copilot, why wouldn't you just hire them and let them be good with Copilot? We all just need to accept that the value of memorizing syntax will trend to zero.