HACKER Q&A
📣 amichail

Are AIs intentionally weak at debugging their code?


Maybe this is done to encourage software engineers to understand the code that AIs write?


  👤 not_your_vase Accepted Answer ✓
Have you noticed that Microsoft, Google, and Apple software is still just as full of bugs as it was 5 years ago, if not more so - even though all of them are all-in on AI, pedal to the metal? If LLMs actually understood code with human-like intelligence, it would take only a few minutes to go through all the open bug tickets, evaluate them, fix the valid bugs, and reply to the invalid reports.

But to this day the best we have are the (unfortunately useless) volunteer replies on the relevant help forums. (And hundreds of unanswered GitHub bug reports per project.)


👤 apothegm
Uh, no. OpenAI and Anthropic and Google and co really, really, really DNGAF whether or not you understand the code their LLMs write for you.

LLMs are not capable of reasoning or following code flow. They’re predictors of next tokens. They’re increasingly astonishingly good at predicting next tokens, to the point that they sometimes appear to be reasoning. But they can’t actually reason.
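
To make "predictors of next tokens" concrete, here is a minimal sketch of greedy decoding with a small causal language model. GPT-2 via Hugging Face transformers is only a stand-in chosen for illustration; the commenter didn't name a model or any code:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load a small causal language model purely for illustration.
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("def add(a, b):\n    return", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(8):
            logits = model(ids).logits        # scores for every vocabulary token
            next_id = logits[0, -1].argmax()  # greedy: pick the most probable next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

The loop never inspects control flow or "reasons" about the program; it only extends the statistically most probable continuation, which is the sense in which the output can look like reasoning without being reasoning.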


👤 Pinkthinker
When you think of all the effort humans put into producing structured pseudocode for their own flawed software, why should we be surprised that an AI struggles with unstructured text prompts? LLMs will never get there. You need a way to tell the computer exactly what you want, ideally without having to spell out the logic.

👤 Jeremy1026
If they do a bad job writing it, what makes you think they'd be good at debugging it? If they could debug it, they'd just write it right the first time.