HACKER Q&A
📣 cryptography

Why do LLMs confidently hallucinate instead of admitting their knowledge cutoff?


I asked Claude about a library released in March 2025 (after its January cutoff). Instead of saying something like "I don't know, that's after my cutoff," it fabricated a detailed technical explanation: architecture, API design, use cases. Completely made up, but internally consistent and plausible.

What's confusing: the model clearly "knows" its cutoff date when asked directly, and can express uncertainty in other contexts. Yet it chooses to hallucinate instead of admitting ignorance.

Is this a fundamental architecture limitation, or just a training objective problem? Generating a coherent fake explanation seems more expensive than "I don't have that information."

Why haven't labs prioritized fixing this? Adding web search mostly solves it, which suggests it's not architecturally impossible to know when to defer.

Has anyone seen research or experiments that improve this behavior? Curious if this is a known hard problem or more about deployment priorities.


  👤 bigyabai Accepted Answer ✓
> Yet it chooses to hallucinate instead of admitting ignorance.

LLMs don't "choose" to do anything. They run inference over weights. Text is an extremely limiting medium and doesn't afford LLMs any distinction between fiction and reality.
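To make that concrete, here is a toy decode step (a sketch, not any real model's internals, with made-up logits): sampling always emits *some* token, and a refusal is just another token sequence the weights may or may not assign high probability to.

```python
import numpy as np

# Toy next-token sampler: softmax over logits, then one multinomial draw.
# There is no built-in "abstain" branch; the loop always emits a token.
rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0):
    """Sample one token id from a logit vector."""
    z = (logits - logits.max()) / temperature   # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(probs), p=probs)

# A nearly flat distribution (an "uncertain" model) still yields a single,
# confident-looking token.
logits = np.array([0.10, 0.20, 0.15, 0.10, 0.12])
print(sample_next_token(logits))
```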


👤 barrister
If I ask Grok about anything that occurred this morning, he immediately starts reading and researching in real time. "Summarize what Leavitt said this morning." "Tell me what's new in Python 3.14." Etc. What do you mean by "cutoff"? It seems unlikely that Claude is that limited.

👤 perrygeo
Because LLMs do not have a model of the world. They can only make more words. They can't compare their own output to any objective reality.

👤 gus_massa
Bad training incentives. They gave 1 point for each correct answer and 0 for each incorrect one, so the model learned to bullshit and pad its answers with random nonsense.

Next time, they will use 1 point for each correct answer, -0.1 for each incorrect one, and 0 for "I don't know," and the model will behave. (And perhaps add some intermediate value for "I guess that [something]".)

We do that at the university: if an exam gives 0 points for wrong answers, I encourage my students to answer all of them, even the ones they have to guess.
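A rough expected-value sketch of those two grading schemes (the 20% confidence number is made up for illustration): with no penalty for wrong answers, guessing always dominates abstaining; a penalty only flips that below a break-even confidence.

```python
# Expected score for "guess" vs. "I don't know" under the two schemes above.
def expected_scores(p_correct, reward, penalty, idk):
    guess = p_correct * reward + (1 - p_correct) * penalty
    abstain = idk
    return guess, abstain

p = 0.2  # model is only 20% sure of its guess

# Scheme 1: +1 correct, 0 wrong, 0 for "I don't know"
print(expected_scores(p, 1.0, 0.0, 0.0))    # (0.2, 0.0) -> guessing always wins

# Scheme 2: +1 correct, -0.1 wrong, 0 for "I don't know"
print(expected_scores(p, 1.0, -0.1, 0.0))   # (0.12, 0.0) -> guessing still wins at p=0.2;
# abstaining only pays off once p < 0.1 / 1.1 ≈ 0.09
```

So the penalty has to be large enough, relative to how confident the model typically is when it guesses, for "I don't know" to become the better-scoring answer.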


👤 NoFunPedant
Claude doesn't know when it's lying or when it's telling the truth. It doesn't know anything. It's a computer program. It manipulates symbols according to mathematical rules. It doesn't know what the symbols mean. It doesn't know how the symbols relate to real-world facts.