How do you deal with that? Do you try to tell them about hallucinations and that LLMs have no concept of true or false? Or do you just let them be? What do you do when they do that in a conversation with you or encounter LLMs being used as a source for something that affects you?
LLMs aren't a special case to me. Glue doesn't belong on pizza and you shouldn't eat one rock a day, but we've been giving and getting bad advice forever. The person needs to take ownership of the output; getting it right, no matter the source, is their responsibility.
I doubt you can stop them from asking machines for answers. What you can do is aid them in learning how to distrust the answers competently, but outside their field of knowledge, applying skepticism is hard.
The irony of Gell-Mann Amnesia is that Michael Crichton, who is said to have named it, suffered from it badly: he wrote well within his field, misapplied science to write well outside it, and said things which were indefensible.
Like most things that go mainstream, it will take a good while before people understand it, by which point they will have learnt a lot of things that aren't true and will never let them go. We might get healthy use of current AI at some point in the future, or sooner if the product drastically improves.
All you can do now is hold them to the same standard you normally would: if you catch them lying, whether an AI did it or not, it's their responsibility and you treat them accordingly.
However, if I notice a friend is about to harm themselves in some way I’ll pull open their ChatGPT and show them directly how sycophantic it is by going completely 180 on what they prompted. It’s enough to make them second guess. I also correct people who say “he or she” when referring to an LLM to say “it” in dialog, and explain that it’s a tool, like a calculator. So gentle reframing has helped.
Sometimes I’ll ask them to pause and ask their gut first, but people are already disconnected from their own truths.
It’s going to be bumpy. Save your mental health.
I'm saying that because they were not going to be critical of the search results, and google is not exactly showing objective truth in the first positions nowadays.
I treat the LLM like a deity. Every sane person understands well enough that the Bible is not to be taken literally. And then when someone talks about using LLMs, I always rephrase that as prayer.
So I said: don't ever trust the output of an LLM without verification. However, this caused me some hassle with the AI adoption manager. We have minimum-use AI KPIs for employees and he asked me to stop saying these things or people will use it less.
In the end I just hated the company a little bit more.
I’ve found that the best way to handle this is to ask for the edge cases. "That looks plausible, but how does it handle [specific edge case]?" Usually, that’s where the LLM’s logic falls apart, and it forces the person to actually engage with the problem instead of just copy-pasting.
A: Why is drinking coffee every day so good for you?
B: Why is drinking coffee every day so bad for you?
The response to Question A says coffee has "several health benefits": antioxidants, liver health, reduced risk of diabetes and Parkinson's.
The response to Question B says it may lead to sleep disruption, digestive issues, and risk of osteoporosis.
Same question. One word difference. Two different directions.
This makes me take everything with a pinch of salt when I ask "Would Library A be a good fit for Problem X" - which is obviously a bit leading; I don't even trust what I hope are more neutral inputs like "How does Library A apply to Problem Space X", for example.
It usually involves some form of "well, no, hold on..."
Ask AI to cite sources and then investigate the sources, or have another agent fact check the relevancy of the sources.
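For the two-pass idea, a minimal sketch in Python (the model name, prompts, and question are placeholders I'm assuming, not a fixed recipe) could look something like this:

    # Sketch: draft an answer with citations, then have a second pass judge the sources.
    # Uses the official openai client; the model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def ask_with_sources(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system", "content": "Answer the question and cite a URL for every claim."},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    def fact_check(answer: str) -> str:
        # The second agent only judges whether the cited sources plausibly support the claims.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system", "content": "For each claim/citation pair, say whether the "
                                              "source plausibly supports the claim or needs manual checking."},
                {"role": "user", "content": answer},
            ],
        )
        return resp.choices[0].message.content

    draft = ask_with_sources("Is daily coffee good or bad for you?")
    print(fact_check(draft))

The second pass is no more trustworthy than the first, so the point is only to surface claim/source mismatches for a human to look at, not to replace reading the sources yourself.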
You can use this thing called ralph that lets you burn a lot of tokens at scale by simply having a detailed prompt work on a task and refine something from different lenses. It took AI about an hour to write: https://nexivibe.com/avoid.civil.war.web/
I do this on things that I know very well, and the moment I let it cook and iterate, collect feedback, the results become chef's kiss.
The agentic era that we are in is... very interesting.
This of course doesn’t apply to high-stakes settings. In those cases I find LLMs are still a great information-retrieval approach, but only as a starting point for manual vetting.
Some comments here are equating it to people who blindly believe things on the internet, but it's worse than that. Many previously rational people are essentially getting hypnotized by LLM use and losing touch with their rational thinking.
It's concerning to watch.
If they ask what I think, I tell them.
If they don't want my opinion I keep it to myself.
Can you give an example of what kind of question you mean here?
Given that most people's idea of a reputable source is whatever comes up on the first page of Google or YouTube, I think we should use that as the comparison rather than dismissing LLM results. And we should do some empirical testing before making assumptions, otherwise we're just as bad as the people we are complaining about.
Whatever results we get, the real problem is that most people's ability to verify information was not good before LLMs, and it's still not good now.
So now you're dealing with LLM hallucinations, and before you were dealing with the ravings of whatever blogger or YouTuber managed to rank for this particular query.
Is this something you can control or is this outside your control?
I didn't tell her why LLMs can make mistakes or hallucinate because I thought that she would not appreciate my mansplaining.
Looking forward though, my boring answer would still be education. It is going to take time. But without understanding LLMs, they will not be easily persuaded.
"Tell me about all the potential pitfalls of blindly trusting LLM output, and relate a couple or three true stories about when LLM misinformation has gone badly wrong for people."
These people may be idiots who are impossible to reason with, but at least for now the LLMs have not been completely driven into the ground by SEO. They might actually be getting a taste of what it feels like to not be an idiot. I'm happy for them, but they'll snap out of it when their trust is broken. That's probably coming sometime soon anyway.
Now they've got another "God" in the LLM.
How to deal? Just ignore it. There are way more stupid people with stupid opinions than we can possibly estimate.
For me, for example, I have seen and experienced doctors making misdiagnoses (and they are a reputable source), so what is the difference, really?
I guess your question also depends on the context they are using the LLM for and what sort of questions they are asking.
Scientific fact-based or opinion questions?
News reporters and editors have their biases. Book authors have their biases. Scientists and research papers have their biases. Search engines have their biases. Google too.
All human-created systems have biases shaped by the environments, social norms, education, traditions, etc. of their creators and managers.
So, the concepts of "objective truth" and "reputable" need to be analyzed more critically.
They seem to be labels given to sources we have learned to trust by habit. Some people trust newspapers over TV. Some people trust some newspapers over other newspapers. All of it often on emotional grounds of agreeability with our own biases. Then we seem to post-rationalize this emotion of agreeability using terms like "objective truth" and "reputable".
Is a Google search engine that leads to the NY Times or Fox News or Wikipedia and makes us manually choose sources as per our biases "better" than Google's Gemini engine that summarizes content from all the above sources and gives an average answer? (Note: "average answer" as of current versions; in the future, its training too may be explicitly biased, as happened with Grok and DeepSeek.)
Perhaps we can start using terms like "human sources of information" versus "AI sources of information" and get rid of the contentious terms.
Then critically analyze whether one set of sources is better than the other, or they complement each other.
I’ll take LLMs any day over what search and the rest of the Internet has turned into.
If they’re employees I’ll try to find better ones.
If they’re friends I might tell them.
This works especially well if you studied that subject matter: you should be able to immediately detect any answer that is inconsistent, or any sources that are hallucinated.
That is called the Gell-Mann Amnesia effect.
I'm genuinely unsure of whether or not this is better. LLMs make mistakes, but so do humans. So often. I really don't know how often LLMs are wrong in comparison, or how you'd find out. Regardless, computers have become a terrible way to learn things if you aren't a rigorous person. Simultaneously, they've become an absolute dream beyond the imagination of most humans in history, if you are. That's very strange.
As someone who ended up studying philosophy, there seems to be a real gulf between folks who sort of believe stuff they hear, folks who believe "facts" that they hear from (various levels of) credible sources, and folks that take solipsism seriously and understand that even in the most ideal scenario, we still wouldn't have a very good understanding of the world... much less when dealing with the inherent flaws in our research and information systems.
Knowledge is hard. It usually takes me a couple minutes to figure out what type of "truth" my interlocutor uses. Typically good-faith disagreements are just walking up the chain of presuppositions we use to find out exactly where we diverge in our premises.
There's multitude of reasons someone would blindly trust LLM: laziness, lack of confidence, need for assurance, you name it.
You just gotta stand your ground and end up agreeing to disagree.
https://grok.com/share/c2hhcmQtMg_b036e24b-3211-4655-bd77-da...
I laugh in their face, let them know how ridiculous they are, and then walk away laughing in tears, never talking to them again.
A wise man's life is based around “fuck you”.
Somebody wants me to do something because of or listen to his AI psychosis bullshit, “fuck you”.
Boss has AI psychosis, “fuck you!”.
You are the King of the US? You have a navy? Greatest army in the history of mankind?
Fuck you! Blow me.
Using or not using an LLM is not itself a measure of how deluded someone is. For example, anytime I ask an LLM a question (it can be nice for long-form questions that don't translate well to a Google search), I require that it provide source links for every claim. This tends to make it reply more accurately, but it also lets me read the source pages behind its top-level explanation.
Sadly one type asks a question (search, prompt) using Google or an LLM and takes the first response as truth.
The other asks follow ups based on the responses and their critical thinking skills. They often even go read the linked article and make sure it's still applicable.
Pretty much the same as when you're talking to a real person; critical thinking (much more than just knowing reputable sources) is key.
So, very similar issues; luckily LLMs can do much more than a simple search and can help with your critical-thinking tasks. Ask the LLM to provide opposing viewpoints and historical analysis, and to identify sources.
Building an agent debugging tool taught me this concretely: LLMs will return structurally valid, fluent garbage mid-loop and the agent keeps running. No error. No warning. Just wrong output propagating forward silently.
The people who deal with LLMs best treat them like a junior dev who writes clean-looking code that hasn't been tested. You don't distrust everything they say — you just never skip the review step.
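In agent-loop terms, that review is an explicit gate between the model call and the next iteration. A rough sketch, where call_model, parse_step, and looks_sane are hypothetical stand-ins for whatever your stack actually uses:

    # Sketch of an agent loop that refuses to propagate unreviewed output forward.
    # call_model / parse_step / looks_sane are hypothetical stand-ins, not a real library.
    from dataclasses import dataclass

    @dataclass
    class Step:
        action: str
        argument: str

    def call_model(prompt: str) -> str:
        raise NotImplementedError  # your LLM call goes here

    def parse_step(raw: str) -> Step:
        # Structural check only: fluent garbage can still pass this.
        action, _, argument = raw.partition(":")
        if not action or not argument:
            raise ValueError(f"malformed step: {raw!r}")
        return Step(action.strip(), argument.strip())

    def looks_sane(step: Step, allowed_actions: set[str]) -> bool:
        # Content-level review: the part that usually gets skipped.
        return step.action in allowed_actions and len(step.argument) < 500

    def run_agent(task: str, max_steps: int = 10) -> list[Step]:
        history: list[Step] = []
        for _ in range(max_steps):
            raw = call_model(f"Task: {task}\nHistory: {history}\nNext step?")
            step = parse_step(raw)  # fails loudly on broken structure
            if not looks_sane(step, {"search", "read", "write", "finish"}):
                raise RuntimeError(f"review failed, halting instead of propagating: {step}")
            history.append(step)
            if step.action == "finish":
                break
        return history

The structural parse and the sanity check are deliberately separate: the failure mode in question is output that parses cleanly but is still wrong, so the loop needs a check that looks at content, not just shape.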
I do spend some time on the bedbugs subreddit, and LLM failings come up a lot because they are very bad at figuring out whether a photo shows a bed bug or not. So I say don't worry, AI is crap at that.