Unfortunately many believe they can, and it is impossible to disprove. So now real people need to write while avoiding certain styles, because a lot of other people have decided those are "LLM clues": bullets, em dashes, certain common English phrases or words (e.g. "delve", "vibrant", "additionally")[0].
Basically you need to sprinkle in subtle mistakes, or lower the quality of your written communication, to avoid accusations that will side-track whatever you're writing into a "you're a witch" argument. Ironically, LLM accusations are now a sign of high-quality written work.
[0] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
This is an artifact of the default LLM writing style, cross-poisoned through training on model outputs -- not a "universal" property.
For humans I think it just comes down to interacting with LLMs enough to recognize their quirks, but that's not really fool-proof.
There are specific language tells, such as: unusual punctuation, including em dashes and semicolons; hedged, safe statements (though not always); and text that showcases certain words such as "delve".
Here’s the kicker. If you happen to include any of these words or symbols in your post they’ll stop reading and simply comment “AI slop”. This adds even less to the conversation than the parent, who may well be using an LLM to correct their second or third language and have a valid point to make.
I think the better question to ask is: What are your goals? Is it to prevent AI SPAM, or to discourage people copy-pasting AI? Those are two very different problems: in the case of AI SPAM you look for patterns of usage, (IE, unusually high interaction from a single IP, timing patterns around when things are read and the response comes in,) and in the other case it all comes down to cultural norms.
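To make the "patterns of usage" idea concrete, here's a minimal sketch of the kind of timing heuristic you might run against read-to-reply delays. Everything here is hypothetical (the function name, the thresholds, the notion of a per-account delay log are all assumptions, not anyone's actual anti-spam system): the intuition is just that bots tend to reply implausibly fast or with implausibly uniform cadence.

```python
from statistics import mean, pstdev

# Hypothetical heuristic: flag an account whose delays (in seconds)
# between reading a post and replying are either implausibly fast on
# average, or implausibly uniform (bot-like cadence). Thresholds are
# made-up illustrative values, not tuned numbers.
def looks_automated(delays_seconds, min_human_delay=20.0, min_jitter=5.0):
    if not delays_seconds:
        return False
    too_fast = mean(delays_seconds) < min_human_delay
    too_regular = len(delays_seconds) > 2 and pstdev(delays_seconds) < min_jitter
    return too_fast or too_regular

print(looks_automated([3, 4, 3.5, 4.2]))     # fast and uniform: flagged
print(looks_automated([45, 300, 90, 1200]))  # human-ish spread: not flagged
```

A real system would combine this with the other signals mentioned above (per-IP volume, account age), since timing alone is easy to spoof with random delays.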
As far as how I and other people do it: there are some obvious styles that reek of LLMs; I think it's ChatGPT.
There's a very common structure of "nice post, the X to Y is real. Miscellaneous praise — blah blah blah. Also curious about how you asjkldfljaksd?"
From today:
This comment is almost certainly AI-generated: https://news.ycombinator.com/item?id=47658796
And I'm suspicious of this one too - https://news.ycombinator.com/item?id=47660070 - reads just a bit too glazebot-9000 to believe it's written by a person.
If the text is full of punchy three word phrases or nonsense GenAI images then that's an obvious sign. But so is if the other person has some revolutionary project with great results but they can't really explain why their solution works where presumably many failed in the past (or it's a word salad, or some lengthy writing that doesn't show any signs of getting you to an "aha, that's some great insight" moment).
A good sign is also if the author had something interesting going before 2022, and they didn't fall into the earliest low quality LLM waves. Unfortunately some genuinely talented people have started using LLMs to turbocharge their output while leaving some quality on the table nowadays, so I don't really know. I'm becoming a lot more sceptical of the Internet, to be honest.
Stylistic tells like "delve" and bullet formatting are just RLHF training artifacts. They're already shifting between model versions: compare GPT-4 to GPT-4o output and the word frequency distributions change noticeably.
Long-term, the only approach with real theoretical legs is watermarking at generation time, but that needs provider buy-in and slightly hurts output quality, so adoption has been basically nonexistent.
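For a sense of how generation-time watermarking works, here is a toy sketch of the "green list" idea: the generator derives, from each previous token, a pseudo-random half of the vocabulary and prefers tokens from it; a detector then counts how many tokens landed in their green list, which for watermarked text sits well above the ~50% chance rate. The hash-parity partition below is a stand-in for a real keyed vocabulary split; this is an illustration of the concept, not any provider's actual scheme.

```python
import hashlib

def is_green(prev_token, token):
    # Stand-in partition: hash the (prev, current) pair and use the
    # first byte's parity to split the vocabulary into two halves.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens):
    """Detector side: fraction of tokens that fall in their green list."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, t) for p, t in pairs) / len(pairs)
```

The practical problems the comment alludes to follow directly from the sketch: the generator must actually bias sampling toward green tokens (hurting quality slightly), the detector needs the same key, and light paraphrasing scrambles the (prev, current) pairs the detection statistic depends on.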
I asked an LLM to rewrite this to make it nicer and got the following. I'd flag the first because I don't usually hear "majority of your interactions" in conversation but I might miss it. The second will probably get by me. As for the third, I never say "considerably easier" unless I'm trying to sound artificially posh.
1. It becomes much more noticeable when the majority of your interactions are with non-native English speakers.
2. It tends to stand out more when most of the people you interact with speak English as a second language.
3. It's considerably easier to identify when most of your interactions involve people whose primary language isn't English.
I am writing an LLM captcha system; here is the proof of concept: https://gitlab.com/kaindume/llminate
To me, it often feels like the text version of the uncanny valley.
But again, that's just "feels", I don't have proof or anything.
There are a couple of tells, like em dashes and similar patterns, but you should be able to suppress those with even a simple prompt.