I'm not the CEO, so I can't order people to stop. The CEO does it too.
I've tried talking to people directly, but they get defensive, and there's always the chance they didn't use AI. I need indirect means of socializing change.
Looking for anything I can use to socialize against AI-washing: articles, memes, policies that other companies have successfully used, whatever.
This is strikingly different from development. In development, AI increases my productivity fivefold, but in texts, it slows me down.
I thought: maybe the problem is simply that I don't know how to write texts, but I do know how to develop? But the thing is, AI development uses standard code, with recognized patterns, techniques, and architecture. It does what (almost) the best programmer in the field would do. And its code can be checked with linters and tests. It's verifiable work.
But AI is not yet capable of writing text the way a living person does, because text cannot be verified the same way.
Also, how much of this communication is actually necessary? If someone doesn't care about an issue enough to write their own email, then why are they sending an email about it in the first place?
I think it's totally legit to ask, and specify that you are looking for new insights, proposals, etc. and not regurgitated AI summaries.
Email, being send-once, where what you said persists forever, is a little scarier. It'd be nice to have a messaging protocol at work where a typo or a wrongly pasted URL isn't so consequential. I've been at this for 14 years now, and I still re-read emails I send to clients 10+ times to make sure I'm not making even the most minor of mistakes.
If the incentive / whiff / hint from the top is "those not using AI are out"... there's no stopping that.
1. Introduce a “Clarity Standard” (Not an Anti-AI Rule)

Don’t frame it as anti-AI. Frame it as decision hygiene. Propose lightweight norms in a team doc or retro:

- TL;DR (≤3 lines) required
- One clear recommendation
- Max 5 bullets
- State assumptions explicitly
- If AI-assisted, edit to your voice
This shifts evaluation from how it was written to how usable it is. Typical next step: Draft a 1-page “Decision Writing Guidelines” and float it as “Can we try this for a sprint?”
2. Seed a Meme That Rewards Brevity

Social proof beats argument. Examples you can casually share in Slack:

- “If it can’t fit in a screenshot, it’s not a Slack message.”
- “Clarity > Fluency.”
- “Strong opinions, lightly held. Weak opinions, heavily padded.”
- Side-by-side: AI paragraph → edited human version (cut by 60%)
You’re normalizing editing down, not calling out AI. Typical next step: Post a before/after edit of your own message and say: “Cut this from 300 → 90 words. Feels better.”
3. Cite Credible Writing Culture References

Frame it as aligning with high-signal orgs:

- High Output Management – emphasizes crisp managerial communication.
- The Pyramid Principle – lead with the answer.
- Amazon – narrative memos, but tightly structured and decision-oriented.
- Stripe – known for a clear internal writing culture.
- Shopify – publicly discussed AI use, but with expectations of accountability and ownership.
You’re not arguing against AI; you’re arguing for ownership and clarity. Typical next step: Share one short excerpt on “lead with the answer” and say: “Can we adopt this?”
4. Shift the Evaluation Criteria in Meetings

When someone posts AI-washed text, respond with:

- “What’s your recommendation?”
- “If you had to bet your reputation, which option?”
- “What decision are we making?”
This conditions brevity and personal ownership. Typical next step: Start consistently asking “What do you recommend?” in threads.
5. Propose an “AI Transparency Norm” (Soft)

Not mandatory, just a norm: “If you used AI, cool. But please edit for voice and add your take.”
This reframes AI as a drafting tool, not an authority. Typical next step: Add a line in your team doc: “AI is fine for drafting; final output should reflect your judgment.”
6. Run a Micro-Experiment

Offer: “For one sprint, can we try 5-bullet max updates?”
If productivity improves, the behavior self-reinforces.
Strategic Reality

If the CEO models AI-washing, direct confrontation won’t work. Culture shifts via:

- Incentives (brevity rewarded)
- Norms (recommendations expected)
- Modeling (you demonstrate signal-dense writing)
You don’t fight AI. You make verbosity socially expensive.
If helpful, I can draft:

- A 1-page clarity guideline
- A Slack post to introduce it
- A short internal “writing quality” rubric
- A meme template you can reuse
Which lever feels safest in your org right now?
Otherwise, people will take every time saving they can get. If I'm using AI for anything, it's because it's important enough to someone else for me to do, but not important enough to sacrifice my own time on.
I don't think it's about people being scared, at least from what I've seen. It's about people being exhausted.
- signal disclosure as a norm: whenever you use AI, say “BTW, I used AI to write this”; when you don’t use AI, say “No AI used in this document”
- add an email footer to your messages that states you do not use AI because [shameful reasons]
- normalize anti-AI language (slop, clanker, hallucination, boiling oceans)
- celebrate human craftsmanship (highlight/compliment well written documentation, reports, memos)
- share AI-fail memes
- gift anti-AI/pro-human stickers
- share news/analysis articles about the AI productivity myth [0], AI-user burnout [1], reverse centaur [2], AI capitalism [3]
[0] https://hbr.org/2025/09/ai-generated-workslop-is-destroying-...
[1] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...
[2] https://pluralistic.net/2025/12/05/pop-that-bubble/
[3] https://80000hours.org/problem-profiles/extreme-power-concen...