AI needs enforcement.
The breaking point was Auto Compact. After context compression, Claude consistently ignores CLAUDE.md - the very file Anthropic tells you to create. It's like hiring someone who forgets their job description every 2 hours.
Core issues I couldn't solve with instructions alone:
- Post-compact amnesia: it "reinterprets" the previous session, often destructively
- Session memory loss: asks the same questions every day, like a brand-new intern
- TODO epidemic: "I implemented it!" (narrator: it was just a TODO)
- Command chaos: unprompted rm -rf, endless curl permission prompts, git commits attributed "by Claude"
- Guidelines = suggestions: follows them... when it feels like it
After six months of struggling with this, I built enforcement hooks (a command restrictor, an auto-summarizer that fires before compaction, a TODO detector, a commit validator). They work, but it feels like working against the tool rather than with it.
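To make the "enforcement" idea concrete, here's a minimal sketch of the command-restrictor pattern, assuming Claude Code's documented hook protocol: a PreToolUse hook receives a JSON payload describing the pending tool call on stdin, and exiting with code 2 blocks the call while feeding stderr back to the model. The file name (restrict.py) and the block list are mine; the payload field names match the hooks docs as of when I checked, so verify against your version:

```python
#!/usr/bin/env python3
# restrict.py -- PreToolUse hook sketch (hypothetical file name).
# Claude Code pipes a JSON description of the pending tool call to stdin.
# Exiting with code 2 denies the call, and stderr becomes the explanation
# Claude sees. Exiting 0 lets the call through.
import json
import re
import sys

# Crude patterns that over-block on purpose; tune them for your project.
BLOCKED = [
    r"\brm\s+-[a-zA-Z]*[rf][a-zA-Z]*[rf]",  # rm -rf / -fr style flag combos
    r"\bgit\s+push\s+--force\b",            # force pushes
]

payload = json.load(sys.stdin)
if payload.get("tool_name") == "Bash":
    command = payload.get("tool_input", {}).get("command", "")
    for pattern in BLOCKED:
        if re.search(pattern, command):
            print(f"Blocked by policy: matched {pattern!r}", file=sys.stderr)
            sys.exit(2)  # hard block, not a suggestion

sys.exit(0)
```

It gets registered in .claude/settings.json under hooks > PreToolUse with a "Bash" matcher. The exit-code-2 contract is the whole point: unlike a line in CLAUDE.md, the model can't decide to ignore it.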
Questions for the community:
1. Is this everyone's experience, or am I using Claude Code fundamentally wrong?
2. For those on Cursor/Copilot/etc - same enforcement issues?
3. Is "markdown guidelines → AI follows them" just... not viable at scale?
The hooks are on GitHub (docs and comments are mostly in Korean, but the hooks themselves work anywhere):
https://github.com/meloncafe/claude-code-hooks
My stopgap when a compaction is coming: run the summarizer, then kill the session and start fresh. At least that way there's a fighting chance of avoiding the death spiral of the LLM guessing at what it's supposed to be doing, as if it hadn't just had a self-inflicted stroke.

Really curious whether this is a universal AI coding problem or a skill issue on my end.