HACKER Q&A
📣 pqdbr

Has Claude Code quality dropped significantly over the last few days?


I've noticed a sharp drop in Claude Code's performance starting about 5 days ago. Curious if others are experiencing this.

Examples I'm seeing:

- Incomplete responses: Asked if 3 files were still needed post-refactoring; got an answer for only 1, the other 2 were ignored completely

- Context failures: Asked for the deploy command after modifying our Ansible script; gave a wrong answer without checking obviously related files

- Code quality regression: Simple refactors that used to work smoothly now require multiple iterations

- Agent execution issues: Agents not triggering without explicit @ mentions. Spent 1+ hour getting a 'cypress-test-runner' agent to work despite clear instructions in CLAUDE.md. Even when triggered, it couldn't follow basic instructions (kept paraphrasing test names instead of returning them unchanged)

Has anyone else noticed the same?


  👤 sanskarix Accepted Answer ✓
this feels familiar - noticed similar patterns with GPT when they roll out changes. sometimes they quietly adjust rate limits or context windows and don't announce it properly.

I've learned to treat these tools like unreliable junior devs. when they work, they're amazing. when they don't, you need fallback workflows. never depend on them for critical path stuff, especially if you're on a deadline.

here's what actually helps: keep old conversation contexts that worked, and if quality tanks, explicitly reference those older examples. also try being more direct - instead of "is X still needed" say "answer yes or no for each: 1. is X needed? 2. is Y needed?" boring but it forces completeness.
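
something like this (just a sketch of the checklist style, file names made up):

    for each file below, answer "yes, still needed" or "no, safe to delete", one line per file:
    1. scripts/old_deploy.sh
    2. lib/legacy_parser.py
    3. tests/fixtures/old_data.json

asking for exactly one line per item makes it obvious when something gets skipped, instead of silently getting an answer for only part of the list.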