It will refrain from obeying what I say or suggest. Instead, it comes up with its own analogy and starts implementing that logic, even when I already know it's going down the wrong path and have proposed a direction for the same problem.
Many times I've had to explicitly tell it: don't think, don't force your analogy, just implement what I'm putting in front of you. Even that doesn't always stick.
Has anyone else dealt with this? How are you handling it?
```
We're building feature X.
- You might need `a`, `b`, `c`. (libraries, documentation URLs, etc.)
- The requirements are:
- x
- y
- z
```
Any negatives (paths to prevent the agent from going down) go into the requirements.
This would be a prompt to sisyphus if small, or prometheus if big. (Using opencode + oh-my-openagent.)
If I believe the agent won't understand what it is supposed to do, or if there are multiple solutions of which only some are allowed, I add "DO NOT make or edit any business rules before asking me".
I also have a script that forces Claude to generate a summary of work [2] if it hadn't done so on its own.
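The gist linked below is the real script; as a rough illustration of the idea, a Stop hook [0] can check for a summary file and block the agent from finishing until it exists. This is a minimal sketch, not the linked script: the `SUMMARY.md` filename and `check_summary` helper are my own invention, and I'm relying on the documented hook behavior that exit code 2 blocks the stop and feeds stderr back to Claude as an instruction.

```shell
#!/bin/sh
# Hypothetical Stop-hook helper: refuse to let the agent stop until it
# has written a work summary. Filename and function name are made up;
# exit code 2 + stderr is the documented "block and tell the model why"
# convention for Claude Code hooks.
check_summary() {
  # $1: project directory to check for the summary file
  if [ ! -f "$1/SUMMARY.md" ]; then
    # stderr goes back to the model as the reason the stop was blocked
    echo "No SUMMARY.md found: summarize the work you just did before stopping." >&2
    return 2
  fi
  return 0
}

# Usage (from the hook command, e.g. in .claude/settings.json):
#   check_summary "$PWD" || exit $?
```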
[0]: https://code.claude.com/docs/en/hooks
[1]: https://gist.github.com/Looking4OffSwitch/c3d5848935fec5ac3b...
[2]: https://gist.github.com/Looking4OffSwitch/3b13b65e40284be899...