However, I'm wondering:
Is the Agent SDK too abstracted or hard to debug?
Has anyone actually used it in a real production app yet?
Would I be better off just implementing the logic myself on top of the plain OpenAI SDK for more control and transparency?
Appreciate any insights.
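For what it's worth, the "implement it yourself" option is less code than it sounds. Here's a minimal sketch of a hand-rolled tool-calling loop on the plain OpenAI Chat Completions API — names like `run_agent` and `tool_impls` are my own, and error handling is omitted:

```python
import json

def run_agent(client, model, messages, tools, tool_impls, max_turns=5):
    """Minimal hand-rolled agent loop on the plain OpenAI SDK.

    `tool_impls` maps tool names to Python callables. The loop keeps
    calling the model, executing any requested tools and feeding the
    results back, until the model answers without a tool call (or we
    hit max_turns).
    """
    for _ in range(max_turns):
        resp = client.chat.completions.create(
            model=model, messages=messages, tools=tools)
        msg = resp.choices[0].message
        if not msg.tool_calls:
            return msg.content            # final answer, no tool requested
        messages.append(msg)              # keep the assistant turn in history
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            result = tool_impls[call.function.name](**args)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
    raise RuntimeError("agent did not produce a final answer")
```

Since the whole loop is ~25 lines you own, debugging is just printing `messages` — which is exactly the transparency argument for skipping the abstraction.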
This is precisely why I created AI-gent Workflows (launched on HN today [0]), which comes with a purpose-built state machine and devtools. Unlike LangGraph, it starts at the lowest layer and everything is state-based. You can time-travel and even modify the state of a live agent.
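To make the state-based idea concrete, here's a toy illustration of the general pattern — snapshot every transition so you can rewind to an earlier step or patch a live state. This is my own sketch of the concept, not the actual AI-gent Workflows API:

```python
import copy

class StatefulAgent:
    """Toy state-based agent: every transition snapshots the full state,
    enabling time travel (rewind to any step) and live modification
    (patch the current state before resuming)."""

    def __init__(self, initial_state):
        self.history = [copy.deepcopy(initial_state)]

    @property
    def state(self):
        return self.history[-1]

    def step(self, transition):
        # Apply a pure transition to a copy, keep old snapshots intact.
        new_state = transition(copy.deepcopy(self.state))
        self.history.append(new_state)
        return new_state

    def rewind(self, index):
        # Time travel: drop everything after snapshot `index`.
        self.history = self.history[: index + 1]

    def patch(self, **updates):
        # Modify the live state in place, like editing a running agent.
        self.state.update(updates)
```

The design choice being argued for is that when every step is a plain, inspectable state snapshot, debugging becomes diffing two dicts rather than stepping through framework internals.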