I've been building something along the same lines [2]. I'd define an agent as a piece of software that can autonomously reason over contextual information, follow a non-predefined path to achieve an outcome, and self-correct along the way.
Most of the "agents" people build today have their control flow encoded in some kind of graph. I don't think this will yield useful results as reasoning capability improves. I think that setting the constraints via tool calling and letting the control flow be dynamic (with a human in the loop) is the way to go.
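To make the contrast concrete, here's a minimal sketch of the tool-calling shape I mean: no predefined graph, just a set of tools the model can pick from each turn, with a human approval gate before any call runs. The model is stubbed out here purely for illustration; the tool names and the stub's decisions are made up.

```python
# Tools define the constraints; the model chooses the path through them.
TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "summarize": lambda text: text[:20] + "...",
}

def stub_model(history):
    # Stand-in for a real LLM call: returns (tool, argument),
    # or None once it decides the outcome is achieved.
    if not history:
        return ("search", "agent design")
    if len(history) == 1:
        return ("summarize", history[-1])
    return None

def run_agent(approve=lambda tool, arg: True):
    history = []
    # No graph: each iteration, the model picks the next tool call
    # based on everything observed so far.
    while (action := stub_model(history)) is not None:
        tool, arg = action
        if not approve(tool, arg):  # human in the loop
            break
        history.append(TOOLS[tool](arg))
    return history
```

The point is that the loop itself is trivial; all the "control flow" lives in the model's per-turn decisions, bounded by which tools exist and by the `approve` gate.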
[1] https://www.anthropic.com/research/building-effective-agents
[2] https://www.inferable.ai/blog/posts/functions-as-ai-agents