The terminology of AI is wrong, starting with the word “agent.”
At minimum, an agent has beliefs: not hunches, not vibes, but quantifiable representations of what it thinks is true and how certain it is. An agent has goals: not a prompt that says “be helpful,” but an objective function it’s trying to maximise. And an agent decides: not by asking a language model what to do next, but by evaluating its options against its goals in light of its beliefs.

By this standard, the systems we’re calling “AI agents” are none of these things.
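To make that standard concrete, here is a minimal sketch in Python. The umbrella scenario, the state names, and every number are invented for illustration; the structure is the classical decision-theoretic one, with beliefs as a probability distribution, a goal as a utility function, and a decision rule that maximises expected utility.

```python
# A minimal belief-goal-decision agent. The umbrella scenario and all
# numbers below are hypothetical, chosen only to make the structure visible.

# Beliefs: a probability distribution over world states, with explicit
# certainty attached, not a hunch.
beliefs = {"rain": 0.7, "sun": 0.3}

# Goals: an objective (utility) function over (action, state) outcomes
# that the agent tries to maximise.
utility = {
    ("umbrella", "rain"): 1.0,     # dry and prepared
    ("umbrella", "sun"): 0.6,      # dry but needlessly encumbered
    ("no_umbrella", "rain"): 0.0,  # soaked
    ("no_umbrella", "sun"): 1.0,   # light and lucky
}

def expected_utility(action: str) -> float:
    """Score an action against the goal, weighted by the agent's beliefs."""
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

def decide(actions: list[str]) -> str:
    """Choose the option whose expected utility is highest."""
    return max(actions, key=expected_utility)

print(decide(["umbrella", "no_umbrella"]))  # -> umbrella (EU 0.88 vs 0.30)
```

With 70% belief in rain, the expected utilities come out to 0.88 for carrying the umbrella and 0.30 for leaving it, so the agent carries it; change the beliefs or the utilities and the decision changes with them. That is the point of the definition: the choice falls out of beliefs and goals, not out of asking a model what to do next.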