Comment by janalsncm
2 days ago
It’s a bit amusing that so much ink has been spilled over what the definition of an “AI agent” is.
I don’t care. I care what your software can do. I don’t care if it’s called AI or machine learning or black magic. I care if it can accomplish a task reliably so that I don’t have to do it myself or pay someone to do it.
We had the same argument about 3 years ago when everyone started calling things “AI”. Most of those products just use LLMs to generate text, and usually they’ve outsourced all of the interesting technical work to a handful of providers backed by big Web 2.0 companies.
>I don’t care.
The particular problem with loose definitions is that they cause a lot of spilled ink later on.
For example, the term AGI. Or, even deeper, the definition of intelligence itself gets debated again and again, with all the goalpost-moving one expects these days.
Even breaking out simple categories can help, like:
Type I agent: Script driven, uses LLM for intelligent actions.
Type II agent: LLM driven, uses scripts and tools. May still need human input.
Type III agent: Builds a time machine to kill John Connor.
Now we’re talking. That’s a useful framework because it acknowledges there are gradations of independence. It’s not an all-or-nothing thing.
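The Type I / Type II split above is really about who owns the control flow. A minimal sketch (all names hypothetical; `call_llm` is a stub standing in for any real LLM API):

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call (hypothetical)."""
    return f"<llm response to: {prompt[:30]}>"

# Type I: the *script* owns the control flow; the LLM only fills in
# individual "intelligent" steps at fixed points.
def type_one_agent(ticket: str) -> str:
    category = call_llm(f"Classify this ticket: {ticket}")
    reply = call_llm(f"Draft a reply for a {category} ticket: {ticket}")
    return reply  # the script decided the steps, in a fixed order

# Type II: the *LLM* owns the control flow; the script just dispatches
# to whatever tool the model asks for, looping until it declares done.
def type_two_agent(task: str, tools: dict, max_steps: int = 5) -> str:
    history = [task]
    for _ in range(max_steps):
        decision = call_llm("\n".join(history))
        if "DONE" in decision:
            return decision
        # naive tool dispatch: first tool whose name appears in the
        # model's output, else the first tool (illustrative only)
        name = next((n for n in tools if n in decision), next(iter(tools)))
        history.append(tools[name](decision))
    return "needs human input"  # Type II may still escalate
```

With the stub model the Type II loop never sees "DONE", so it runs out of steps and escalates, which is exactly the "may still need human input" caveat in the list above.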
It's fine that you as a user of these systems don't care, but nevertheless this is useful terminology for people looking to design such systems.
agree. a really good definition leads to a really good mental model, which leads to really good design. however, people can get into a penis measuring contest over definitions too, which is often not great