Comment by plato65
10 hours ago
> So if Bob can do things with agents, he can do things.
I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether their output is actually right.
That's the open question to me: how do people develop the judgment needed to direct and evaluate that output?
There's a long, detailed, oft-repeated answer to your open question in the article.
Namely: if you can't do the work without the AI, you can't tell when it's handed you plausible-sounding bullshit.
So Bob just wasted everyone's time and money.
You can verify by running the code and seeing if it works.
Seriously? The article is about scientists learning to do science, not about programming.
I know we're not supposed to say RTFA, but your comment really takes the cake.