Comment by ActorNightly
2 days ago
This MCP stuff is leading devs down the wrong path. We should be focusing on LLMs using self-discovery to figure out information.
I had that opinion too.
You can ask an agent to browse a web page, click a button, and so on. It will work out how to use a browser automation library.
But it's not worth the cost, the time spent waiting, or the inconsistency between implementations.
MCP just offloads that overhead, much like how agents can use bash tools even though they are quite capable of writing an implementation of grep themselves.
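The offloading described above can be sketched as a plain tool registry: the host declares a tool once, and the model only emits a small name-plus-arguments call instead of regenerating automation code each time. This is a hypothetical illustration (the `TOOLS` registry and `click_button` tool are invented for this sketch), not the actual MCP wire protocol:

```python
import json

def click_button(selector: str) -> dict:
    # Stand-in for a real browser-automation call (e.g. via Playwright).
    return {"status": "clicked", "selector": selector}

# Hypothetical registry: the host declares tools up front; the model
# never sees or writes the implementation.
TOOLS = {
    "click_button": {
        "fn": click_button,
        "description": "Click the element matching a CSS selector",
    },
}

def dispatch(call_json: str) -> dict:
    """Route a model-emitted tool call to its predeclared implementation."""
    call = json.loads(call_json)
    return TOOLS[call["name"]]["fn"](**call["arguments"])

# The model emits a tiny JSON call instead of fresh automation code:
result = dispatch('{"name": "click_button", "arguments": {"selector": "#submit"}}')
print(result)  # {'status': 'clicked', 'selector': '#submit'}
```

The cost argument falls out of this shape: the per-task output shrinks from a whole script to a few tokens of JSON, and behavior is consistent because every call hits the same implementation.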
The whole point is that you shouldn't have to worry about implementation. AI should do it for you.
At the end of the day you often need to consider the energy efficiency of a system, which is also reflected in its operating cost. For use cases where that matters, the suggested MCP approach potentially offers large benefits compared to what is probably meant here by "AI". However, the disadvantages of public access discussed in other threads need to be considered as well, so I would expect this to be used only in certain niche cases. Testing and retrofitting non-public websites come to mind.
1 reply →
AI is a very leaky abstraction. You will always be worried.
1 reply →
Can you expand? What does that mean, and why is it the right (or better) path?
Manually coding things is not how we get better AI. For AI to be truly useful at figuring things out (i.e. actually reasoning), one of the core components of a model would be building its own knowledge trees across multimodal information. So when you ask a model to do something, it should figure out how to do it on its own.
I don’t think OP is trying to create better AI. That’s someone else’s job. OP is trying to give current LLMs better ways to interact with websites.
Two different goals.