
Comment by poisonborz

9 hours ago

Almost everything requires a UI. There's just nothing faster than quick glances and taps, which is why voice assistants and hand-waving gesture controls never took over. Having an agent code all of those, possibly very complex, interfaces is just impossible without AGI. How would it even work?

- Would the agent go through current app user flows, OpenClaw-style? Wildly insecure, error-prone, and expensive.

- Tapping into some sort of third-party APIs/MCPs? Authed, metered, and documented how, and by which standard, so they can't be abused and hacked?
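To make the second point concrete, here is a minimal sketch of what even the simplest agent-facing tool contract would have to carry: a documented schema, authentication, and metering. Every name in it is hypothetical, not any real MCP SDK:

```python
# Hypothetical sketch of an agent-facing tool contract. All names are
# invented for illustration; this is not a real MCP implementation.

TOOL_DESCRIPTOR = {
    "name": "create_order",                # documented, machine-readable
    "description": "Place an order on behalf of the user.",
    "input_schema": {                      # a standard the agent can rely on
        "type": "object",
        "properties": {"sku": {"type": "string"}, "qty": {"type": "integer"}},
        "required": ["sku", "qty"],
    },
}

VALID_TOKENS = {"token-abc"}               # authed: who is calling?
RATE_LIMIT = 3                             # metered: how often may they call?
_calls: dict[str, int] = {}

def call_tool(token: str, args: dict) -> str:
    """Gatekeep a single tool call: auth, then metering, then schema."""
    if token not in VALID_TOKENS:
        return "denied: unauthenticated"
    _calls[token] = _calls.get(token, 0) + 1
    if _calls[token] > RATE_LIMIT:
        return "denied: rate limit exceeded"
    missing = [k for k in TOOL_DESCRIPTOR["input_schema"]["required"]
               if k not in args]
    if missing:
        return f"rejected: missing {missing}"
    return f"ok: ordered {args['qty']}x {args['sku']}"
```

Even this toy version shows the problem: someone has to build, publish, and maintain each of these contracts per app, per action, before any agent can safely use them.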

The unhyped truth is that LLMs are just wildly more competent autocomplete, and no such disruption is in sight. The status quo of developers and users mostly remains.