
Comment by botanrice

15 hours ago

While these examples might be easy fodder for criticism, I do feel like this whole idea of talking to an LLM across multiple applications, where anything your pointer is over gives it context, is a pretty powerful and cool one.

I'm imagining a webpage with a link: instead of opening the link to quickly google something, or opening three new tabs based on hyperlinks, I can point at a paragraph or line and ask it to tell me about it.
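The "point at a paragraph and ask about it" idea could be sketched in browser JavaScript. This is a minimal sketch under assumptions: `askLLM` is a hypothetical stand-in for whatever model API such a tool would call, and the prompt shape is invented for illustration.

```javascript
// Pure helper: turn the hovered text plus a user question into a prompt.
function buildPrompt(hoveredText, question) {
  return `Context (under the user's pointer):\n` +
    `"""${hoveredText.trim()}"""\n\n` +
    `Question: ${question}`;
}

// Browser wiring (needs a DOM): find the element under the cursor,
// pull its text, and hand it to the (hypothetical) askLLM callback.
function askAboutPointer(event, question, askLLM) {
  const el = document.elementFromPoint(event.clientX, event.clientY);
  const text = el ? el.textContent : "";
  return askLLM(buildPrompt(text, question));
}
```

A real tool would need to scope the context (the whole paragraph vs. a single word, cross-application capture, etc.), but the core loop is just "element under pointer → text → prompt".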

Maybe I can point at a song on Spotify and have it find me the YouTube video, or vice versa (of course, this is assuming a tool like this wouldn't stay locked into one ecosystem... which it will).

Point is, the concept of talking to the computer with the mouse as a pointer is pretty cool, and I guess a step closer to that whole sci-fi "look at this part of the screen and do something" interaction.

I agree that AI audio interfaces will be the future, but not because they are better UIs for users as we understand the term "user" today. The future users of UI are not users of UI at all; they want nothing to do with learning a UI, which buttons to press, or where to type something. They want to go to the shopping site and, instead of typing anything into a search field, say "Find me some boots for the summer, I wanna look fresh," and then tell it to complete the purchase via voice as well once it finds something. At most they'll still click some filters on the results page and on individual results, but that will be it.

Yeah. We just need 10x more compute. But constant AI analysis of everything is the ultimate direction.