Comment by jeroenhd

17 hours ago

The video they show (which is probably exaggerated by cutting out LLM generation time) is pretty sci-fi. I don't know how it works in practice, but it looks fun to try out. If this could run locally, I'd love to have a feature like that.

Most people don't really seem to care about data collection when it comes to AI usage. A lot of people who will feed every detail of their lives to Gemini/ChatGPT/Bing/Claude/Mistral/shady bargain-bin clusters across the internet will probably be fine with Gemini as long as it doesn't interfere unnecessarily.

It probably works similarly to how Gemini has worked on Android for a while now.

You can point at or select anywhere on the screen and it understands and searches the context. If you select a text block, even text inside an image, it lets you copy the text or search for it online. Otherwise it can search the image.

I use it often. It's intuitive and fast even on non-flagship phones.

I'd wager their A/B tests went well enough to warrant a port from phones to their new "Chromebook".

  • Their video is completely different from what Gemini does now. It analyses mouse movements, like circling things, underlining them, or pointing at things to indicate where they should go. It's a lot like the interfaces you might see in sci-fi movies, where generic gestures are understood in context in a way that modern computers can't handle.

> Most people don't really seem to care about data collection when it comes to AI usage.

That assumes you intended to use AI in the first place. People are going to accidentally upload random private content to Google.

  • If you buy the Google Gemini AI Agentic Laptop or whatever they will market this as, you're going to want to try AI. What else is the point of buying a Chromebook, as nice and slick as it may look, when similar or even better alternatives exist?