
Comment by wuschel

1 day ago

It is a small model, so what utility can I / Google expect from it? What is the on-board model used for?

It's not a very good small model to be honest.

That said, you might be surprised to learn that some of the models from 3b-9b could probably replace 80% of the things nonvibe coders use chatgpt for.

It's a good idea to run small models locally, if your computer can host them, for privacy and cost-saving reasons. But how can you trust Google to autoinstall one on your machine in 2026? I just couldn't do it.

  • Sure, local models are good, and yes, there's no way we can trust Google.

    We can be positive the entire motivation of Chrome is user-behavior surveillance. There's not a nano-chance in all the multiverses that the Chrome model is doing anything privately. They've gone to extraordinary lengths to accomplish this. It's not for free.

    • It is entirely about user surveillance, as well as pushing their product onto their users because they have the install base. Google Chrome has become Microsoft's IE6 in hostile user behavior.


    • I don't trust them either, but the same Google makes Gemma 4 available to run as locally and privately as you want, and those models are pretty amazing for their size.


    • LLMs are costing Google a ton of money in compute and storage right now. If they can farm any of that off to the users, it makes economical sense.

      But yes, there is a 100% chance that logs will get sent back to Google too.


  • > But how can you trust Google to autoinstall one on your machine

    Why are AI models something I'd be uniquely unable to trust Google to install, compared to all the other code included in Chrome updates? Is your point just that you shouldn't trust Chrome in general?

    • Yes, I would not trust Google or Chrome. They have a history of class-action lawsuits for doing shady things to users. Enabling them to condense data on your machine and transmit it however they want, should they choose to, is suspect to me.

    • Google is probably still sucking up the contents of your LLM requests even with the model running locally.

    • Yeah, so it's unclear why, yet again, everyone is so quick to run for the pitchforks and torches. The model doesn't do anything on its own; it's just sandboxed.

      I'm really tired of such overinflated, shrill ridiculousness against Google. Yes, there are very real tensions with this company, and their ads business is scary as heck.

      But folks don't seem capable of processing duality; they don't seem able to do much but ad-hominem until they pass out. It's really exhausting having such empty energy charging in every single time, and it keeps obstructing any ability to think straight or assess.


  • All that matters is some MBA product manager at Google was celebrated for shipping this. Hooray!

    • Everyone who implemented or approved this should be prosecuted under the Computer Fraud and Abuse Act (18 U.S.C. § 1030). If I were on a jury, I wouldn't hesitate to send them to prison where they belong.


  • > That said, you might be surprised to learn that some of the models from 3b-9b could probably replace 80% of the things nonvibe coders use chatgpt for.

    Really? I'm a total amateur when it comes to local models, but I tried a few in this range using ollama, and they didn't seem to know much about anything, and I couldn't figure out how to get them to search the web or run other tools, so that's where the experiment ended.

    A small local model that can use bash would be a bit of a game-changer for me.

    • The latest small models are now reliable enough at simple tools like web search, I think. It's just that, afaik, none of the user-friendly harnesses like ollama or LM Studio have a real one-click setup flow for this. You'll need to download models and do a fair bit of tool configuration.

    • Local models are improving quickly, so if you keep an eye open you'll find something soon enough. But from experience, I'll warn you that local models can lose the plot very quickly. Their little self-arguments when they get stuck usually come down to:

      - It failed? This must be a mistake, I’ll try it again. It failed? This must be a mistake, I’ll try it again because then I will complete the task (repeat about every six seconds until you rescue it).

      - You know, the best way to deal with a permissions problem is to erase the entire system. That’ll definitely solve those pesky permissions and I’ll complete the task.

  • Which is why I uninstalled Chrome a (short...) while ago and my life went on unbothered.

    • I am amused when people fret about not using Chrome. I get it but… I have literally NEVER used Chrome. Perhaps I just don’t know what I am missing but the web seems to work just fine for me without it?


  • Half of the reason to use local AI is to circumvent the censorship that Google, OpenAI and so on have. I don't want this Google crap on my computer.
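
As a concrete illustration of the "fair bit of tool configuration" mentioned in the subthread above: most harnesses just have the model emit a JSON tool call that your own code dispatches. Here's a minimal sketch; the wire format and tool name are assumptions for illustration, not ollama's or LM Studio's actual protocol.

```python
import json

def web_search(query: str) -> str:
    # Stub standing in for a real web-search tool.
    return f"results for: {query}"

# Registry of tools the model is allowed to call.
TOOLS = {"web_search": web_search}

def dispatch(model_reply: str) -> str:
    """Run one tool call emitted by the model as JSON, e.g.
    '{"tool": "web_search", "args": {"query": "rust"}}'."""
    call = json.loads(model_reply)
    tool = TOOLS.get(call.get("tool"))
    if tool is None:
        raise ValueError(f"model asked for unknown tool: {call.get('tool')}")
    return tool(**call.get("args", {}))
```

The tool's result then gets appended to the conversation as the next message, looping until the model answers without requesting a tool.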

It's based on Gemma 3n, and it's not the best.

I find it works fine for simple classification, translation, and interpretation of images and audio. It can write longer prose, but it's pretty bad at that.

It can also constrain its output to a JSON schema or a regexp, for anything you might want to do with structured data.

I find models of this size (I haven't tested this one specifically) to be very good at simple data extraction from user input. Think of things like parsing the date and time of an event from a description, or parsing a human-typed description of a recurring-event rule.
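
The usual pattern for that kind of extraction is to ask the model for a fixed JSON shape and then validate it strictly in code. A minimal sketch, assuming a prompt contract with `date` and `time` fields (the field names are illustrative, not anything this model ships with):

```python
import json
from datetime import datetime

def parse_event_datetime(model_output: str):
    """Validate a small model's structured answer for an event, e.g.
    '{"date": "2026-03-14", "time": "15:30"}'.  Returns a datetime,
    or None if the output doesn't match the expected shape."""
    try:
        data = json.loads(model_output)
        return datetime.strptime(f'{data["date"]} {data["time"]}',
                                 "%Y-%m-%d %H:%M")
    except (json.JSONDecodeError, KeyError, ValueError, TypeError):
        # Small models garble formats often enough that strict
        # validation with a None fallback is the safer default.
        return None
```

Rejecting malformed output outright, rather than trying to repair it, keeps the failure mode predictable when the model drifts off-format.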

This is considered a large model. I think you might be surprised how many "small" models Chrome has already pulled down onto your disk.

But to answer your question, one of the services that uses a small model is PermissionsAIv4:

""" Use the Permission Predictions Service and the AIv4 model to surface permission notification requests using a quieter UI when the likelihood of the user granting the permission is predicted to be low. Requires `Make Searches and Browsing Better` to be enabled. – Mac, Windows, Linux, ChromeOS, Android """

I ran a fairly large production test of this, and on _every_ measure except privacy it was worse than a free-tier server-hosted LLM.

Not happy about that, as I would like to see more local models, but that's the current state of things.

https://sendcheckit.com/blog/ai-powered-subject-line-alterna...

  • > on _every_ measure except for privacy it was worse than a free tier server hosted LLM

    Would you be able to compare this to other local models in its class and above that would fit on consumer-grade hardware?

> It is a small model, so what utility can I / Google expect from it?

Precedent for shipping models alongside consumer software.

Potentially without consent, if it truly is a silent install.

Something to do with serving more ads. My guess is they will use this to "better target" you, or to drain more information from you for their ads.