Comment by 2ndorderthought

1 day ago

It's not a very good small model to be honest.

That said, you might be surprised to learn that some of the models from 3b-9b could probably replace 80% of the things nonvibe coders use chatgpt for.

It's a good idea to run small models locally, if your computer can host them, for privacy and cash-saving reasons. But how can you trust Google to autoinstall one on your machine in 2026? I just couldn't do it.

Sure, local models good and yes, there's no way we can trust Google.

We can be positive the entire motivation of Chrome is user behavior surveillance. There's not a nano-chance in all the multiverses that the Chrome model is doing anything privately. They've gone to extraordinary lengths to accomplish this. It's not for free.

  • It is entirely about user surveillance, as well as pushing their product onto their users because they have the install base. Google Chrome has become the Microsoft IE6 of hostile user behavior.

    • If Google were focused on surveillance, why haven't they been collecting keystroke data (like grammarly) for years?

    • You either die a hero or live long enough to see yourself become a villain.

      What did we expect when they dropped "don't be evil" from their company values?


  • I don't trust them either, but the same Google makes Gemma 4 available to run as locally and privately as you want, and those models are pretty amazing for their size.

    • Both can be true: they give a nice local model so you find it useful AND the chrome harness captures every token in and out for exfiltration.

  • LLMs are costing Google a ton of money in compute and storage right now. If they can farm any of that out to the users, it makes economic sense.

    But yes, there is a 100% chance that logs will get sent back to Google too.

    • > farm

      Ooh, this is interesting. There's nothing stopping them from sending jobs down to local machines. That's some 3 billion nodes. We went through this with coin mining and spam botting.

      Nothing stopping it except your ire if it's discovered.

> But how can you trust Google to autoinstall one on your machine

Why are AI models something I'd be uniquely unable to trust Google to install, compared to all the other code included in Chrome updates? Is your point just that you shouldn't trust Chrome in general?

  • Yes, I would not trust Google or Chrome. They have a history of class action lawsuits for doing shady things to users. Enabling them to condense data on your machine and transmit it however they want, should they choose to, is suspect to me.

  • Google is probably still sucking up the contents of your LLM requests even with the model running locally.

  • Yeah, so unclear why yet again everyone is so quickly running for the pitchforks & torches. The model doesn't do anything, it's just a sandbox.

    I'm really tired of such overinflated, ridiculous shrillness against Google. Yes, there are very real tensions with this company, and their ads business is scary as heck.

    But folks don't seem capable of processing duality; they don't seem able to do much but ad-hominem until they pass out. It's really exhausting having such empty energy charging in every single time, and it keeps obstructing any ability to think straight or assess.

    • I was waiting for Google to pull a local LLM onto Chrome/Android devices. It opens up some revenue streams that weren't easily possible before: for example the often memed "I was talking about cigars with my wife one single time and now all I see are adsense ads for cigars" gets much easier with a local model doing speech to text and topic classification.

    • > Yeah, so unclear why yet again everyone is so quickly running for the pitchforks & torches.

      Cause everyone loves a good bonfire and a fresh hot roast.

    • > The model doesn't do anything, it's just a sandbox.

      Doesn't that make it worse? They forced everyone to download 4GB of crap for nothing. They could have done one of two things:

      (1) bundle the model with the application so you can tell ahead of time you're signing up for 4GB of bandwidth usage or

      (2) make downloading the model some kind of opt-in thing.

      Either of those would have worked. Just because you can easily tolerate 4GB of unplanned bandwidth usage doesn't mean everyone who can't is wrong.

    • The point is that what you're "sick of" isn't actually authentic human thought, but in reality you're responding to a recent european-driven propaganda campaign with the goal of deriding anything and everything related to US tech.

All that matters is some MBA product manager at Google was celebrated for shipping this. Hooray!

  • Everyone who implemented or approved this should be prosecuted under the Computer Fraud and Abuse Act (18 U.S.C. § 1030). If I was on a jury, I wouldn't hesitate to send them to prison where they belong.

    • A fair and impartial jury is a fundamental part of freedom. I genuinely cannot believe that we have been reduced to wanting to destroy the jury system to punish companies we don’t agree with. At this point, this is less activism and more weaponized disrespect for fundamental freedoms.


> That said, you might be surprised to learn that some of the models from 3b-9b could probably replace 80% of the things nonvibe coders use chatgpt for.

Really? I'm a total amateur when it comes to doing anything with local models, but I tried a few in this range using ollama, and they didn't seem to know much about anything. I also couldn't figure out how to get them to search the web or run other tools, so that was where the experiment ended.

A small local model that can use bash would be a bit of a game-changer for me.

  • The latest small models are now reliable enough at simple tool use, like web search, I think. It's just that, afaik, none of the user-friendly harnesses like ollama or LMStudio have a real one-click setup flow for this. You'll need to download models and do a fair bit of tool configuration.

  • Local models are improving quickly so if you keep an eye open you’ll find something soon enough. But from experience, I’ll warn you that local models can lose the plot very quickly. Their little self arguments when they get stuck usually come down to:

    - It failed? This must be a mistake, I’ll try it again. It failed? This must be a mistake, I’ll try it again because then I will complete the task (repeat about every six seconds until you rescue it).

    - You know, the best way to deal with a permissions problem is to erase the entire system. That’ll definitely solve those pesky permissions and I’ll complete the task.
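> You'll need to download models and do a fair bit of tool configuration.

For what it's worth, the local half of that wiring is not much code. Here's a minimal, hypothetical sketch of a "run bash" tool: a schema in the OpenAI-style function format (which ollama's chat API also accepts) plus a dispatcher that routes a model-emitted tool call to a real shell. The names `run_bash` and `dispatch` are my own, not from any harness; the missing piece is the loop that feeds the output back to the model.

```python
import subprocess

# OpenAI-style function schema you would hand to the harness / chat API.
# Purely illustrative; field names follow the common "function" format.
BASH_TOOL = {
    "type": "function",
    "function": {
        "name": "run_bash",
        "description": "Run a shell command and return its output",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}

def run_bash(command: str) -> str:
    """Execute the command and return combined stdout/stderr."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=30
    )
    return result.stdout + result.stderr

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    name = tool_call["function"]["name"]
    if name == "run_bash":
        return run_bash(tool_call["function"]["arguments"]["command"])
    raise ValueError(f"unknown tool: {name}")
```

Obviously, handing a flaky 3B model an unsandboxed shell is exactly how you get the "erase the entire system" failure mode described above, so you'd want an allowlist or container around `run_bash` in practice.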

Which is why I uninstalled Chrome a (short...) while ago and my life went on unbothered.

  • I am amused when people fret about not using Chrome. I get it but… I have literally NEVER used Chrome. Perhaps I just don’t know what I am missing but the web seems to work just fine for me without it?

Half of the reason to use local AI is to circumvent the censorship that Google, OpenAI and so on have. I don't want this Google crap on my computer.