Comment by croes

20 days ago

> We're on the precipice of something incredible.

Total dependence on a service?

On a scale that would make big tobacco blush.

  • Big Oil too

    • Personally I’m less bullish on oil as the metaphor given how much of the modern world is underpinned by cheap and ubiquitous oil. If oil disappeared tomorrow, society would collapse — if tobacco disappeared tomorrow, it would make some subset of the population very unhappy for a few weeks.

      Software engineering AI API dependence seems to have already screamed past the tobacco mark but we’re still a long ways away from oil. Though I would bet that we hit it sometime in the next few decades once the bulk of the industry has never written code in any serious capacity.

The quality of local models has increased significantly since this time last year. As have the options for running larger local models.

  • The quality of local models is still abysmal compared to commercial SOTA models. You're not going to run something like Gemini or Claude locally. I have some "serious" hardware with 128G of VRAM and the results are still laughable. If I moved up to 512G, it still wouldn't be enough. You need serious hardware to get both quality and speed. If I can get "quality" at a couple tokens a second, it's not worth bothering.

    They are getting better, but that doesn't mean they're good.

    • Good by what standard? Compared to SOTA today? No they're not. But they are better than the SOTA in 2020, and likely 2023.

      We have a magical pseudo-thinking machine that we can run locally, completely under our control, and instead the goal posts have moved to "but it's not as fast as the proprietary cloud".

  • These takes are terrible.

    1. It costs $100k in hardware to run Kimi 2.5 in a single session at a decent tokens-per-second rate, and it's still not capable of anything serious.

    2. I want whatever you're smoking if you think anyone is going to spend billions training models that can outcompete them, make those models affordable to run, and then open-source them.

    • Quantize it and you can drop a zero from that price.

      How much serious work can it do versus chatgpt3 (SOTA only a few years ago)?
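The "drop a zero" quantization claim above can be sanity-checked with simple arithmetic: weight memory scales linearly with bits per weight. A rough back-of-envelope sketch (the parameter count is an assumption, and this ignores KV cache, activations, and runtime overhead):

```python
# Back-of-envelope: weight-memory footprint at different quantization levels.
# PARAMS is an assumed total parameter count for a large MoE model; actual
# figures vary by model and quantization scheme.

def weight_memory_gb(params: float, bits_per_weight: float) -> float:
    """Approximate GB needed just to hold the weights."""
    return params * bits_per_weight / 8 / 1e9

PARAMS = 1.0e12  # assumption: ~1 trillion total parameters

for label, bits in [("fp16", 16), ("fp8", 8), ("int4", 4)]:
    print(f"{label}: ~{weight_memory_gb(PARAMS, bits):,.0f} GB")
# fp16: ~2,000 GB
# fp8:  ~1,000 GB
# int4: ~500 GB
```

Halving bits per weight halves the memory bill, which is why aggressive quantization can move a model from data-center hardware toward (expensive) workstation territory, at some cost in quality.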

Between the internet, or more generally computers, or even more generally electricity, are we not already?

  • The power companies aren't harvesting data from your core product. Not to mention, they aren't in roughly the same business as you.

    Those things are also regulated as utilities.

Yes this is the issue. We truly have something incredible now. Something that could benefit all of humanity. Unfortunately it comes at $200/month from Sam Altman & co.

  • If that were the final price, with no strings attached and perfect, reliable privacy, then I might consider it. Maybe not for the current iteration, but for what will be on offer in a year or two.

    But as it stands right now, the most useful LLMs are hosted by companies that are legally obligated to hand over your data if the US government decides that it wants it. It's unacceptable.

    • That $200/month price isn’t sustainable either. Eventually they’re going to have to jack it up substantially.

    • > legally obligated to hand over your data if the US government decides that it wants it

      Not to mention they could just sell it to the highest bidder, or simply use it to produce competition and put you out of business. Especially if you're using their service to do the development...

From the beginning the providers have been interchangeable and subject to competition. Do we have reason to believe that this will change?

prefrontal cortex as a service

  • Yup, all these folks claiming AI is the bee's knees are delegating their thinking to a roulette wheel that may or may not give proper answers. The world will become more and more like the movie Idiocracy.