Comment by josephg

3 days ago

> cost thousands to millions of dollars in tokens

> Even if the price falls by a thousand fold, why would you spend thousands of dollars on tokens to develop an OS when there's already one you can use?

Why do you assume token prices will only fall a thousandfold? I'm pretty sure tokens have fallen by more than that in the last few years already - at least if we're speaking about like-for-like intelligence.

I suspect AI token costs will fall exponentially over the next decade or two, much as Dennard scaling / Moore's law did for CPUs over the last 40 years. Especially given the amount of investment being poured into LLMs at the moment: essentially the entire computing hardware industry is retooling to manufacture AI clusters.
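
To put a rough number on how that compounding plays out, here's a quick sketch; the one-year halving period is an illustrative assumption, not a measured figure.

```python
# Sketch: how a steady halving in like-for-like token cost compounds over time.
# The one-year halving period is an assumption for illustration only.
halving_period_years = 1.0

for years in (5, 10, 20):
    reduction = 2 ** (years / halving_period_years)
    print(f"After {years:2d} years: ~{reduction:,.0f}x cheaper")
```

A yearly halving already compounds to roughly a thousandfold over a decade; a faster decline gets there much sooner.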

If it costs you $1-$10 in tokens to get the AI to make a bespoke operating system for your embedded hardware, people will absolutely do it. Especially if it frees them from supply chain attacks. Linux is free, but Linux isn't well optimized for embedded systems. I think my electric piano runs Linux internally. It takes 10 seconds to boot. Boo to that.
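
For scale, here's a small sketch of how many tokens a $1-$10 budget buys at various per-million-token prices; all the prices below are hypothetical, purely to show the arithmetic.

```python
# Sketch: token budget for a one-off code-generation job.
# All per-million-token prices below are hypothetical illustrations.
budgets_usd = (1, 10)
prices_per_million = (10.0, 1.0, 0.10)  # assumed $/1M tokens

for price in prices_per_million:
    for budget in budgets_usd:
        tokens = budget / price * 1_000_000
        print(f"${budget:>2} at ${price}/1M tokens -> ~{tokens:,.0f} tokens")
```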

Token prices have literally gone up, where are you getting this information from? No one would pay for a bespoke Linux made by a stochastic LLM when security is a concern, even if it cost $10.00, which it never will.

The hardware required to run these things has ballooned in price; there are no efficiencies coming. To run Kimi2.5 at 4-bit you're still spending $100k on hardware, and it's not nearly as reliable as Claude. Agentic tooling has also driven token consumption up to increase revenue, and models are becoming more verbose in their output (wonder why). You're smoking something.

  • > Token prices have literally gone up, where are you getting this information from.

    I said like for like. You can't compare GPT5.2 tokens with GPT3.5 tokens. They're different products.

    You can run local AI models today that compete with early ChatGPT releases for a fraction of what those models cost to use at the time. That's the claim I'm making.