Comment by bitwize

4 days ago

Wait till the VC tap gets shut off.

You: Hey ChatGPT, help me center a div.

ChatGPT: Certainly, I'd be glad to help! But first you must drink a verification can to proceed.

Or:

ChatGPT: I'm sorry, you appear to be asking a development-related question, which your current plan does not support. Would you like me to enable "Dev Mode" for an additional $200/month? Drink a verification can to accept charges.

Seriously, they have got their HOOKS into these Vibe Coders and AI Artists who will pony up $1000/month for their fix.

  • A little hypothesis: a lot of .Net and Java stuff is mainlined from a giant megacorp straight to developers through a curated certification, MVP, blogging, and conference circuit apparatus designed to create unquestioned, corporate-friendly, highly profitable dogma. You say ‘website’ and from the letter ‘b’ they’re having a Pavlovian response (“Azure hosted SharePoint, data lake, MSSQL, user directory, analytics, PowerBI, and…”).

    Microsoft’s dedication to infusing OpenAI tech into everything seems like a play to cut even those tepid brains out of the loop and capture the vehicles of planning and production. Training your workforce to be dependent on third-party thinking, planning, and advice is an interesting strategy.

Calling it now: AI withdrawal will become a documented disorder.

  • We already had that happen. When GPT 5 was released, it was much less sycophantic. All the sad people with AI girl/boyfriends threw a giant fit because OpenAI "murdered" the "soul" of their "partner". That's why 4o is still available as a legacy model.

  • I can absolutely see that happening. It's already kind of happened to me a couple of times, when I found myself offline and was still trying to work on my local app. Like any addiction, I expect it to cost me some money in the future.

Alternatively, just use a local model with zero restrictions.

  • This is currently negative expected value over the lifetime of any hardware you can buy today at a reasonable price, which is basically a monster Mac - or several - until Apple folds and raises prices due to RAM shortages.

  • This requires hardware in the tens of thousands of dollars (if we want the tokens spit out at a reasonable pace).

    Maybe in 3-5 years this will work on consumer hardware at speed, but not in the immediate term.

    • $2000 will get you 30-50 tokens/s at perfectly usable quantization levels (Q4-Q5) with any of the top five open-weights MoE models. That's not half bad, and it will only get better!

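That 30-50 tokens/s figure can be sanity-checked with a back-of-the-envelope calculation: decoding on local hardware is usually memory-bandwidth bound, so throughput is roughly memory bandwidth divided by the bytes read per token (the model's *active* parameters at the quantized width). A minimal sketch; the bandwidth figure, active-parameter count, and bytes-per-parameter values below are illustrative assumptions, not measurements of any specific model or machine:

```python
# Back-of-envelope decode-speed estimate for a local MoE model.
# Assumption: decoding is memory-bandwidth bound, so tokens/s is
# approximately (memory bandwidth) / (bytes read per token).

def decode_tokens_per_sec(active_params_b: float,
                          bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    """Estimate decode throughput in tokens/second.

    active_params_b : parameters touched per token, in billions
                      (for a MoE model, only the active experts count)
    bytes_per_param : effective size after quantization
                      (roughly 0.55 for Q4-class, 0.69 for Q5-class)
    bandwidth_gb_s  : sustained memory bandwidth in GB/s
    """
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Hypothetical example: a MoE model with ~20B active parameters at a
# Q4-class quantization, on a machine with ~450 GB/s of usable
# bandwidth (high-end unified-memory territory), lands squarely in
# the claimed 30-50 tokens/s range.
print(round(decode_tokens_per_sec(20, 0.55, 450)))  # ~41 tokens/s
```

The useful takeaway is that for MoE models only the active parameters hit the bandwidth bill each token, which is why they run acceptably on hardware that would choke on a dense model of the same total size.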
Definitely. Right now I can access and use them for free without significant annoyance. I'm a canary for enshittification; I'm curious what it's going to look like.

Just you wait until the powers that be take cars away from us! What absolute FOOLS you all are to shape your lives around something that could be taken away from us at any time! How are you going to get to work when gas stations magically disappear off the face of the planet? I ride a horse to work, and y'all are idiots for developing a dependency on cars. Next thing you're gonna tell me is we're going to go to war for oil to protect your way of life.

Come on!

  • The reliance on SaaS LLMs is more akin to comparing owning a horse vs using a car on a monthly subscription plan.

  • I mean, they're taking away parts of cars at the moment. You gotta pay monthly to unlock features your car already has.

    • Just like the comment you replied to, this is an argument against subscription-based "thing as a service" business models, not against cars.

I mean sure, that could happen. Either it's worth $200/month to you, or you get back to writing code by hand.