Comment by ldng

3 days ago

Yeah. Or you just ignore the bullshit until the bubble bursts. Then we'll see what's left, and it will not be what the majority thinks.

There seems to be a lot of churn, like how JS was. We can just wait and see what the React of LLMs ends up being.

The "bubble" is in the financial investment, not in the technology. AI won't disappear after the bubble bursts, just like the web didn't disappear after 2000. If anything, bursting the financial bubble will most likely encourage researchers to experiment more, trying a larger range of cheaper approaches, and do more fundamental engineering rather than just scaling.

AI is here to stay, and the only thing that can stop it at this stage is a Butlerian jihad.

  • AI has been here long before LLMs… also, I dislike how people seemingly tie the two terms together as one.

  • I maintain that the web today is not what people thought it would be in 1998. The tech has its uses; it's just not what the snake oil sellers are making it out to be. And talking about a Butlerian jihad is borderline snake oil selling.

    • Interesting. What particular 1998 claims do you have in mind that were not (at least approximately) fulfilled?

  • Borg logic consists of framing matters of choice as "inevitable". As long as those with power convince everyone that technological implementation is "inevitable", people will passively accept their self-serving and destructive technological mastery of the world.

    The framing allows the rest of us to get ourselves off the hook. "We didn't have a choice! It was INEVITABLE!"

    And so, we have chosen.

    • But history shows that it is inevitable. Can you give me an example of a single useful technology that humans ever stopped developing because of its negative externalities?

      > "We didn't have a choice! It was INEVITABLE!"

      There is no "we". Call it the tragedy of the commons, or Moloch, or whatever you want, but I don't see how you can convince every single developer and financial sponsor on the planet to stop using and developing this (clearly very useful) tech. And as long as you can't, it's socially inevitable.

      If you want a practice run, see if you can stop everyone in the world from smoking tobacco, which is so much more clearly detrimental. If you manage that, you might have a small chance at stopping implementation of AI.