
Comment by __Joker

3 days ago

"While I continue to believe that many people are going to collectively lose trillions of dollars ultimately pursuing "AI" at this stage"

Can you please explain more why you think so ?

Thank you.

It's a hype cycle, with many of the hypers and deciders having zero idea what AI actually is or how it works. ChatGPT, while amazing, is at its core a token predictor; it cannot ever get to an AGI level that you'd consider competitive with a human, or even with most animals.
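To make the "token predictor" framing concrete, here is a toy sketch: a hypothetical bigram counter (nothing like a real transformer, but the same autoregressive loop) that predicts the most likely next token from the previous one and generates text by appending its own predictions. The corpus and all names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate".split()

# Count bigram frequencies: P(next | current) is proportional
# to how often `next` followed `current` in the corpus.
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def predict_next(token):
    """Greedy prediction: the most frequent follower of `token`."""
    return counts[token].most_common(1)[0][0]

# Autoregressive generation: repeatedly predict and append.
seq = ["the"]
for _ in range(3):
    seq.append(predict_next(seq[-1]))
print(" ".join(seq))
```

The point of the sketch is the loop at the end: the model only ever answers "what token comes next?", and everything else emerges from chaining that one operation.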

And just like every other hype cycle, this one will crash down hard. The crypto crashes were bad enough, though at least gamers got some very cheap GPUs out of all the failed crypto farms back then. This time, so much more money, particularly institutional money, is flowing into AI that we're looking at a repeat of the Lehman Brothers collapse once people wake up and realize they've been scammed.

  • Those glorified token predictors are the missing piece in the puzzle of general intelligence. There is still a long way to go in putting all those pieces together, but I don't think any of the remaining steps are of the same order as "we need a miracle breakthrough".

    That said, I believe this is going one of two ways. One: we use AI to make things materially harder for humans, on a scale from "you don't get this job" to "oops, this is Skynet", with many unpleasant stops in the middle. Given the amount of money going into AI right now and most of the applications I'm seeing hyped, I don't think we'll have any scruples about taking this direction.

    The other way this can go, and Cerebras is a good example, is that we increase our compute capability and the usefulness of our AI to the point where we can fight cancer and stop or reverse aging, both of which are computational problems at this point. But most people either don't realize this is possible or have strong moral objections to this outcome and don't even want to talk about it, so it probably won't happen.

    In simpler words, I think we want to use AI to commit species suicide :-)

    • I'm sure there are more missing pieces.

      We are more than Broca's areas. Our intelligence is much more than linguistic intelligence.

      However, and this is also an important point, we have built language models far more capable than any language model a single human brain can have.

      Makes me shudder in awe of what's going to happen when we add the missing pieces.


  • > And just as every other hype cycle, this one will crash down hard.

    Isn't that an inherent problem with pretty much everything nowadays: crypto, blockchain, AI, even the likes of serverless and Kubernetes, or cloud and microservices in general?

    There's always some hype cycle where the people who are early benefit, while many who chase the hype later lose out when the reality of each technology's actual limitations and non-inflated utility hits. And then, a while later, it all settles down.

    I don't think the current "AI" is special in any way. It's just that everyone tries to get rich quick (or benefit in other ways, as in the microservices example, where you still very much had a hype cycle) without caring about the actual details.

    • > I don't think the current "AI" is special in any way

      As someone who loves to pour ice water on AI hype, I have to say: you can't be serious.

      The current AI tech has opened up paths to develop applications that were impossible just a few years ago. Even if the tech freezes in place, I think it will yield substantial economic value in the coming years.

      It's very different from crypto, the main use case for which appears to be money laundering.


  • While I basically agree with everything you say, I have to add some caveats:

    ChatGPT, while being as far from true AGI as the ELIZA chatbot written in Lisp, is extraordinarily more useful, and is being used for many things that previously required humans to write the bullshit, such as lobbying and propaganda.

    And crypto... right now BTC is at a historical high. It could go even higher. And it will eventually crash again. It's the nature of that beast.

  • Why do you think that an AGI can't be a token predictor?

    • By analogy with human brains: because our own brains are far more than the Broca's areas within them.

      Evolution selects for efficiency.

      If token prediction could work for everything, our brains would also do nothing else but token prediction. Even the brains of fishes and insects would work like that.

      The human brain has dedicated clusters of neurons for several different cognitive abilities, including face recognition, line detection, body parts self perception, 3D spatial orientation, and so on.


    • Because an LLM _by definition_ cannot even do basic maths (well, unless you're OpenAI and cheat your way around it by detecting whether the user is asking a simple math question).

      I'd expect an actually "general" intelligence to be as versatile in intellectual tasks as a human is. LLMs are reasonably decent at repetition, but they cannot infer something completely new from the data they have.


  • All the big LLMs are no longer just token predictors. They are beginning to incorporate memory, chain of thought, and other architectural tricks that use the token predictor in novel ways to produce some startlingly useful output.

    It's certainly the case that an LLM alone cannot achieve AGI. As a component of a larger system though? That remains to be seen. Maybe all we need to do is duct tape a limbic system and memory onto an LLM and the result is something sort of like an AGI.

    It's a little bit like saying that a ball bearing can't possibly ever be an internal combustion engine. While true, it's sidestepping the point a little bit.

I would guess you're not asking a serious question here, but if you were, feel free to contact me; that's why I put my email address in my profile.

  • Really sorry if the question came across as snarky or otherwise. That was not my intent.

    Given all the noise around AI, I really wanted to understand the contrarian view of its monetary aspects.

    Once again, apologies if the question seems frivolous.