
Comment by ACCount37

21 hours ago

Today's LLMs advance through incremental improvements.

There is a finite number of incremental improvements left between the performance of today's LLMs and the limits of human performance.

This alone should give you second thoughts on "AI doomerism".

That is not necessarily true. That would be like arguing there is a finite number of improvements between the rockets of today and Star Trek ships. To get warp technology you can't simply keep improving combustion engines; eventually you have to switch to something else.

The same could apply to LLMs: there may be a hard wall that the current approach can't breach.

  • If that's the case, then, what's the wall?

    The "walls" that stopped AI decades ago stand no more. NLP and CSR were thought to be the "final bosses" of AI by many - until they fell to LLMs. There's no replacement.

    The closest thing to a "hard wall" LLMs have is probably online learning? And even that isn't really a hard wall, because LLMs are good at in-context learning, which does many of the same things, and they can do things like set up fine-tuning runs on themselves via the CLI.
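
    A minimal sketch of that in-context-learning point, assuming a hypothetical complete() stub in place of a real model API; the prompt format and the sentiment-style labels are illustrative, not anything from this thread:

      # Toy illustration: "learning" by prepending labelled examples to the
      # prompt, with no weight updates at all.

      def complete(prompt: str) -> str:
          """Placeholder for a real model call; returns a canned answer so this runs as-is."""
          return "positive"

      def classify_with_context(examples: list[tuple[str, str]], query: str) -> str:
          """Few-shot prompt: every example is injected as context, not trained on."""
          shots = "\n".join(f"Input: {text}\nLabel: {label}" for text, label in examples)
          prompt = f"{shots}\nInput: {query}\nLabel:"
          return complete(prompt)

      examples = [
          ("the build failed again", "negative"),
          ("deploy went out clean", "positive"),
      ]
      # The "online" update is just appending a new observation to the context.
      examples.append(("rollback finished without errors", "positive"))

      print(classify_with_context(examples, "tests are green"))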

    • Agree completely with your position.

      I do think, though, that the lack of online learning is a bigger drawback than a lot of people believe, because it can often be hidden or obfuscated by training for the benchmarks.

      This becomes very visible when you compare performance on more specialized tasks that LLMs were not specifically trained for, e.g. playing games like Pokemon or Factorio: general-purpose LLMs lag far behind humans on those.

      But it's only a matter of time until we solve this IMO.


    • Hallucinations are IMO a hard wall. They have gotten slightly better over the years, but you still get random results that may or may not be true - or rather, that fall anywhere between 0% and 100% true, depending on which part of the answer you look at.


    • The wall is training data. Yes, we can generate more and more post-training examples. No, we can never make enough. And there are diminishing returns to that process.

    • > If that's the case, then, what's the wall?

      I didn’t say that is the case, I said it could be. Do you understand the difference?

      And if it is the case, it doesn’t immediately follow that we would know right now what exactly the wall would be. Often you have to hit it first. There are quite a few possible candidates.


Pole-vaulting records improve incrementally too, and there is a finite distance left to the moon. Without deep understanding, experience, and numbers to back up the opinion, any progress can seem like it is about to reach an arbitrary goal.

AI doomerism was sold by the AI companies as a sort of "learn it or you'll fall behind". But they didn't think it through: AI is now widely seen as a bad thing by the general public (except programmers who think they can deliver slop faster). Who will be buying a $200/month subscription once they get laid off? I'm not sure the strategy of spreading fear was worth it. I also don't think this tech can ever be profitable. I hope it keeps burning money at this rate.

  • The employer buys the AI subscription, not the employee. An employee who sends company code to an external AI is asking for trouble.

    In the case of contractors, the contractors buy the subscription, but they need authorization to give the AI access to the code. That's obvious when the code is owned by the customer, but there may be NDAs even if the contractor owns the code.

    • If companies end up with very few employees, the AI companies will be expecting regular people to pay for AI access. But who would pay $200/month for the thing that took their job? With the strategy of cutting employees, the AI companies also lose much more in revenue.