Comment by whizzter

8 hours ago

That doesn't make them incorrect; investors, media, and even many developers have been duped by the impressive mimicry of human language that LLMs represent.

LLM/"AI" tools _will_ continue to revolutionize a lot of fields and make tons of glorified paper pushers jobless.

But they're not much closer to actual intelligence than they were 10 years ago; the singularity-level upheavals that OpenAI et al. are valued on are still far away, and people are beginning to notice.

Spending money today to buy heating elements for 2030 is mostly based on FOMO.

This is a different claim from the one I was responding to, namely that the letter was based on science and the common sense of experts.

If you grant that it wasn't, then we're in agreement, although your stating that people have been "duped" somewhat begs the question.

At any rate, my goal here isn't to respond to every claim AI skeptics are making, only to point out that taking an anti-science view is riskier for Europe than a politician stating that AI will approach human reasoning in 2026. AI has already approached or surpassed human reasoning on many tasks, so that's not a very controversial opinion for a politician to hold.

And it's a completely separate question from whether the market has valued future cash flows of AI companies too highly or whatever debates people want to have over the meaning of intelligence or AGI.

  • You're asserting that they're unscientific by sampling some random signatories.

    Looking through the signatories a bit more closely, there are a bunch of comp-sci professors and PhDs. Some of them have worked directly with neural-network-based methods, and a bunch of others are in adjacent fields related to speech systems (fields I encountered during my studies, which have since been upended by neural networks), so they should also have a fair grasp of what capabilities have been added over the years.

    One of the papers listed in the letter you linked to does seem to cut directly to the argument: the correlations that LLMs store, by successfully encoding knowledge, give people an exaggerated view of AI's capabilities.

    I do agree that we shouldn't base policy on unscientific claims, and that's the main crux, since von der Leyen's statements mostly seem to be parroting Altman's hype (and Altman is primarily an executive with a vested interest in keeping up OpenAI's valuation to justify all the investments).