Comment by _def

7 hours ago

That doesn't make any sense, as it presumes the arguments here are not based on science and the common sense of experts, which is not the case.

The letter is not based on science or the common sense of experts.

You can read the letter here https://www.iccl.ie/wp-content/uploads/2025/11/20251110_Scie...

It doesn't make any positive claims other than that a statement from a budget speech relied on marketing "driven by profit-motive and ideology" and "manifestly bound with their financial imperatives". That's exactly the same AI-skeptic line of attack currently being played out in forums and social media.

If you look at the signatories and randomly sample a few, you'll find a lot of people in social sciences, gender studies, cultural studies, branches of AI critique (e.g. AI safety), linguistics, and the occasional cognitive scientist. These aren't people with the technical expertise to evaluate the current state of AI, however impressive their credentials are in their own fields.

  • That doesn't make them incorrect; investors, media, and even many developers have been duped by the impressive mimicry of human language that LLMs represent.

    LLM/"AI" tools _will_ continue to revolutionize a lot of fields and make tons of glorified paper pushers jobless.

    But they're not much closer to actual intelligence than they were 10 years ago; the singularity-level upheavals that OpenAI et al. are valued on are still far away, and people are beginning to notice.

    Spending money today to buy heating elements for 2030 is mostly based on FOMO.

    • This is a different claim than the one I was responding to, which was that the letter is based on science and the common sense of experts.

      If you grant that it isn't, then we're in agreement, although stating that people have been "duped" somewhat begs the question.

      At any rate, my goal here isn't to respond to every claim AI skeptics are making, only to point out that taking an anti-science view is riskier for Europe than a politician stating that AI will approach human reasoning in 2026. AI has already approached or surpassed human reasoning in many tasks, so that's not a very controversial opinion for a politician to hold.

      And it's a completely separate question from whether the market has valued future cash flows of AI companies too highly or whatever debates people want to have over the meaning of intelligence or AGI.
