Comment by krainboltgreene

6 hours ago

Man, what are we supposed to do with people who think the above?

>ChatGPT (o3): Scored 136 on the Mensa Norway IQ test in April 2025

If you don't want to believe it, you need to move the goalposts: create a test for intelligence that we can pass better than AI. Since AI is also better at creating tests than us, maybe we could ask an AI to do it. Hang on...

>Is there a test that in some way measures intelligence, but that humans generally test better than AI?

Answer: Thinking... "Something went wrong and an AI response wasn't generated."

Edit: I managed to get one to answer me: the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). Created by AI researcher François Chollet, this test consists of visual puzzles that require inferring a rule from a few examples and applying it to a new situation.
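To make that concrete, here's a toy sketch of what an ARC-style task looks like. This is a made-up illustrative example, not an actual ARC-AGI puzzle, and the rule chosen (rotate the grid 180 degrees) is just an assumption for illustration:

```python
# Toy illustration of an ARC-style task (hypothetical example, not a real
# ARC-AGI puzzle): a task gives a few input -> output grid pairs, and the
# solver must infer the transformation and apply it to a new input grid.

train_pairs = [
    ([[1, 0], [0, 0]], [[0, 0], [0, 1]]),
    ([[2, 3], [0, 0]], [[0, 0], [3, 2]]),
]
test_input = [[5, 0], [0, 7]]

def rotate_180(grid):
    # Hypothesized rule: reverse the row order, then reverse each row.
    return [row[::-1] for row in grid[::-1]]

# A human (or a solver) proposes a rule, then checks it against every
# training pair before trusting it on the test input.
assert all(rotate_180(inp) == out for inp, out in train_pairs)

print(rotate_180(test_input))  # -> [[7, 0], [0, 5]]
```

Humans tend to find this kind of few-shot rule induction easy; it's exactly the step (forming and verifying an abstract hypothesis from two or three examples) that ARC-AGI is designed to probe.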

So we do have a test specifically designed for humans to pass and AI to fail, and we currently do pass it better than AI... hurrah, we're smarter!

  • To be intelligent is to realise that any test for intelligence is at best a proxy for some parts of it. There's no objective way to measure intelligence as a whole; we can't even objectively define intelligence.

  • >Create a test for intelligence that we can pass better than AI

    Easy? The best LLMs score 40% on Butter-Bench [1], while the mean human score is 95%. LLMs struggled the most with multi-step spatial planning and social understanding.

    [1] https://arxiv.org/pdf/2510.21860v1

I'd do the same thing I'd do with anyone who has a different opinion than me: try my best to have an honest and open discussion with them, to understand their point of view and get to the heart of why they believe it, without forcefully tearing apart their beliefs. A core part of that process is avoiding saying anything that could cause them to feel shame for believing something that I don't, even if I truly believe they are wrong, and just doing what I can to earnestly hear them out. The optional step afterwards, if they seem open to it, is to express my own beliefs in a way that's palatable and easily understood. Basically, explain it in a language they understand, in a way we can think about and discuss together, without taking offense at any attempts to question or poke holes in my beliefs, because that, in my opinion, is the discovery process for trying something new.

Online is a little trickier because you don't know if they're a dog. Well, nowadays it's even harder, because they could also not have a fully developed frontal lobe, or worse, they could be a bot, a troll, or both.

I don't know, it's kinda terrifying how this line of thinking is spreading, even on HN. AI as we have it now is just a turbocharged autocomplete with really good information access. It's not smart, or dumb, or anything "human".

  • Do you think your own language processing abilities are significantly different from autocomplete with information access? If so, why?

Just brace for the societal correction.

There's a lot going on in the Western world, both financial and social in nature. It's not good in the sense of being pleasant or contributing to growth and betterment, but it's a correction nonetheless.

That's my take on it anyway. Hedge bets. Dive under the wave. Survive the next few years.