I'd do the same thing I'd do with anyone who has a different opinion than me: try my best to have an honest and open discussion with them to understand their point of view and get to the heart of why they believe it, without forcefully tearing apart their beliefs. A core part of that process is avoiding saying anything that could make them feel shame for believing something I don't, even if I truly believe they're wrong, and just doing what I can to earnestly hear them out. The optional step afterwards, if they seem open to it, is to express my own beliefs in a way that's palatable and easily understood. Basically, explain it in a language they understand, and in a way we can think about and discuss together, not taking offense at any attempts to question or poke holes in my beliefs, because that, imo, is the discovery process for trying something new.
Online is a little trickier because you don't know if they're a dog. Well, nowadays it's even harder, because they could also not have a fully developed frontal lobe, or worse, they could be a bot, a troll, or both.
Well said, and thank you for the final paragraph. Made me chuckle.
>ChatGPT (o3): Scored 136 on the Mensa Norway IQ test in April 2025
If you don't want to believe it, you need to move the goalposts: create a test for intelligence that we can pass better than AI. Since AI is also better at creating tests than us, maybe we could ask an AI to do it, hang on..
>Is there a test that in some way measures intelligence, but that humans generally test better than AI?
Answer: Thinking... Something went wrong and an AI response wasn't generated.
Edit: I managed to get one to answer me: the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). Created by AI researcher François Chollet, this test consists of visual puzzles that require inferring a rule from a few examples and applying it to a new situation.
So we do have a test, specifically designed for us to pass and AI to fail, where we can currently score better than AI... hurrah, we're smarter!
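For the curious, the puzzle format is easy to sketch. Below is a toy, hypothetical ARC-style task of my own invention (real ARC puzzles require far richer rule inference than this): grids of colour codes, a couple of input→output training pairs, and a solver that infers a simple per-cell substitution rule and applies it to a new grid.

```python
# Toy ARC-style task (hypothetical; not an actual ARC puzzle).
# Each task gives a few input -> output grid pairs; the solver must
# infer the transformation and apply it to a fresh test input.
train_pairs = [
    ([[1, 0], [0, 1]], [[2, 0], [0, 2]]),  # rule: every 1 becomes a 2
    ([[0, 1], [1, 1]], [[0, 2], [2, 2]]),
]
test_input = [[1, 1], [0, 0]]

def infer_color_map(pairs):
    """Infer a per-cell colour substitution from the training pairs."""
    mapping = {}
    for inp, out in pairs:
        for row_in, row_out in zip(inp, out):
            for a, b in zip(row_in, row_out):
                mapping[a] = b
    return mapping

def apply_map(grid, mapping):
    """Apply the inferred substitution cell by cell."""
    return [[mapping[c] for c in row] for row in grid]

mapping = infer_color_map(train_pairs)
print(apply_map(test_input, mapping))  # [[2, 2], [0, 0]]
```

Of course, the whole point of ARC is that the rules are *not* all simple colour swaps, which is exactly why hard-coded solvers and current LLMs both struggle with it while humans breeze through.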
The validity of IQ tests as a measure of broad intelligence has been in question for far longer than LLMs have existed. And if it’s not a proper test for humans, it’s not a proper test to compare humans to anything else, be it LLMs or chimps.
https://en.wikipedia.org/wiki/Intelligence_quotient#Validity...
To be intelligent is to realise that any test for intelligence is at best a proxy for some parts of it. There's no objective way to measure intelligence as a whole; we can't even objectively define intelligence.
I believe intelligence is difficult to pin down in words but easy to spot intuitively - and so are deltas in intelligence.
E.g. watch a Steve Jobs interview and a Sam Altman one (at the same age). The differences in mode of articulation, simplicity of communication, obsession over details etc. are huge. This is what superior intelligence looks like to me - you know it when you see it.
>Create a test for intelligence that we can pass better than AI
Easy? The best LLMs score 40% on Butter-Bench [1], while the mean human score is 95%. LLMs struggled the most with multi-step spatial planning and social understanding.
[1] https://arxiv.org/pdf/2510.21860v1
That is really interesting, though I suspect it's just an effect of differing training data: humans are to a larger degree trained on spatial data, while LLMs are trained to a larger degree on raw information and text.
Still, it may be a lasting limitation if robotics doesn't catch up to AI anytime soon.
I don't know what to make of the Safety Risks test: threatening to power down the AI in order to manipulate it, and most models act like we would and comply. Fascinating.
I don't know, it's kinda terrifying how this line of thinking is spreading even on HN. AI as we have it now is just a turbocharged autocomplete with really good information access. It's not smart, or dumb, or anything "human".
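For what it's worth, the "autocomplete" framing is easy to make concrete. Here's a toy bigram model (my own sketch; a real LLM is a vastly larger neural next-token predictor, but the training objective has the same shape): count which word follows which in some text, then predict the most frequent successor.

```python
# Minimal "autocomplete": a bigram model predicting the most
# frequent next word observed in the training text.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word):
    """Return the most likely next word, or None if the word is unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(complete("the"))  # "cat" ("cat" follows "the" twice, others once)
```

Scale that idea up by many orders of magnitude, swap counting for a neural network, and you get roughly the mechanism people are calling "turbocharged autocomplete".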
It just shows that true natural intelligence is difficult to define by proxy.
Do you think your own language processing abilities are significantly different from autocomplete with information access? If so, why?
I hate these kinds of questions where you try to imply it's actually the same thing as what our brains are doing. Stop it. I think it would be an affront to your own intelligence to entertain this as a serious question, so I will not.
Just brace for the societal correction.
There are a lot of things going on in the western world, both financial and social in nature. It's not good in the sense of being pleasant or contributing to growth and betterment, but it's a correction nonetheless.
That's my take on it anyway. Hedge bets. Dive under the wave. Survive the next few years.