
Comment by futuraperdita

8 hours ago

What worries me is that _a lot of people seem to see LLMs as smarter than themselves_ and anthropomorphize them into a sort of human-exact intelligence. The worst-case scenario of Utah's law is that once the disclaimer that a report was generated by AI is added, enough jurors begin to associate that with "likely more correct than not".

One problem here is that "smarter" is an ambiguous word. I have no problem believing the average LLM has more knowledge than my brain; if that's what "smarter" means, then I'm happy to believe I'm stupid. But I sure doubt an LLM's ability to deduce or infer things, or to understand its own doubts and lack of knowledge or understanding, better than a human like me.

  • Yeah my thought is that you wouldn't trust a brain surgeon who has read every paper on brain surgery ever written but who has never touched a scalpel.

    Similarly, the claim is that ~90% of communication is nonverbal, so I'm not sure I would trust a negotiator who has seen all of written human communication but never held a conversation.

Maybe it's just my circle, but anecdotally most of the non-CS folks I know have developed a strong anti-AI bias. In a very outspoken way.

If anything, I think they'd consider AI's involvement as a strike against the prosecution if they were on a jury.

  • Why do people in your circle not like AI? I have a similar experience with friends and family not liking AI, but usually it's due to water and energy concerns, not an issue with the model's reasoning.

    • If your circle has any artists in it, chances are they'll also have a very negative perception, although influenced heavily by the proliferation of AI-generated art.

      At least personally, I've seen basically three buckets of opinions from non-technical people on AI. There's a decent-sized group who loathe anything to do with it, due to the issues you've mentioned, the art issue I mentioned, or other specific things that add up, in their view, to a net harm to society. There's another decent-sized group who basically never think about it or go out of their way to use anything related to it. And then there's a small group who claim to be fully aware of the limitations and consider themselves quite rational, but who will then ask ChatGPT about literally anything and trust what it says without doing any additional research. It's the last group I'm personally most concerned about, because I've yet to find any effective way of getting them to recognize the cognitive dissonance (although sometimes I've at least made enough of an impression that they stop trying to make ChatGPT a participant in every single conversation I have with them).

    • Most people I know deeply hate AI. The more left their politics, the more they hate it. The issue has fully polarized among people outside tech. My intuition is that hating AI is now a permanent part of the leftist catechism.

      For some, AI users are now self-allied with bigots and racists. I avoid discussing AI with anyone whose politics I don’t know anymore. If you haven’t experienced this personally, it’s hard to describe.

> a lot of people seem to see LLMs as smarter than themselves

Well, in many cases they might be right...

  • As far as I can tell from poking people on HN about what "AGI" means, there might be a general belief that the median human is not intelligent. Given that the current batch of models apparently isn't AGI, I'm struggling to see a clean test of what AGI might be that a human can pass.

    • LLMs may appear to do well on certain programming tasks on which they are trained intensively, but they are incredibly weak. If you try to use an LLM to generate, for example, a story, you will find that it will make unimaginable mistakes. If you ask an LLM to analyze a conversation from the internet it will misrepresent the positions of the participants, often restating things so that they mean something different or making mistakes about who said what in a way that humans never do. The longer the exchange the more these problems are exacerbated.

      We are incredibly far from AGI.


    • > there might be a general belief that the median human is not intelligent

      This is to deconstruct the question.

      I don't think it's even wrong - a lot of people are doing things, making decisions, living life perfectly normally, successfully even, without applying intelligence in a personal way. Those with socially accredited 'intelligence' would be the worst offenders imo - they do not apply their intelligence personally but simply massage themselves and others towards consensus. Which is ultimately materially beneficial to them - so why not?

      For me 'intelligence' would be knowing why you are doing what you are doing without dismissing the question with reference to 'convention', 'consensus', someone/something else. Computers can only do an imitation of this sort of answer. People stand a chance of answering it.

    • Being an intelligent being is not the same as being considered intelligent relative to the rest of your species. I think we're just looking to create an intelligence - meaning something with the attributes that make a being intelligent, which are mostly the ability to reason and learn. I think the being might take over from there, no?

      With humans, the speed and ease with which we learn and reason are capped. I think a very dumb intelligence will not stay dumb for very long, because every resource will be spent on making it smarter.


  • > ChatGPT (o3): Scored 136 on the Mensa Norway test in April 2025

    So yes, most people are right in that assumption, at least by the metric of how we generally measure intelligence.

    • Does an LLM scoring well on the Mensa test translate to it doing excellent and factual police reporting? That's probably not true of humans who do well on the Mensa test, so why would it be true of an LLM?

      We should probably verify that rigorously, for a role that is itself about rigorous verification beyond reasonable doubt.

      I can immediately, and reasonably, doubt the output of an LLM, pending verification.

    • Yeah, I certainly associate LLMs with high intelligence when they provide fake links to fake information. I think, man, this thing is SMART.

Reading how AI is being approached in China, the focus is more on achieving day-to-day utility without eviscerating youth employment.

In contrast, the SV focus of AI has been about skynet / singularity, with a hype cycle to match.

This is supported by the lack of clarity on actual benefits, and the absence of clear data on GenAI use. Mostly I see it as great for prototyping - going from 0 to 1 - and for use cases where the operator is highly trained and capable of verifying the output.

Outside of that, you seem to be in the land of voodoo, dealing with something that eerily mimics human speech but with no reliable way of finding out whether it's just BS-ing you.

> a lot of people seem to see LLMs as smarter than themselves

I think the anthropomorphizing part is what messes with people. Is the autocomplete in my IDE smarter than I am? What about the search box on Google? What about a hammer or a drill?

Yet I will admit that while I often hear people complain that AI-written code is worse than what developers produce, that just doesn't match my own experience. With enough guidance and context (say, 95% of tokens in and 5% out, across multiple models working on the same project to occasionally validate and improve or fix the output, alongside adequate tooling), it's frankly better than what a lot of the people I know could, or frankly do, produce in practice.

That's a lot of conditions, but I think it's the same with the chat format - there's a difference between people accepting unvalidated drivel as fact and someone using web search, parsing documents, bringing in additional information found as a consequence of the conversation, pulling in external data, and making use of an LLM's ability to churn through a lot of it, sometimes better than human reading comprehension would.

AI is smarter than everyone already. Seriously, the breadth of knowledge the AI possesses has no human counterpart.

  • Just this weekend it (Gemini) produced two detailed sets of instructions on how to connect different devices over Bluetooth, including a video (which I didn't watch), while the devices did not support making the connection in that direction. No reasonable human reading the manuals involved would think those solutions feasible. Not impressed, again.

  • It's pretty similar to looking something up with a search engine, mashing together some top results + hallucinating a bit, isn't it? The psychological effects of the chat-like interface + the lower friction of posting in said chat again vs reading 6 tabs and redoing your search, seems to be the big killer feature. The main "new" info is often incorrect info.

    If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations. (I'm guessing someone is gonna compare this to the old Dropbox post, but whatever.)

    It has no human counterpart in the same sense that humans still go to the library (or a search engine) when they don't know something, and we don't have the contents of all the books (or articles/websites) stored in our head.
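A minimal sketch of the workflow described above, assuming you already have the result URLs from whatever search frontend you use (the names `page_text` and `dump_pages` are hypothetical, not any real tool's API):

```python
# Hypothetical helper: fetch each search-result URL and dump its visible
# text into one file you can then open and search in vim/emacs.
# Getting the result URLs themselves is left to your search frontend.
import re
import urllib.request
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0:
            self.parts.append(data)

def page_text(html: str) -> str:
    """Strip tags and collapse whitespace, returning plain visible text."""
    parser = _TextExtractor()
    parser.feed(html)
    return re.sub(r"\s+", " ", " ".join(parser.parts)).strip()

def dump_pages(urls, out_path):
    """Fetch each URL and append its plain text to a single file."""
    with open(out_path, "w", encoding="utf-8") as out:
        for url in urls:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
            out.write(f"=== {url} ===\n{page_text(html)}\n\n")
```

Opening the resulting file in an editor gives incremental search across all the pages at once; it won't hallucinate, but it also won't summarize for you.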

    • > If you could get the full page text of every url on the first page of ddg results and dump it into vim/emacs where you can move/search around quickly, that would probably be similarly as good, and without the hallucinations.

      Curiously, literally nobody on earth uses this workflow.

      People must be in complete denial to pretend that LLM (re)search engines can’t be used to trivially save hours or days of work. The accuracy isn’t perfect, but entirely sufficient for very many use cases, and will arguably continue to improve in the near future.


  • > the breadth of knowledge

    knowledge != intelligence

    If knowledge == intelligence then Google and Wikipedia are "smarter" than you and the AGI problem has been solved for several decades.

  • AI has more knowledge than everyone already; I wouldn't say it's smarter, though. It's like wisdom vs. intelligence in D&D (and/or life): wisdom is knowing things, intelligence is how quickly you can learn and create new things.

    • Knowledge I see as equivalent to a big library. It contains mostly correct information in the context of each book (which might be incorrect in general), and "AI" is very good at taking everything out of context, smashing a probability distribution over it, and picking an answer that humans will accept. I.e., it does not contain knowledge - at best a vague pretense of it.

    • AI has zero knowledge, as to know something is to have done it, or seen it first hand. AI has access to a great deal of data, much of it acquired through criminal action, but no way to evaluate that information other than cross-checking for citations and similar occurrences. Even for a human, inferring things is difficult and uncertain, and so we regularly see AI fall off the cliff into incoherent word salad. We are heading straight at an idiocracy writ large that is trying to hide its racio-religious insanity behind algorithms. Sometimes it's hard to tell, but it seems that a hairdresser has just been put in charge of the US passport office, which is highly suggestive of a new top-level program to issue US citizenship on demand, while everybody else will be subject to the "impartiality" of privately owned and operated AI policing.

  • It's like saying Google Search is smarter than everyone because the amount of information it indexes has no human counterpart. Such a silly take...

  • Man, what are we supposed to do with people who think the above?

    • >ChatGPT (o3): Scored 136 on the Mensa Norway IQ test in April 2025

      If you don't want to believe it, you need to move the goalposts: create a test for intelligence that we can pass better than AI. Since AI is also better at creating tests than us, maybe we could ask an AI to do it, hang on...

      >Is there a test that in some way measures intelligence, but that humans generally test better than AI?

      Answer: Thinking... Something went wrong and an AI response wasn't generated.

      Edit: I managed to get one to answer me: the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI). Created by AI researcher François Chollet, this test consists of visual puzzles that require inferring a rule from a few examples and applying it to a new situation.

      So we do have a test specifically designed for us to pass and AI to fail, which we can currently pass better than AI... hurrah, we're smarter!


    • I'd do the same thing I'd do with anyone who has a different opinion than me: try my best to have an honest and open discussion to understand their point of view and get to the heart of why they believe it, without forcefully tearing apart their beliefs. A core part of that process is avoiding saying anything that could make them feel shame for believing something I don't, even if I truly believe they are wrong, and just doing what I can to earnestly hear them out. The optional step afterwards, if they seem open to it, is to express my own beliefs in a way that's palatable and easily understood - basically explain it in a language they understand, in a way we can think about and discuss together, not taking offense at any attempts to question or poke holes in my beliefs, because that, imo, is the discovery process for trying something new.

      Online is a little trickier, because you don't know if they're a dog. Well, nowadays it's even harder, because they could also not have a fully developed frontal lobe, or worse, they could be a bot, a troll, or both.


    • I don't know, it's kinda terrifying how this line of thinking is spreading even on HN. AI as we have it now is just a turbocharged autocomplete with really good information access. It's not smart, or dumb, or anything "human".


    • Just brace for the societal correction.

      There's a lot of things going on in the western world, both financial and social in nature. It's not good in the sense of being pleasant/contributing to growth and betterment, but it's a correction nonetheless.

      That's my take on it anyway. Hedge bets. Dive under the wave. Survive the next few years.