Comment by virgildotcodes

6 days ago

I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

It's this pervasive belief that underlies so much discussion around what it means to be intelligent. The null hypothesis goes out the window.

People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans.

If they do, they apply it in only the most restrictive way imaginable, some two-dimensional caricature of reality, rather than considering all the ways that humans try and fail in all things throughout their lifetimes in the process of learning and discovery.

There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical.

The ability to learn and infer without absorbing millions of books and all the text on the internet really does make us special. And at only 20 watts!

  • Last I checked, humans didn't pop into existence doing that. It happened after billions of years of brute-force, trial-and-error evolution. So well done for falling into the exact trap the OP cautions against. Intelligence from scratch requires a mind-boggling amount of resources, and humans were no different.

    • To be fair, it is still pretty remarkable what the human brain does, especially in early years - there is no text embedded in the brain, just a crazily efficient mechanism to learn hierarchical systems. As far as I know, AI intelligence cannot do anything similar to this - it generally relies on giga-scaling, or finetuning tasks similar to those it already knows. Regardless of how this arose, or if it's relevant to AGI, this is still a uniqueness of sorts.

    • Do you think evolutionary pressures are the best explanation for why humans were able to posit the Poincaré conjecture and solve it? While our mental architecture evolved over a very long time, we still learn from minuscule amounts of data compared to LLMs.

    • How is that relevant? The fair comparison is the human brain at the point of birth (or some time before that) versus an LLM doing inference. The training part is irrelevant, the same way the human brain's evolution is.

  • We have a tremendous amount of raw information flowing through our brains 24/7 from before we are born: from the external world through all our senses, and from within our minds as they attempt to make sense of that information, make predictions, reason about our existence, hallucinate alternative realities, and so on.

    If you were somehow able to capture, in full detail, all the information you've had access to by the age of, say, 25, it would likely dwarf the amount of information in millions of books by several orders of magnitude.

    When you are 25 years old and are presented with a strange-looking ball and told to throw it into a strange-looking basket for the first time, you are relying on an unfathomable amount of information turned into knowledge, and on countless prior experiments that you've accumulated and exercised to that point, about the way your body and the world work.

    • Humans are "multi-modal". Sure, we get plenty of non-textual information, but LLMs were trained on basically every human-written word ever. They see many orders of magnitude more language than any human ever has. And yet humans become fluent after only about three years.
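
      To make the gap concrete with rough numbers (all of them my own assumptions, and debatable): suppose a child hears on the order of 15,000 words a day, while a modern LLM's training corpus is on the order of 10^13 tokens:

```python
# Crude comparison of language exposure; both figures are assumptions,
# not measurements from the comment above.
child_words = 15_000 * 365 * 3   # ~16 million words heard by age three
llm_tokens = 1e13                # assumed training-corpus scale

ratio = llm_tokens / child_words
print(f"roughly {ratio:.0e}x more language")  # five to six orders of magnitude
```

      Under these assumptions the "many orders of magnitude" claim comes out to roughly 10^5-10^6.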

  • 20 watts ignores the startup cost: tens of millions of calories, hundreds of thousands of gallons of water, and substantial resources from at least one other human for several years.
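
    A quick back-of-envelope sanity check on that startup cost. The numbers below are my own round-number assumptions (about 2,000 kcal/day sustained for 25 years), not figures from the comment:

```python
# Rough check of the "tens of millions of calories" claim.
# Both inputs are assumed round numbers, not measured values.
KCAL_PER_DAY = 2_000   # assumed average daily intake
YEARS = 25             # assumed "startup" period before independence

total_kcal = KCAL_PER_DAY * 365 * YEARS
print(f"{total_kcal:,} kcal")  # 18,250,000 kcal
```

    So the claim holds up: roughly 18 million kcal, squarely in the "tens of millions" range.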

  • Just an interesting thought experiment: if you took all the sensory information that a child experiences through their senses (sight, hearing, smell, touch, taste) between, say, birth and age five, how many books' worth of data would that be? I asked Claude, and its estimate was about 200 million books. Maybe that number is off by an order of magnitude in either direction... but then again, Claude is only three years old, not five.
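
    For what it's worth, that estimate can be reproduced with a crude calculation. Every constant below is an assumption of mine (a commonly cited ~10 Mbit/s ballpark for raw visual throughput, ~12 waking hours a day, ~1 MB of text per book), so treat it as an order-of-magnitude sketch only:

```python
# Order-of-magnitude sketch of "how many books is five years of
# sensory input?" All constants are rough assumptions.
bytes_per_sec = 10e6 / 8              # assumed ~10 Mbit/s of raw sensory data
waking_seconds = 5 * 365 * 12 * 3600  # five years at ~12 waking hours/day
bytes_per_book = 1e6                  # assumed ~1 MB of text per book

books = bytes_per_sec * waking_seconds / bytes_per_book
print(f"~{books:,.0f} book-equivalents")  # on the order of 10^8
```

    That lands around 10^8 book-equivalents, within an order of magnitude of the 200 million figure.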

  • To be fair, the knowledge embedded in an LLM is also, at this point, a couple of orders of magnitude (at least) larger than what the average human being can retain. So it's not like all those books and all that text on the internet are used just to bring them to our level; they go way beyond it.

  • Now multiply that by 7 billion to distill the one human who will solve a frontier math problem.

  • Most people have absorbed way too few books to be able to infer properly. Hell, most people are confused by TV remotes.

It's only because humans came up with a problem, worked with the AI, and verified the result that this achievement means anything at all. An AI "checking its own work" is practically irrelevant when they all still go back and forth on whether you need the car at the carwash to wash the car. Undoubtedly people have been feeding this set of problems to AIs for months or years and have gotten back either incorrect results or results they didn't understand; either way, human confirmation is required. AI hasn't posed any novel problems, other than the multitude of social problems described elsewhere. AI doesn't pursue its own goals and wouldn't know whether they've "actually been achieved".

This is to say nothing of the cost of this small but remarkable advance. Trillions of dollars in training and inference, and so far we have a couple of minor (trivial?) math solutions. I'm sure if someone had bothered funding a few PhDs for a year, we could have found this without AI.

  • >It's only because humans came up with a problem, worked with the AI and verified the result that this achievement means anything at all.

    Replace AI with human here and that's... just how collaborative research works lol.

  • The only things moving faster than AI are the goalposts in conversations like this. Now we're at "sure, AI can solve novel problems, but it can't come up with the problems themselves on its own!"

    I'm curious to see what the next goalpost position is.

    • > I'm curious to see what the next goalpost position is.

      I am as well. That's the point. AI can do some things well and other things better than humans, but so can a garden hose and all technology. Is AI just a tool, or is it the future of all work? By setting goalposts, we can see whether or not it is living up to the hype that we're collectively spending trillions on.

      The garden hose manufacturers aren't claiming that they're going to replace all human workers, so we don't set those kinds of goalposts to measure whether it's doing that.

  • Funding a few PhDs for a year costs orders of magnitude more than it cost to solve this problem in inference. Also, this has been an active research area for some time. Or I guess the people working on it are just not as good as a random bunch of students? It's amazing the lengths people will go to maintain their worldview, even if it means belittling hardworking people.

    I take it you're not a mathematician. This is an achievement, regardless of whether you like LLMs or not, so let's not belittle the people working on these kinds of problems please.

    • >It's amazing the lengths that people go to maintain their worldview, even if it means belittling hardworking people.

      This is one of the most baffling and ironic aspects of these discussions. Human exceptionalism is what drives these arguments, but the machines are becoming so good that you can no longer make them without putting down even top-percentile humans in the process. The same thing is happening all over this thread (https://news.ycombinator.com/item?id=47006594). And it's like they don't even realize it.

    • > Funding a few PhDs for a year costs orders of magnitude more than it did to solve this problem in inference costs.

      I don't think PhD students sit around solving one problem for a year. Also, PhD students are way cheaper.

    • Inference costs are heavily subsidised. My point was that we've spent trillions collectively on AI, and so far we have a few new proofs. It's been an active research area, but the problem statement estimates that only 5-10 people are even aware it is a problem. I wrote "math PhDs", not "random students", but regardless, I don't know how you got from my statement that people could have discovered this without AI to "belittling the people working on this". You seem like a stupid person with an out-of-control chatbot who can't comprehend basic arguments.

> I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

Because, empirically, we have numerous unique and differentiating qualities, obviously. Plenty of work goes into understanding this; we have young but rigorous fields in neuroscience and cognitive science.

Unless you mean "fundamentally unique" in some way that would persist - like "nothing could ever do what humans do".

> People constantly make comments like "well it's just trying a bunch of stuff until something works" and it seems that they do not pause for a moment to consider whether or not that also applies to humans.

I frankly doubt it applies to either system.

I'm a functionalist, so I obviously believe that everything a human brain does is physical and could be replicated using some other material that can exhibit the necessary functions. But that does not mean I have to think that the appearance of intelligence always is intelligence, or that an LLM/agent is doing what humans do.

  • >But that does not mean I have to think that the appearance of intelligence always is intelligence, or that an LLM/agent is doing what humans do.

    You can think whatever you want, but an untestable distinction is an imaginary one.

    • First of all, that's not true. Not every position has to be empirically justified. I can reason about a position in all sorts of ways without testing. Here's an obvious example that requires no test at all:

      1. Functional properties seem to arise from structural properties

      2. Brains and LLMs have radically different structural properties

      3. Two constructs with radically, fundamentally different structural properties are less likely to have identical functional properties

      Therefore, my confidence in the belief that brains and LLMs should have identical functional properties is lowered by some amount, perhaps even just ever so slightly.

      Not something I feel like fleshing out or defending, just an example of how I could reason about a position without testing it.

      Second, I never said it wasn't testable.

  • No, but it does mean you should acknowledge that we don't understand what intelligence is, and that, for all we know, LLMs are actually intelligent and humans merely have the appearance of intelligence.

    • You're just defining intelligence as "undefined", which okay, now anything is anything. What is the point of that?

      Indeed, there's quite a lot of work that's been done on what these terms mean. The fields of neuroscience and cognitive science have contributed a lot to the area, and obviously there are major areas of philosophy that discuss how we should frame the conversation or seek to answer questions.

      We have more than enough, trivially, to say that human intelligence is distinct, so long as we take on basic assertions like "intelligence is related to brain structures" since we know a lot about brain structures.

Re: "I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique."

Perhaps this might better help you understand why this assumption still holds: https://en.wikipedia.org/wiki/Orchestrated_objective_reducti...

  • It doesn't. I completely reject that theory, and it's nice to see that Wikipedia notes it is "controversial". There are extremely good reasons to reject it. For one thing, any quantum effects are going to be tiny/trivial, because the brain is too large, hot, and wet to sustain larger ones, so you have to somehow leap from "tiny effects that last for no time at all" to "this matters fundamentally in some massive way".

    It likely requires rejection of functionalism, or the acceptance that quantum states are required for certain functions. Both of those are heavy commitments with the latter implying that there are either functions that require structures that can't be instantiated without quantum effects or functions that can't be emulated without quantum effects, both of which seem extremely unlikely to me.

    Probably the far more important reason: it doesn't solve any problem. It's just "quantum woo, therefore libertarian free will" most of the time.

    It's mostly garbage, maybe a tiny tiny bit of interesting stuff in there.

    It also would do nothing to indicate that human intelligence is unique.

> I don't know why I am still perpetually shocked that the default assumption is that humans are somehow unique.

Uh, because up until and including now, we are...?

  • Every living thing on Earth is unique. Every rock is unique in virtually infinite ways from the next otherwise identical rock.

    There are also a tremendous number of similarities between all living things and between rocks (and between rocks and living things).

    Most ways in which things are unique are arguably uninteresting.

    The default mode, the null hypothesis, should be to assume that human intelligence isn't interestingly unique unless it can be proven otherwise.

    In these repeated discussions around AI, there is criticism over the way an AI solves a problem, without any actual critical thought about the way humans solve problems.

    The latter is left to the assumption that "of course humans do X differently", and if you press, you invariably end up at something couched in vague mysticism about our inner workings.

    Humans apparently create something from nothing, without the recombination of any prior knowledge or outside information, and they get it right on the first try. Through what, divine inspiration from the God who made us and only us in His image?

    • I doubt you can even define intelligence sufficiently to argue this point. Since that's an ongoing debate without a resolution thus far.

      But you claimed that humans aren't unique. I think it's pretty obvious we are on many dimensions including what you might classify as "intelligence". You don't even necessarily have to believe in a "soul" or something like that, although many people do. The capabilities of a human far surpass every single AI to date, and much more efficiently as well. That we are able to brute-force a simulacrum of intelligence in a few narrow domains is incredible, but we should not denigrate humans when celebrating this.

      > There's still this seeming belief in magic and human exceptionalism, deeply held, even in communities that otherwise tend to revolve around the sciences and the empirical.

      Do you ever wonder why that is? I often wonder why tech has so many reductionist, materialist, and quite frankly anti-human, thinkers.

    • Humans are obviously unique in an interesting way. People only "move the goalpost" because it's not an interesting question that humans can do some great stuff, the interesting question is where the boundary is. (Whether against animals or AI).

      Some example goals which make humans trivially superior (in terms of intelligence): the invention of nuclear bombs and nuclear power plants, the theory of relativity, etc.
