Comment by pocketarc

2 days ago

When people imagined AI/AGI, they imagined something that can reason like we can, except at the speed of a computer, which we always envisioned would lead to the singularity. In a short period of time, AI would be so far ahead of us and our existing ideas, that the world would become unrecognizable.

That's not what's happening here, and it's worth remembering: A caveman from 200K years ago would have been just as intelligent as any of us here today, despite not having language or technology, or any knowledge.

In Carolyn Porco's words: "These beings, with soaring imagination, eventually flung themselves and their machines into interplanetary space."

When you think of it that way, it should be obvious that LLMs are not AGI. And that's OK! They're a remarkable piece of technology anyway! It turns out that LLMs are actually good enough for a lot of use cases that would otherwise have required human intelligence.

And I echo ArekDymalski's sentiment that it's good to have benchmarks to structure the discussions around the "intelligence level" of LLMs. That _is_ useful, and the more progress we make, the better. But we're not on the way to AGI.

The amount of things LLMs can do is insane.

It's interesting to me how much effort the AI companies (and bloggers) put into claiming they can do things they can't, when there's almost an unlimited list of things they actually can do.

  • This reminds me of "Devin". You know, the first "AI software engineer", which had the hype of the day but turned into a huge flop.

    They had ridiculous demos of Devin e.g. working as a freelancer and supposedly earning money from it.

  • And many of them are so unexpected, given the unusual nature of their intelligence emerging from language prediction. They excel wherever you need to digest or produce massive amounts of text. They can synthesize some pretty impressive solutions from pre-existing stuff. Hell, I use them like a thesaurus to suss out words or phrases that are new or on the tip of my tongue. They have a great hold on the general corpus of information, much better than any search engine (even before the internet was cluttered with their output). It's much easier to find concrete words for what you're looking for through an indirect search via an LLM. The fact that, say, a 32GB model seemingly holds approximate knowledge of everything implies some unexplored relationship between intelligence and compression.

    What can't they do? Pretty much anything reliably or unsupervised. But then again, who can?

    They also tend to fail creatively, given that they synthesize existing ideas. And with things involving physical intuition. And tasks involving meta-knowledge of their tokens (like asking them how long a given word is). And they tend to yap too much for my liking (perhaps this could be fixed with an additional thinking stage to increase terseness before reporting to the user).

    • My current way of thinking about LLMs is "an echo of human intelligence embedded in language".

      It's kind of like in those sci-fi or fantasy stories where someone dies and what's left behind as a ghost in the ether or the machine isn't actually them; it's just an echo, a shallow, incomplete copy.

    • > some unexplored relationship between intelligence and compression.

      I don't think it's unexplored at all; this is basically what information theory is all about. At some level, it becomes incompressible.
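
      The link the thread is gesturing at can be made concrete with Shannon's source-coding idea: a model that predicts text better needs fewer bits to encode it. A minimal toy sketch (a character-frequency model standing in for an LLM; the string and numbers are just illustrative):

      ```python
      import math
      from collections import Counter

      def bits_needed(text, probs):
          # Shannon code length: -log2 p(c), summed over characters.
          return sum(-math.log2(probs[c]) for c in text)

      text = "abracadabra"
      alphabet = set(text)

      # Baseline model: every symbol equally likely.
      uniform = {c: 1 / len(alphabet) for c in alphabet}

      # "Smarter" model: character frequencies learned from the text.
      counts = Counter(text)
      learned = {c: counts[c] / len(text) for c in counts}

      print(round(bits_needed(text, uniform), 1))  # 25.5 bits
      print(round(bits_needed(text, learned), 1))  # 22.4 bits: better prediction, better compression
      ```

      The same relationship scales up: an LLM's cross-entropy on a text is exactly the number of bits an arithmetic coder driven by that LLM would need to store it, which is why "a 32GB model knows everything" and "compression" are two views of one phenomenon.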

  • Only because they have compressed and encoded the entire sum of human knowledge at their disposal. There are models for everything in there, but they can only do what has been done before.

    What's more amazing to me is that the average human, able to hold only a relatively small body of knowledge in their mind, can generate things that are completely novel.

    • People assume training on past data means no novelty, but novelty comes from recombination. No one has written your exact function, with your exact inputs, constraints, and edge cases, yet an LLM can generate a working version from a prompt. That’s new output. The real limitation isn’t novelty, it’s grounding.

    • I hear this constantly. Can you produce something novel, right here, demonstrably, that an LLM couldn't have produced? Nobody ever can, but it's sure easy to claim.

  • Because most of these things are not multi-trillion-dollar ideas. "We found a way to make illustrators, copyeditors, and paralegals, and several dozen other professions, somewhat obsolete" in no way justifies the valuations of OpenAI or Nvidia.

    • >Because most of these things are not multi-trillion-dollar ideas.

      That's right, but there's more. When you think about the cost of compute and power for these LLM companies, they have no choice. It MUST be a multi-trillion-dollar idea or it's completely uninvestable. That's the only way they can sucker more and more money into this scheme.

    • I don't know about OpenAI, but Nvidia's valuation seems more justifiable based on their actually known revenue and profit, and because it's publicly traded.

      Though if the bubble(?) bursts and Nvidia starts selling fewer units year-over-year, that could be problematic.

  • The hype has gotta keep going or the money will dry up. And hype can be quantified by velocity and acceleration, rather than distance. They need to keep the innovation accelerating, or the money stops. This is of course completely unreasonable, but also why the odd claims keep happening.

    • Why would the money dry up when we have companies willing to spend $1000/developer/month on AI tooling, when they would have balked at $5/user/month for some basic tooling 2-3 years ago?

  • for example?

  • I've been pushing Opus pretty hard on my personal projects. While repeatability is very hard to do, I'm seeing glimpses of Opus being well beyond human capabilities.

    I'm increasingly convinced that the core mechanism of AGI is already here. We just need to figure out how to tie it together.

This is a bit of an anti-evolutionary perspective. At some point in our past, we were something much less intelligent than we are now. Our intelligence didn't spring out of thin air. Whether or not AI can evolve is yet to be seen I think.

  • Sure, but then basically whatever it was, it was not "us". "Us" and our intelligence had to appear at some point. It's 100% not "anti-evolutionary" to say some years ago humans became as mentally capable as a baby born today. We just have to figure out how many years ago that was. It wasn't last decade. As far as I know most anthropologists agree it was around ~70k years ago (not 200k).

  • I could gather that you disagreed with GP, but I don't see a salient point in your response. You are ostensibly challenging GP on the idea that a Homo sapiens baby from 200,000 years ago would have been capable of modern mental feats if raised in the present day.

    > This is a bit of an anti-evolutionary perspective.

    Nice, seems like you have something meaningful to add.

    > At some point in our past, we were something much less intelligent than we are now.

    I agree with this, but "at some point in our past"? Is that the essence of this rebuttal?

    > Our intelligence didn't spring out of thin air.

    Again, I could not tell what this means, nor do I see the relevance.

    > Whether or not AI can evolve is yet to be seen I think.

    The OP is very pointedly talking about LLMs. Is that what you mean to reference here with "AI"?

    I implore you to contribute more meaningfully. Especially when leading with statements like "This is a bit of an anti-evolutionary perspective", you ought to elaborate on them. However, your username suggests maybe you are just trolling?

    • If you think you are equipped to discuss the topic of evolution of general intelligence in homo, and you haven't read about GWAS and EDU PGS, then at this point you are either a naive layman, or a convinced discourse commando.

      Because it is a really hard and hopeless endeavor to make an objective case that current human populations have similar PGS scores on key mental traits and diseases compared to 200k years ago.

How do you arrive at the statement that a caveman would have the same intelligence as a human today? Intelligence is surely not usually defined as cognitive potential at birth but as current capability. And the knowledge an average human has today through education surely factors into that.

  • Your attempt to commingle intelligence and knowledge is not needed to support your initial question. The original statement that a caveman 200K years ago would have the same intelligence as a modern human was blankly asserted without any supporting evidence, and so it is valid to simply question the claim. You do not need to give a counterclaim, as that is unnecessarily shifting the burden of proof.

  • Knowledge is a thing you can use intelligence on, but not a component of intelligence itself.

      The knowledge that everything is made out of atoms/molecules, however, makes it much easier to reason about your environment. And based on this knowledge you also learn algorithms, how to solve problems, etc. I don't think it's possible to completely separate knowledge from intelligence.

  • I think the core idea is that if a baby with "caveman genetics" so to speak were to be born today, they could achieve similar intellectual results to the (average?) rest of the population. At least that's how I interpret it.

  • It's even sillier than that. You can look at populations in the modern world and see there are huge differences in intelligence due to various factors such as cousin marriage and nutrition.

> A caveman from 200K years ago would have been just as intelligent as any of us here today, despite not having language or technology, or any knowledge.

Source? This does not sound possibly true to me (by any common way we might measure intelligence).

  • The phrase you’re looking for is “anatomically modern human”, which has been around for 200,000 years: https://en.wikipedia.org/wiki/Early_modern_human

    • I'm not disagreeing that humans 200,000 years ago were approximately anatomically equivalent to humans today; I'm disagreeing that they would be just as intelligent without today's language, technology, or knowledge. I don't think you can define or measure intelligence in a way that ignores those things.

> A caveman from 200K years ago would have been just as intelligent as any of us here today, despite not having language or technology, or any knowledge.

Doubt. If we teleported caveman babies right out of the womb to our times, I don't think they'd turn into high-IQ individuals. People knowledgeable on human history / human evolution might know the correct answer.

  • From what I understand, in terms of genetic changes to intellectual abilities, there's not much evidence to suggest we're so much smarter that your proposed teleported baby would be noticeably stupider - at best they'd be on the tail of the bell curve, well within a normal distribution. Maybe if we teleported ten thousand babies, their bell curve would be slightly behind ours. Take a look at "wild children" for the very few examples we can find of modern humans developed without culture. Seems like above everything, our culture, society, and thus education is what makes us smart. And our incredibly high calorie food, of course.

    • That is exactly what civilization is about - for new generations to start not from scratch, but from some baseline their parents achieved (accumulated knowledge and culture). This allows new generations to push forward instead of retreading the same path.

    • It's impossible to prove the counterfactual (I guess, as I imagine we don't have enough gene information that far back). But I'd imagine that the high-calorie food you can get starting with the advent of agriculture is exactly what could drive evolution in a direction that helps brains grow. We've had ~1000 generations since then; that should be enough for some change to happen. Our brains use up 20% of the body's energy. Do we know that this was already the case during the Stone Age?

  • It is known that human brain sizes 200k years ago were actually greater than today. That does not necessarily mean they were more intelligent, though: the parts of the brain that have shrunk were more likely related to things like fine motor skills and spatial orientation, which are no longer as important for most people today.

  • It's complicated. It depends.

    A human being has the potential for intelligence. For that to get realized, you need circumstances: you need culture, aka "societal" software, and the resources to suspend the grind of work in the formative years and allow for the speed-running of knowledge preloading before the brain gets stable.

    The parents then must support this endeavor, making sacrifices along the way.

    There are also a ton of chicken-and-egg catch-22s buried in this whole thing.

    If the society is not rich, then there's no school, just child labour. And if a society runs on child labour, it is pre-industrially ineffective and thus has no riches to support and redistribute.

    Also: is your society's culture root-hardened? Meaning, on a collapse of complexity in bad times, can it recover, powering through the usual "redistribute the nuts and bolts from the bakery" sentiments rampant in bad times? Can it stay organized and centralize funds for new endeavors? Organizing a sailing ship in a medieval society means that in every village one person starves to death. Can your society accomplish that without riots?

    Thus.

    • > A human being has the potential for intelligence.

      Were we "human" 200,000 years ago the way we are now?

      Was the required brain and vocal hardware present?

  • Can you articulate why you think so? This kind of response "I just don't agree" reads as zero useful information. At least to me.

    • Evolutionary brain development.

      We all come from monke; a monkey from 10 million years ago would definitely be unable to learn spoken language at even a basic level. Would it even have the anatomy to produce the required sounds? I don't think so.

      What about monke from 1 million years ago? 200 thousand years ago?

      ChatGPT says spoken language only emerged 50k-200k years ago, and that a caveman baby from 200k years ago could learn spoken language if brought up by modern parents.

      But I prefer human answers over AI slop.

>In a short period of time, AI would be so far ahead of us and our existing ideas, that the world would become unrecognizable.

>That's not what's happening here ...

On the contrary, it very much is.

I'd argue AGI has already been achieved via LLMs today, provided they have excellent external cognitive infrastructure supporting them.

However, the gap from AGI to ASI is perhaps longer than anticipated such that we're not seeing a hard takeoff immediately after arriving at the first.

Just, you know: potential mass unemployment on a scale never seen before. When you frame it that way, whether LLMs qualify as AGI is largely semantics.

That said, I really hope you're right and I'm wrong.

  • Ah yes, the $0.50/h support infrastructure from the places that cannot refuse the deal. "Frontier" LLMs currently cosplay a dunk with Google and late Alzheimer's. Surely, they speed up brute-forcing the correct answer a lot by trying more likely texts. And? This overfed Markov chain doesn't need supporting infrastructure: it IS supporting infrastructure, for the cognitive something that is not being worked on prominently, because all resources are needed to feed the Markov chain.

    The silence surrounding new LLM architectures is so loud that an abomination like "claw" gets prime airtime. Meanwhile, models keep being released. Maybe the next one will be the lucky draw. It was pure luck, finding out how well LLMs scale, in the first place. Why shouldn't the rest of the progress be luck-driven too?

    Kerbal AGI program...

    • Pretty much; it's just that these overfed Markov chains, when given a proper harness and agentic framework, are able to produce entire software projects in a fraction of the time it used to take.

      Kerbal AGI program hits the nail on the head.

> That _is_ useful, and the more progress we make, the better.

I would be happy to agree if we had solutions in hand for the societal problems that this will create.

> A caveman from 200K years ago would have been just as intelligent as any of us here today

In other words, intelligence offers zero evolutionary advantage?

  • 200k years just isn't much time for significant evolutionary changes considering the human population "reset" a couple times to very very small numbers.

      If you read the papers and analyze the historical DNA, you can make a case for significant PGS shifts in populations across a few centuries.

      People really haven't processed this fact and its implications just yet.

  • Our big brains are a recent mutation and haven't been fully field tested. They seem like more of a liability than anything, they've created more existential risks for us than they've put to rest.

  • It looks like quite the disadvantage, in fact. We're killing ourselves and a lot of other stuff in the process.

      Yes, but also antibiotics, vaccinations, child mortality down down down, life expectancy up up up. I wouldn't trade living today for living even 100 years prior, or 500-200k years ago for that matter.

      With everything wrong and sick with today's world, let's not take the achievements of our species for granted.

I posted my own comment but I agree with you. Our modern society likes to claim we are somehow "more intelligent" than our predecessors/ancestors. I couldn't disagree more. We have not changed in terms of intelligence for thousands of years. This is a matter that's beyond just engineering; it's also a matter of philosophy and perspective.

> caveman from 200K years ago would have been just as intelligent as any of us here today, despite not having language

There is evidence to the contrary. Not having language puts your mental faculties at a significant disadvantage, specifically left-brain atrophy. See the critical period hypothesis. Perhaps you mean lacking spoken language rather than having none at all?

https://linguistics.ucla.edu/people/curtiss/1974%20-%20The%2...

Humans, like all animals, have not stopped evolving. A random caveman from 200K years ago would have very different genetics from those of a typical HN reader, and even more so from the best of the HN readers.

Around 3,200 years ago there was a notable uptick in alleles associated with intelligence.