
Comment by bbarnett

2 months ago

AGI may use the same hardware, or same compute concepts.

But LLMs (like low/high pressure wing flight) will never result in AGI (you won't get to the moon with a wing).

You're making my point.

I thought your point about aerospace was terrible. And since you're insisting I follow you further into the analogy, I think it's terrible here.

LLMs may be a key building block for early AGI. The jury is still out. Will an LLM alone do it? No. You can't build a space vehicle from fins and fairings and control systems alone.

O1 can reach pretty far beyond past LLM capabilities by adding infrastructure for metacognition and goal seeking. Is O1 the pinnacle, or can we go further?

In either case, planes and rocket-planes did a lot to get us to space-- they weren't an unrelated evolutionary dead end.

> Yet powered flight has nothing to do with space travel, no connection at all.

Fully disagree.

  • You're missing the point, I think.

    The relationships you are describing are why airflight/spaceflight and AI/AGI are a good comparison.

    We will never get AGI from an LLM. We will never fly to the moon via winged flight. These are examples of how one method of doing a thing will never succeed at doing another.

    Citing all the similarities between airflight and spaceflight makes my point! One may as well discuss how video games are on a computer platform, and LLMs are on a computer platform, and say "It's the same!", as say airflight and spaceflight are the same.

    Note how I was very clear, and very specific, and referred to "winged flight" and "low/high pressure", which will never, ever, ever get one even to space. Nor allow anyone to navigate in space. There is no "lift" in space.

    Unless you can describe to me how a fixed wing with low/high pressure is used to get to the moon, all the other similarities are inconsequential.

    Good grief, people are blathering on about metallurgy. That's not a connection; it's just modern tech that has nothing to do with the method of flying (low/high pressure around the wing) and is used in every industry.

    I love how incapable everyone in this thread has been of concept focus, incapable of separating the specific from the generic. It's why people think, generically, that LLMs will result in AGI, too. But they won't. Ever. No amount of compute will generate AGI via LLM methods.

    LLMs don't think, they don't reason, they don't infer, they aren't creative, they come up with nothing new, it's easiest to just say "they don't".

    One key aspect here is that knowledge has nothing to do with intelligence. A cat is more intelligent than any LLM that will ever exist. A mouse. Correlative fact regurgitation is not what intelligence is, any more than a book on a shelf is intelligence, or the results of Yahoo search 10 years ago were.

    The most amusing is when people mistake shuffled up data output from an LLM as "signs of thought".

Your point is good enough on spaceflight, despite some quibbling from commenters.

But I haven't seen where you make a compelling argument why it's the same thing in AI/AGI.

In your old analogy, we're all still the guys on the ground saying it'll work. You're saying it won't. But nobody has "been to space" yet. You have no idea if LLMs will take us to AGI.

I personally think they'll be the engine on the spaceship.

  • This is a fair response, thank you.

    From another post:

    > No amount of compute will generate AGI via LLM methods.

    > LLMs don't think, they don't reason, they don't infer, they aren't creative, they come up with nothing new, it's easiest to just say "they don't".

    > One key aspect here is that knowledge has nothing to do with intelligence. A cat is more intelligent than any LLM that will ever exist. A mouse. Correlative fact regurgitation is not what intelligence is, any more than a book on a shelf is intelligence, or the results of Yahoo search 10 years ago were.

    > The most amusing is when people mistake shuffled up data output from an LLM as "signs of thought".

    From where I sit, I don't even see LLMs as being some sort of memory store for AGIs. The knowledge isn't reliable enough. An AGI would need to ingress and then store knowledge in its own mind, not use an LLM as a reference.

    Part of what makes intelligence intelligent is the ability to see information and learn on the spot, and further, to learn via its own senses.

    Let's look at bats. A bat is very close to humans, genetically. Yet if somehow we took "bat memories" and were able to implant them in humans, how on earth would that help? How would you make bat memories of using sound for navigation, to "see", work? Of flying? Of social structure?

    For example, we literally don't have the brain matter to see spatially the same way bats do. So when we accessed those memories, they would be so foreign that their usefulness would be greatly reduced. They'd be confusing, unhelpful.

    Think of it. Ingress of data and information is sensorially derived. Our mental image of the world depends upon this data. Our core being is built upon this foundation. An AGI using an LLM as "memories" would be experiencing something just as foreign.

    So even if LLMs were used to allow an AGI to query things, they wouldn't be used as "memory". And the type of memory store that LLMs exhibit is most certainly not how intelligence as we know it stores memory.

    We base our knowledge upon directly observed and verified fact, but further upon the senses we have. And all information derived from those senses is actually filtered, and processed by specialized parts of our brains, before we even "experience" it.

    Our knowledge is so keyed in and tailored directly to our senses, and the processing of that data, that there is no way to separate the two. Our skill, experience, and capabilities are "whole body".

    An LLM is none of this.

    The only true way to create an AGI via LLMs would be to simulate a brain entirely, and then start scanning human brains during specific learning events. Use that data to LLM your way into an averaged and probabilistic mesh, and then use that output to at least provide full sense memory input to an AGI.

    Even so, I suspect that may be best used to create a reliable substrate. Use that method to simulate and validate and modify that substrate so it is capable of using such data, thereby verifying that it stands solid as a model for an AGI's mind.

    Then wipe and allow learning to begin entirely separately.

    Yet to do even this, we'd need to ensure that sensor input enables, at least to a degree, the same sort of sense input. I think Neuralink might be best placed to enable this, for as it works at creating an interface for, say, sight, and other senses... it could then use this same series of mapped inputs for a simulated human brain.

    This of course works best with a physical form to also taste the environment around it, and who is also working on an actual android for day-to-day use?

    You might say this focuses too much on creating a human style AGI, but frankly it's the only thing we can try to make and work into creating a true AGI. We have no other real world examples of intelligence to use, and every brain on the planet is part of the same evolutionary tree.

    So best to work with something we know, something we're getting more and more adept at understanding, and with brain implants of the calibre and quality that Neuralink is devising, something we can at least understand in far more depth than ever before.

    • > The first plane ever flies, and people think "we can fly to the moon soon!". Yet powered flight has nothing to do with space travel, no connection at all.

      You eventually said "winged flight" much later-- trying to make your point a little more defensible. That's why I started explaining to you the very big connections between powered flight and space travel ;)

      I pretty much completely disagree with your wall of text, and it's not a very well reasoned defense of your prior handwaving. I'm going to move on now.
