Comment by izzydata

6 months ago

Not only do I not think it's right around the corner, I'm not even convinced it's possible at all, or at the very least not possible using conventional computer hardware. I also don't think being able to regurgitate information in an understandable form is an adequate or useful measure of intelligence. If we ever crack artificial intelligence, it's quite possible its first form will be of very low intelligence by human standards, yet truly capable of learning on its own without extra help.

I think the only way that it’s actually impossible is if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence. Otherwise we’re just machines, after all. A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Maybe our first AGI is just a Petri dish brain with a half-decent Python API. Maybe it's more sand-based, though.

  • > A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

    Sort of. The main issue is the energy requirements. We could theoretically reproduce a human brain in software today; it's just that it would be a huge energy hog, run very slowly, and probably go insane quickly, like any person trapped in a sensory deprivation tank.

    The real key development for AI and AGI is down at the metal level of computers: the memristor.

    https://en.m.wikipedia.org/wiki/Memristor

    The synapse in a brain is essentially a memristive element, and a very taxing one on the neuron. The defining equation is memristance M = dφ/dq, the change in flux over the change in charge. Yes, a flux capacitor, sorta. It's the missing piece in fundamental electronics. (A rough numerical sketch follows at the end of this comment.)

    Making simple two-element memristors is somewhat possible these days, though I've not really been in the space recently. Please, if anyone knows where to buy them (a real one, not just a claimed-to-be one), let me know. I'm willing to pay good money.

    In terms of AI, a memristor would require a total redesign of how we architect computers (goodbye buses and physically separate memory, for one). But you'd get a huge energy and time savings. As in, you could run an LLM on a watch battery or a small solar cell, and let the environment train it to a degree.

    Hopefully AI will accelerate their discovery and facilitate their introduction into cheap chip processing and construction.
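
    Here is the promised sketch of that "resistance with memory" behavior, using the HP Labs linear ion-drift model (Strukov et al., 2008); all device parameter values are illustrative, not measured:

    ```python
    import math

    # HP Labs linear ion-drift memristor model: resistance depends on
    # the history of charge that has flowed through the device.
    R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped / fully undoped limits
    D = 10e-9                       # m: device thickness
    MU_V = 1e-14                    # m^2/(V*s): dopant mobility
    w = 0.5 * D                     # state variable: width of the doped region
    dt = 1e-6                       # s: integration time step

    for step in range(200_000):     # one full period of a 5 Hz drive (0.2 s)
        v = math.sin(2 * math.pi * 5 * step * dt)   # sinusoidal drive voltage
        m = R_ON * (w / D) + R_OFF * (1 - w / D)    # memristance M(w)
        i = v / m                                   # Ohm's law at this instant
        w += MU_V * (R_ON / D) * i * dt             # linear drift: dw/dt tracks i
        w = min(max(w, 0.0), D)                     # clamp to the physical range

    print(f"final memristance ~ {m:.0f} ohms")      # depends on drive history
    ```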

  • > and fundamentally immeasurable about humans that leads to our general intelligence

    Isn't AGI defined to mean "matches humans in virtually all fields"? I don't think there is a single human capable of this.

  • If by "something magical" you mean something we don't understand, that's trivially true. People like to give firm opinions, or make completely unsupported statements they feel should be taken seriously ("how do we know human intelligence doesn't work the same way as next-token prediction?"), about something nobody understands.

    • I mean something that’s fundamentally not understandable.

      “What we don’t yet understand” is just a horizon.

  • > if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence

    It’s called a soul for the believers.

  • A brain in a jar, with wires so that we can communicate with it, already exists. It's called the internet. My brain is communicating with you now through wires. Replacing my keyboard with implanted electrodes might speed up the connection, but it won't fundamentally change the structure or capabilities of the machine.

  • Our silicon machines exist in a countable state space: you can easily assign a unique natural number to any state of a given machine (see the sketch below). However, 'standard biological mechanisms' exist in an uncountable state space; you need real numbers to properly describe them. Cantor showed that the uncountable is infinitely more infinite (pardon the word tangle) than the countable. I posit that the 'special sauce' for sentience/intelligence/sapience exists beyond the countable, and so is unreachable with our silicon machines as currently envisaged.

    I call this the 'Cardinality Barrier'
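
    To make the countable half concrete, a toy Gödel-style numbering (the bytes here are just a stand-in for a real machine snapshot):

    ```python
    # Any snapshot of a digital machine is a finite byte string, and
    # every finite byte string maps to a unique natural number, so the
    # set of all machine states is countable.
    state = b"registers|ram|disk"             # stand-in for a full snapshot
    n = int.from_bytes(state, "big")          # unique natural number
    assert n.to_bytes((n.bit_length() + 7) // 8, "big") == state
    print(n)
    ```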

    • Cantor talks about countable and uncountable infinities, but both computer chips and human brains are finite spaces. The human brain has roughly 100 billion neurons; even if each of these had an edge to every other, and each edge could individually light up to signal a different state of mind, isn't that just on the order of 2^(100 billion choose 2) states? That's still finite, roughly as far away from infinity as 1 is. (Back-of-the-envelope arithmetic below.)
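
      The magnitude, under those same assumptions (all-to-all wiring, binary edges):

      ```python
      from math import comb, log10

      neurons = 100_000_000_000        # ~100 billion neurons (rough figure)
      edges = comb(neurons, 2)         # every neuron pair gets one edge
      digits = edges * log10(2)        # log10 of 2**edges
      # 2**edges is astronomically large, but still a finite number
      print(f"about 10^{digits:.3g} possible on/off configurations")
      ```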


    • That’s an interesting thought. It steps beyond my realm of confidence, but I’ll ask in ignorance: can a biological brain really have infinite state space if there’s a minimum divisible Planck length?

      Infinite and “finite but very very big” seem like a meaningful distinction here.

      I once wondered if digital intelligences might be possible but would require an entire planet's worth of precious metals to build and whole stars to power. That is: the "finite but very very big" case.

      But I think your idea is constrained to the case where we want a digital computer, is it not? Humans can make intelligent life by accident. Surely we could hypothetically construct our own biological computer (or borrow one…) and make it more amenable to a digital interface?


    • It sounds like you are making a distinction between digital (silicon computers) and analog (biological brains).

      As far as possible reasons that a computer can’t achieve AGI go, this seems like the best one (assuming computer means digital computer of course).

      But in a philosophical sense, a computer obeys the same laws of physics that a brain does, and its transistors are analog devices being used to create a digital architecture. So whatever makes your brain have uncountable states would also make a real digital computer have uncountable states. Of course we can claim that only the digital layer on top matters, but why?

    • Please describe in detail how biological mechanisms are uncountable.

      And then you need to show how the same logic cannot apply to non-biological systems.

    • > 'standard biological mechanisms' exist in an uncountable state space

      Everything in our universe is countable, which naturally includes biology. A bunch of physical laws are predicated on the universe being a countable substrate.


    • Physically speaking, we don’t know that the universe isn’t fundamentally discrete. But the more pertinent question is whether what the brain does couldn’t be approximated well enough with a finite state space. I’d argue that books, music, speech, video, and the like demonstrate that it could, since those don’t seem qualitatively much different from how other, analog inputs stimulate our intellect. Or otherwise you’d have to explain why an uncountable state space would be needed to deal with discrete finite inputs.

    • Can you explain why you think the state space of the brain is not finite? (Not even taking into account countability of infinities)

Then there's the other side of the issue: if your tool is smarter than you, how do you handle it?

People joke online that some colleagues use ChatGPT to answer questions their teammates asked with ChatGPT; nobody knows what's going on anymore.

> I don't think being able to regurgitate information in an understandable form is an adequate or useful measure of intelligence.

Measuring intelligence is hard and requires a really good definition of it. LLMs have in some ways made that definition easier, because we can now ask a concrete question about machines that are very good at some things: "Why are LLMs not intelligent?" Given their capabilities and deficiencies, answering what current "AI" technology lacks will make us better able to define intelligence. This assumes LLMs are the state-of-the-art million monkeys, and that intelligence lies on a different path than further optimizing them.

https://en.wikipedia.org/wiki/Infinite_monkey_theorem

I think the issue is going to turn out to be that intelligence doesn't scale very well. The computational power needed to model a system has got to be in some way exponential in how complex or chaotic the system is, meaning that the effectiveness of intelligence is intrinsically constrained to simple and orderly systems. It's fairly telling that the most effective way to design robust technology is to eliminate as many factors of variation as possible. That might be the only modality where intelligence actually works well, super or not.
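
A small numerical illustration of that constraint, using the logistic map at r = 4, a standard chaotic system (the starting values here are arbitrary): a tiny measurement error grows roughly exponentially, so modeling N steps ahead demands roughly N extra digits of precision.

```python
# Chaos makes modeling cost blow up: a 1e-6 perturbation in the
# logistic map x -> 4x(1-x) reaches order 1 within ~30 steps.
x, y = 0.400000, 0.400001
for step in range(1, 31):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: divergence = {abs(x - y):.3e}")
```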

  • What does "scale well" mean here? LLMs right now aren't intelligent, so we're not scaling from that point on.

    If we had a very inefficient, power-hungry machine that was 1:1 as intelligent as a human being, and we could scale it, however inefficiently, to 100:1, it might still be worth it.

I think you are very right to be skeptical. It's refreshing to see another such take; it's strange to watch so many supposedly technical people just roll down the track of assuming this is happening, when there are some fundamental problems with the idea. I understand why non-technical people are ready to marry and worship it or whatever, but serious people need to think more critically.

I agree. There is no defined or agreed-upon consensus on what AGI even means or implies. Instead, we will continue to see incremental improvements at the sorts of things AI is good at, like text and image generation, code generation, etc. The utopian dream of AI solving all of humanity's problems while people just chill on a beach basking in infinite prosperity is unfounded.

  • > There is no defined or agreed-upon consensus on what AGI even means or implies.

    Agreed; however, defining ¬AGI seems much more straightforward to me. The current crop of LLMs, impressive though they may be, is just not human-level intelligent. You recognize this as soon as you spend a significant amount of time using one.

    It may also be that they are converging on a type of intelligence that is fundamentally not the same as human intelligence. I’m open to that.

Why not?

  • I'm not an expert by any means, but everything I've seen of LLMs / machine learning looks like mathematical computation no different from what computers have always been doing at a fundamental level. If computers weren't AI before, then I don't think they are now just because the math they are doing has changed.

    Maybe something like the Game of Life is more in the right direction: you set up a system with just the right set of rules, with input and output, then turn it on and let it go, and the AI is an emergent property of the system over time. (A minimal sketch of that kind of rule-driven system is below.)
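
    For the flavor of it, Conway's rules on a small wrap-around grid (grid size and seed pattern are arbitrary choices here):

    ```python
    import itertools

    SIZE = 8
    glider = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}  # a classic moving pattern

    def step(live):
        """One Game of Life step: birth on 3 neighbors, survival on 2 or 3."""
        counts = {}
        for (r, c) in live:
            for dr, dc in itertools.product((-1, 0, 1), repeat=2):
                if (dr, dc) != (0, 0):
                    cell = ((r + dr) % SIZE, (c + dc) % SIZE)
                    counts[cell] = counts.get(cell, 0) + 1
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    cells = glider
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))  # the glider has moved one cell diagonally
    ```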

    • Why do you have a preconception of what an implementation of AGI should look like? LLMs are composed of the same operations that computers have always done. But they're organized in novel ways that have produced novel capabilities.


I think the same.

What do you call people like us? AI doomers? AI boomers?!

Exactly. I've said this from the start.

AGI means being able to simulate reality with high enough accuracy, faster than reality itself (which includes being able to simulate human brains), and so far that doesn't seem to be possible, due to computational irreducibility. (A small illustration of irreducibility is below.)
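
To see what computational irreducibility looks like in miniature, here is Wolfram's Rule 30 cellular automaton; as far as anyone knows, there is no shortcut to its state at step N other than computing all N steps (width and step count here are arbitrary):

```python
RULE = 30                      # Wolfram's Rule 30, a standard example
cells = [0] * 31
cells[15] = 1                  # single seed cell in the middle

def rule30_step(row):
    # Each new cell is the rule bit indexed by its 3-cell neighborhood.
    padded = [0] + row + [0]
    return [(RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```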

There's an easy way you can always tell that something is just hype: we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.

The amount of computing power we are putting in only changes that luck by a tiny fraction.

  • > we will never be able to make something smarter than a human brain on purpose. It effectively has to happen either naturally or by pure coincidence.

    Why is that? We can build machines that are much better than humans at some things (calculations, data crunching). How can you be certain that this is impossible in other disciplines?

    • That's just a tiny fraction of what a human brain can do. Sure, we can get something better in very narrow subjects, but something like being able to recognize patterns and apply them to solve problems is way beyond anything we can even think of right now.
