
Comment by FloorEgg

7 days ago

https://archive.is/D4EYW

For anyone seeing a 404

The skepticism surrounding AGI often feels like an attempt to judge a car by its inability to eat grass. We treat "cognitive primitives" like object constancy and causality as if they are mystical, hardwired biological modules, but they are essentially just high-dimensional labels for invariant relationships within a physical manifold. Object constancy is not a pre-installed software patch; it is the emergent realization of spatial-temporal symmetry. Likewise, causality is nothing more than the naming of a persistent, high-weight correlation between events. When a system can synthesize enough data at a high enough dimension, these so-called "foundational" laws dissolve into simple statistical invariants. There is no "causality" module in the brain, only a massive correlation engine that has been fine-tuned by evolution to prioritize specific patterns for survival.

The critique that Transformers are limited by their "one-shot" feed-forward nature also misses the point of their architectural efficiency. Human brains rely on recurrence and internal feedback loops largely as a workaround for our embarrassingly small working memory—we can barely juggle ten concepts at once without a pen and paper. AI doesn't need to mimic our slow, oscillating neural signals when its global attention can process a massive, parallelized workspace in a single pass. This "all-at-once" calculation of relationships is fundamentally more powerful than the biological need to loop signals until they stabilize into a "thought."
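To make the contrast concrete, here's a toy sketch (illustrative only—plain Python, toy numbers, no learned weights or real model): attention scores every pairwise token interaction in one parallelizable pass, while a recurrent net has to thread a single hidden state through the sequence one step at a time.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_one_pass(vecs):
    """All pairwise interactions in one pass: every output is a
    softmax-weighted mix over the ENTIRE sequence at once."""
    out = []
    for q in vecs:
        scores = softmax([sum(a * b for a, b in zip(q, k)) for k in vecs])
        out.append([sum(w * v[i] for w, v in zip(scores, vecs))
                    for i in range(len(q))])
    return out

def rnn_sequential(vecs):
    """Recurrence: the state must be looped through the sequence,
    so step t cannot begin until step t-1 has finished."""
    h = [0.0] * len(vecs[0])
    for v in vecs:
        h = [math.tanh(hi + vi) for hi, vi in zip(h, v)]
    return h
```

The outer loop in `attention_one_pass` is written sequentially for clarity, but nothing in it depends on a previous iteration—each row could run in parallel, which is exactly the property `rnn_sequential` lacks.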

Furthermore, the obsession with "fragility"—where a model solves quantum mechanics but fails a child’s riddle—is a red herring. Humans aren't nearly as "general" as we tell ourselves; we are also pattern-matchers prone to optical illusions and simple logic traps, regardless of our IQ. Demanding that AI replicate the specific evolutionary path of a human child is a form of biological narcissism. If a machine can out-calculate us across a hundred variables where we can only handle five, its "non-human" way of knowing is a feature, not a bug. Functional replacement has never required biological mimicry; the jet engine didn't need to flap its wings to redefine flight.

  • Hey, thanks for responding. You're a very evocative writer!

    I do want to push back on some things:

    > We treat "cognitive primitives" like object constancy and causality as if they are mystical, hardwired biological modules, but they are essentially just

    I don't feel like I treated them as mystical - I cite several studies that define what they are and correlate them to certain structures in the brain that developed millennia ago. I agree that ultimately they are "just" fitting to patterns in data, but the patterns they fit are really useful, and were fundamental to human intelligence.

    My point is that these cognitive primitives are genuinely useful for reasoning, and especially for the sort of reasoning that would allow us to call an intelligence general in any meaningful way.

    > This "all-at-once" calculation of relationships is fundamentally more powerful than the biological need to loop signals until they stabilize into a "thought."

    The argument I cite is from complexity theory. It's a proof that feed-forward networks are mathematically incapable of representing certain kinds of algorithms.

    > Furthermore, the obsession with "fragility"—where a model solves quantum mechanics but fails a child’s riddle—is a red herring.

    AGI can solve quantum mechanics problems, but verifying that those solutions are correct still (currently) falls to humans. For the time being, we are the only ones who possess the robustness of reasoning we can rely on, and it is exactly because of this that fragility matters!

    • > The argument I cite is from complexity theory. It's proof that feed-forward networks are mathematically incapable of representing certain kinds of algorithms.

      Claiming FFNs are mathematically incapable of certain algorithms misses the fact that an LLM in production isn't a static circuit, but a dynamic system. Once you factor in autoregression and a scratchpad (CoT), the context window effectively functions as a Turing tape, which sidesteps the TC0 complexity limits of a single forward pass.
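      A toy sketch of the mechanism being claimed here (illustrative only—this is not a real LLM, and it says nothing about which circuit class any particular function falls in): `one_step` stands in for a single fixed-depth forward pass doing a constant amount of work per call, and the autoregressive loop appends each output token to the context, so the composed computation grows with the number of steps even though every individual pass stays shallow.

```python
def one_step(context: str) -> str:
    """One 'forward pass': constant work per call. Here it XORs the
    running parity (on the scratchpad) with the next input bit."""
    bits, scratch = context.split("|")
    if len(scratch) == len(bits):
        return "HALT"
    prev = int(scratch[-1]) if scratch else 0
    return str(prev ^ int(bits[len(scratch)]))

def autoregress(bits: str) -> int:
    """CoT-style driver: the growing context is the 'tape'. Each
    emitted token is appended and fed back into the next pass."""
    context = bits + "|"
    while True:
        token = one_step(context)
        if token == "HALT":
            return int(context[-1])
        context += token  # the scratchpad accumulates intermediate state

print(autoregress("1101001"))  # running parity of the bit-string -> 0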

      > AGI can solve quantum mechanics problems, but verifying that those solutions are correct still (currently) falls to humans. For the time being, we are the only ones who possess the robustness of reasoning we can rely on, and it is exactly because of this that fragility matters!

      We haven't "sensed" or directly verified things like quantum mechanics or deep space for over a century; we rely entirely on a chain of cognitive tools and instruments to bridge that gap. LLMs are just the next layer of epistemic mediation. If a solution is logically consistent and converges with experimental data, the "robustness" comes from the system's internal logic.

  • If human biological intelligence is our reference for general intelligence, then being skeptical about AGI is reasonable given the capabilities of current systems. This isn't biological narcissism; this is setting a datum (this wasn't written by chatgpt, I promise).

    Humans have a great capacity for problem solving and creativity which, at its heights, completely dwarfs other creatures on this planet. What else would we reference for general intelligence if not ourselves?

    My skepticism towards AGI is primarily supported by my interactions with current systems that are contenders for having this property.

    Here's a recent conversation with chatgpt.

    https://chatgpt.com/share/69930acc-3680-8008-a6f3-ba36624cb2...

    This system doesn't seem general to me; it seems like a specialized tool that has really good logic-mimicry abilities. I asked it whether the silence response was hard-coded. It said no, then went on to explain how the silence was hard-coded via a separate layer from the LLM portion, which would otherwise just respond indefinitely.

    Its output is extremely impressive, but general intelligence it is not.

    On your final point, about functional replacement not requiring biological mimicry: we don't know whether biological mimicry is required or not. We can only test things until we find out, or until we gain some greater understanding of reality that allows us to prove how intelligence emerges.