Comment by devmor

15 hours ago

"Exponentials all tend to become sigmoids but you can't predict exactly when" is a true statement, but I'm not sure it needed an article.

This doesn't say much, and the author fights their own points a couple of times, suggesting that they maybe didn't think through what they wanted to write until they were mid-draft and realized their assumptions didn't match what they expected the data to say.

I really don't get the point of what I just read.

The point is the tiring argument from AI skeptics that “things are flattening, they have to,” which, while technically correct, says nothing, because no one knows when that will happen and we see no mechanism for it yet. Lindy’s law as a reasonable prediction under total uncertainty is interesting and insightful, and a lot of people don’t know about it or why it holds. I did enjoy the reference to this!
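
To make the “technically correct but says nothing” part concrete, here is a rough sketch of my own (not from the article; the parameters K, r, and t0 are made up): a logistic curve and a pure exponential matched to its early behavior differ by only about a percent over the whole pre-inflection window, so early data alone cannot tell you when the flattening will come.

```python
import numpy as np

# Hypothetical parameters for illustration (not from the article):
# logistic ceiling K, growth rate r, and inflection time t0.
K, r, t0 = 1000.0, 0.5, 20.0

def logistic(t):
    # Sigmoid: grows roughly exponentially while t << t0, then saturates at K.
    return K / (1.0 + np.exp(-r * (t - t0)))

def exponential(t):
    # Pure exponential matched to the logistic's starting value at t = 0.
    return logistic(0) * np.exp(r * t)

t = np.arange(0, 12)  # observations well before the (unknown-in-advance) inflection
gap = np.abs(exponential(t) - logistic(t)) / logistic(t)
print(f"max relative gap: {gap.max():.3f}")  # ~0.01, i.e. about 1%
```

Only data near or past the inflection point distinguishes the two curves, which is exactly why “it has to flatten eventually” carries no predictive content on its own.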

  • Nah, this is making a category error. You're assuming that AI skeptics agree that models are demonstrating intelligence along the same axis as humans and that with further improvement they will become equivalent to humans. I am an AI skeptic, and I disagree with this assessment.

    Model reasoning is on an s-curve, which is improving.

    Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.

    See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted. Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs. Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans on the intelligence axis to replace all labor.

    • > You're assuming that AI skeptics agree that models are demonstrating intelligence along the same axis as humans and that with further improvement they will become equivalent to humans.

      No, I’m definitely not saying this, and I don’t quite know what it means.

      > Model reasoning is on an s-curve, which is improving.

      Is this saying two different things? I think I might agree with this in principle, as in maybe there is some sort of s-curve or something like it, but do we see evidence of this? Where?

      > Model intelligence is not the same as reasoning. It's a different axis, and one I have not seen much movement on.

      Can you clarify this? What is the distinction, and what makes you say you have “not seen much movement”?

      > See, humans have a recursive form of intelligence which is capable of self-reflection and introspection. LLMs can only reason about tokens which have already been emitted

      LLMs do self-reflection and introspection in context, and tweaks such as value functions (serving a similar purpose to intuition or emotion) may make this better. Why do you feel self-reflection and introspection are a fundamental limitation here? Models reason over tokens they have emitted and already draw on their own learned behavior as well. Are you just talking about continual learning?

      Also, I feel people just latch onto LLMs as if they were all of AI. Why? SSMs, memory networks, recurrent neural networks, etc., are all part of AI but aren’t as popular because they can’t yet compete with LLMs in terms of scaling laws and training efficiency, due to e.g. hardware and software optimization and investment being focused on LLMs. If something else comes along that works better, we’ll just start scaling that.

      > Humans and LLMs do not share the same form of reasoning, and general human-like intelligence will not arise from the current architecture of LLMs.

      Very strong statement; is there any theoretical or experimental basis for this? I also don’t particularly care personally, other than as a point of curiosity. Why does it matter whether AI systems develop reasoning mechanisms equivalent to humans’? In fact it may be much better if they don’t.

      > Therefore it is a mistake to assume that continual improvement on the reasoning scale will result in something that is equivalent enough to humans on the intelligence axis to replace all labor.

      Idk, I didn’t say this explicitly, but I also don’t think it matters whether we have a system “equivalent to humans” or one that “replaces all labor”.


  • But those skeptics are initially responding to the constant AI hype claims that we are exponentially growing to AGI. So this article is in fact just a (very poorly thought through) attempt at saying “nuh uh, the hype might be true, you can’t prove it’s not yet!”

    • Yet the evidence is on the side of the hype? I’m not aware of any mechanism or cogent framework for what theoretical limits exist here, are you? Epoch had a great article a year ago looking at several bottlenecks in terms of scale, and back then we were about 4 orders of magnitude away from hitting them. We’re probably now closer to 3. Yet scale is only part of the performance equation; a fairly big chunk of progress comes from algorithmic or curation-related contributions. The point of the article is:

      > But those skeptics are initially responding to the constant AI hype claims that we are exponentially growing to AGI.

      This is a meaningless statement or at best just strawmanning.
