Comment by flatline

1 day ago

I believe they can create a novel instance of a system from a sufficient number of relevant references - i.e. implement a set of already-known features without (much) verbatim code duplication. LLMs are certainly capable of this level of generalization, given the sheer breadth of their corpus beyond the directly relevant references. Whether they can expand beyond that into something truly novel from a feature/functionality standpoint is a whole other, and less well-defined, question. I tend to agree that they are closed systems relative to their corpus. But then, aren't we? I feel like the aperture for true novelty to enter is vanishingly small, which is why cultures put a premium on it in the arts, technological innovation, etc. Almost every human endeavor is just copying and iterating on prior examples.

Almost all of the work in making a new operating system or a Game Boy emulator or something is in characterizing the problem space and defining the solution. How do you know what such-and-such instruction does? What is the ideal way to handle this memory structure here? It's the kind of knowledge you gain from spending time tracking down a specific bug or optimizing a subroutine.
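
To make the instruction example concrete, here is a minimal sketch of what implementing a single opcode involves. The Cpu class and method name are invented scaffolding; the opcode, registers, and flag semantics are the real Game Boy (SM83) ones, as far as I know:

    # One SM83 instruction: ADD A,B (opcode 0x80).
    class Cpu:
        def __init__(self):
            self.a = 0      # accumulator
            self.b = 0      # general-purpose register B
            self.f = 0      # flags: Z=bit7, N=bit6, H=bit5, C=bit4

        def add_a_b(self):  # A <- A + B
            result = self.a + self.b
            half = ((self.a & 0xF) + (self.b & 0xF)) > 0xF  # carry out of bit 3
            self.f = 0                       # N (subtract flag) stays 0 on adds
            if (result & 0xFF) == 0:
                self.f |= 0x80               # Z: result is zero
            if half:
                self.f |= 0x20               # H: half-carry
            if result > 0xFF:
                self.f |= 0x10               # C: carry out of bit 7
            self.a = result & 0xFF

Multiply that by a couple hundred opcodes, each with its own flag quirks, and you see where the real work lives.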

When I create something, it's an exploratory process. I don't just guess what I'm going to do based on my previous step and hope it comes out well on the first try. Let's say I decide to make a car with 5 wheels. I would go through several chassis designs and different engine configurations until I eventually had something that worked well. Maybe some are too weak, some too expensive, some too complicated. Maybe some prototypes get to the physical testing stage while others don't. Finally, I publish this design for other people to work on.
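
That loop is easy to caricature in code. A toy sketch of generate-and-test iteration (every name and number here is invented for illustration):

    import random

    def evaluate(design):
        # stand-in for physical testing: penalize weak, costly,
        # overly complicated prototypes
        return -(design["weakness"] + design["cost"] + design["complexity"])

    def mutate(design):
        # propose a variant of the current best design
        d = dict(design)
        key = random.choice(list(d))
        d[key] = max(0.0, d[key] + random.uniform(-1.0, 1.0))
        return d

    best = {"weakness": 5.0, "cost": 5.0, "complexity": 5.0}
    for _ in range(100):                  # rounds of prototyping
        candidate = mutate(best)
        if evaluate(candidate) > evaluate(best):
            best = candidate              # only surviving prototypes move on
    print(best)

The point is the discarded candidates: the process is defined as much by what gets rejected as by the final design.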

If you ask the LLM to work on a novel concept it hasn't been trained on, it will usually spit out some nonsense that either doesn't work or works poorly, or it will refuse to provide a specific enough solution. If it has been trained on previous work, it will spit out something that looks similar to the solved problem in its training set.

These AI systems don't undergo the process of trial and error that would suggest they are creating something novel. Their process of creation does not react to the environment. They are just cribbing off of extant solutions they've been trained on.

  • I'm literally watching Claude Code "undergo the process of trial and error" in another window right now.

Here's a thought experiment: if modern machine learning systems had existed in the early 20th century, would they have been able to produce an equivalent to the theory of relativity? Could they have advanced our understanding of the universe? Taught us about flight dynamics and taken us into space? Invented the Turing machine, the von Neumann architecture, transistors?

If yes, why aren't we seeing glimpses of such genius today? If we've truly invented artificial intelligence, and are on our way to superintelligence and general intelligence, why aren't we seeing breakthroughs across all fields of science? Why are the state-of-the-art applications of this technology based on pattern recognition and applied statistics?

Can we explain this by saying that we're only a few years into it, and that it's too early to expect fundamental breakthroughs? And that by 2027, or 2030, or surely by 2040, all of these things will suddenly materialize?

I have my doubts.

  • >Here's a thought experiment: if modern machine learning systems had existed in the early 20th century, would they have been able to produce an equivalent to the theory of relativity? Could they have advanced our understanding of the universe? Taught us about flight dynamics and taken us into space? Invented the Turing machine, the von Neumann architecture, transistors?

    Only a small percentage of humanity are/were capable of doing any of these. And they tend to be the best of the best in their respective fields.

    >If yes, why aren't we seeing glimpses of such genius today?

    Again, most humans can't actually do any of the things you just listed. Only our most intelligent can. LLMs are great, but they're not (yet?) as capable as our best and brightest in most respects (and in many ways, they lag behind the average human), so why would you expect such genius now?

    • > Only a small percentage of humanity are/were capable of doing any of these. And they tend to be the best of the best in their respective fields.

      Sure, agreed, but the difference between a small percentage and zero percent is infinite.

    • > Only a small percentage of humanity are/were capable of doing any of these. And they tend to be the best of the best in their respective fields.

      A definite, absolute, and unquestionable "no" and a small but real chance are in entirely different categories.

      You may wait forever for a bunch of rocks to sprout, but I would put my money on a bunch of random seeds, even if I don't know how they were kept.

    • > LLMs are great, but they're not (yet?) as capable as our best and brightest in most respects (and in many ways, they lag behind the average human), so why would you expect such genius now?

      I'm not expecting novel scientific theories today. What I am expecting are signs and hints of such genius, something that points in the direction all the tech CEOs claim we're headed in. So far I haven't seen any of it.

      And, I'm sorry, I don't buy the excuse that these tools are not "yet" as capable as the best and brightest humans. They contain the sum of human knowledge, far more than any individual human in history. Are they not intelligent, capable of thinking and reasoning? Are we not on the verge of superintelligence[1]?

      > we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them.

      If all this is true, surely we should be seeing incredible results produced by this technology. If not by itself, then surely by "amplifying" the work of the best and brightest humans.

      And yet... All we have to show for it are some very good applications of pattern matching and statistics, a bunch of gamed and misleading benchmarks and leaderboards, a whole lot of tech demos and solutions in search of a problem, and the very real harms of flooding us with even more spam, scams, and disinformation, and of devaluing human work with low-effort garbage.

      [1]: https://blog.samaltman.com/the-gentle-singularity

    • Were they the best of the best? Or were they just in the right place at the right time to be exposed to a novel idea?

      I am skeptical of the claim that you need a 140 IQ to make scientific breakthroughs, because you don't need a 140 IQ to understand special relativity (see the formula below). It is a matter of motivation and exposure to new information. The vast majority of the population doesn't benefit from working in some niche field of physics in the first place.

      Perhaps LLMs will never be in the right place at the right time because they are only trained on ideas that already exist.
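
      To the point about accessibility: the core results of special relativity are plain algebra. Time dilation, for instance, is just the standard formula (in LaTeX notation)

          t' = \frac{t}{\sqrt{1 - v^2/c^2}}

      i.e. a moving clock runs slow by the factor \sqrt{1 - v^2/c^2}. Understanding that takes high-school math, even if discovering it took an Einstein.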
