Comment by aurizon
5 days ago
This is a lot like the 50 million monkeys on 50 million typewriters that will eventually write Shakespeare... We have all heard this; pity the poor proofreaders who would have to check every output in search of the holy grail: zero errors. In a similar way, LLMs are permutational cross-associating engines, matched with sieves that filter out the dross. Less filtering = more dross, AKA slop. They can certainly create enormous masses of bad code, and with well-filtered screens for dross we can see they can create passable code; however, stray flaws (flies) can creep in and escape the filter, and humans are better at seeing flies in their oatmeal. AI seems very good at mounting permutational assaults on masses of code to find the flies (zero-days), so I expect it to make code more secure, as few humans have the ability or time to mount that sort of permutational assault on code bases. I see this idea has already taken root among code writers as well as hackers/China etc. These two opposing forces will assault code bases, one to break and one to fortify. In time there will be fewer places where code bases hide flaws, as soon all new code will be screened by AI to find breaks, so that little or no code will contain these bugs.
> This is a lot like the 50 million monkeys on 50 million typewriters will eventually write Shakespeare...
"Eventually" here is something on the order of a few expected lifespans of the universe.
The fact that we're getting meaningful results out of LLMs on a human timescale means that they're doing something very different.
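A back-of-the-envelope calculation makes the point. Assuming a 27-key typewriter (26 letters plus space), a single 18-character phrase like "to be or not to be", and the comment's 50 million monkeys each typing 10 keys a second (all numbers here are illustrative assumptions, not from the thread), blind search already takes billions of years:

```python
KEYS = 27                      # 26 letters + space, typed uniformly at random
PHRASE_LEN = 18                # e.g. "to be or not to be"
MONKEYS = 50_000_000
KEYSTROKES_PER_SEC = 10        # per monkey
SECONDS_PER_YEAR = 3.15e7

# Expected number of random strings tried before hitting one specific phrase.
expected_attempts = KEYS ** PHRASE_LEN          # ~5.8e25 candidates

# Crude upper bound on the monkeys' combined attempt rate.
attempts_per_sec = MONKEYS * KEYSTROKES_PER_SEC  # 5e8 per second

expected_years = expected_attempts / attempts_per_sec / SECONDS_PER_YEAR
print(f"{expected_attempts:.1e} attempts, ~{expected_years:.1e} years")
# ~3.7e9 years for one line; the complete works are astronomically worse
```

Each additional character multiplies the search space by 27, so scaling from one line to the full plays pushes "eventually" far beyond any number of universe lifetimes.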
Yes, the space is indeed deep/wide, but LLMs prune as they proceed, eliminating whole swathes of the search space at each step. Smart fuzzing, in a way.
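The "smart fuzzing" analogy can be made concrete with a toy coverage-guided fuzzer: candidates that reach no new branch are discarded, so the search never revisits swathes of the input space that blind monkeys would grind through forever. The target program, mutation scheme, and iteration count below are all invented for illustration:

```python
import random

def target(data: bytes) -> set:
    """Toy program under test; returns the set of branch IDs it executes."""
    cov = {0}
    if len(data) > 3:
        cov.add(1)
        if data[0] == ord('F'):
            cov.add(2)
            if data[1] == ord('U'):
                cov.add(3)
                if data[2] == ord('Z'):
                    cov.add(4)   # the deepest "fly": rare under blind search
    return cov

def mutate(data: bytes) -> bytes:
    """Blind mutation: flip one random byte to a random value."""
    if not data:
        return bytes([random.randrange(256)])
    i = random.randrange(len(data))
    b = bytearray(data)
    b[i] = random.randrange(256)
    return bytes(b)

def fuzz(iterations: int = 200_000):
    """Coverage-guided loop: keep only inputs that reach a new branch."""
    random.seed(0)
    corpus = [b"AAAA"]
    seen = set()
    for _ in range(iterations):
        parent = random.choice(corpus)
        child = mutate(parent)
        cov = target(child)
        if not cov <= seen:      # new branch reached -> keep; else discard
            seen |= cov
            corpus.append(child)
    return seen, corpus

seen, corpus = fuzz()
print(sorted(seen), len(corpus))
```

The corpus stays tiny (one entry per coverage gain) while still reaching the deepest branch, which blind random generation of 4-byte strings would hit only about once in 2^24 tries. Real tools like AFL and libFuzzer work on the same keep-what-makes-progress principle, just with instrumented binaries instead of a toy target.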