Comment by ActorNightly

10 months ago

Anything that is forward-compute only is never going to be anywhere close to AI. LLMs are a dead end.

>or simulating the evolution of digital creatures in the wild web.

You are on the right track with this thinking.

Fundamentally, AI in the actual sense of having intelligence will be something that can run simulations in parallel and pick the winning result, much like genetic algorithms. The rules for the simulation it will obtain from interacting with the outside world, and the map of inputs to outputs will be stored in an LLM-like structure as memory.
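To make the "run simulations in parallel, pick the winning result" idea concrete, here's a minimal genetic-algorithm sketch. The task (guessing a hidden number), the fitness function, and the mutation scheme are all placeholder choices of mine, not anything specific the above commits to:

```python
import random

def fitness(candidate, target=42):
    # One "simulation": closer to the hidden target scores higher.
    return -abs(candidate - target)

def evolve(generations=200, pop_size=32):
    population = [random.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Run every candidate's simulation (these could run in parallel).
        population.sort(key=fitness, reverse=True)
        winners = population[: pop_size // 4]  # pick the winning results
        # Next generation: keep the winners (elitism) plus mutated copies.
        population = winners + [
            random.choice(winners) + random.randint(-3, 3)
            for _ in range(pop_size - len(winners))
        ]
    return max(population, key=fitness)

print(evolve())  # converges toward the target
```

The point is just the loop structure: simulate everything, select winners, vary them, repeat. No gradient, no differentiable objective.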

The big question is how you build it. Imagine it's running on hardware with a UART card hooked up to a network cable. It should eventually be able to figure out how to get on the internet simply by setting 1s and 0s in the right places at the right times, how to host a server and build an interface that a person can connect to and talk to it through for more information (if it decides that's even necessary), and so on.

I don't think an objective function that it can minimize/maximize is really applicable, so by extension I don't think we can get to this AI agent through traditional training; the process to make this algorithm has to mimic evolution. I.e., we basically create some ambiguous structure of a neural net with a clock and recursive connections, and then start doing something like a genetic algorithm, with a fitness function of being able to figure more shit out. Obviously this will take exponentially more compute than the world currently has for running LLMs.
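A hedged sketch of what "a neural net with recursive connections, evolved by a genetic algorithm" could look like in miniature. Everything concrete here is a stand-in I picked for illustration: a single recurrent unit, a toy environment (an alternating bit stream it has to learn to predict), and a placeholder fitness of "predictions gotten right":

```python
import math
import random

PATTERN = [0, 1] * 16  # toy "outside world" the net interacts with

def run_net(weights, pattern):
    w_in, w_rec, bias = weights
    state, score = 0.0, 0
    for t in range(len(pattern) - 1):
        # Recurrent step: the new state folds the input into the old state.
        state = math.tanh(w_in * pattern[t] + w_rec * state + bias)
        prediction = 1 if state > 0 else 0
        score += prediction == pattern[t + 1]  # fitness: correct predictions
    return score

def evolve_net(generations=300, pop_size=40):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: run_net(w, PATTERN), reverse=True)
        elite = pop[: pop_size // 5]  # survivors of this generation
        pop = elite + [
            [w + random.gauss(0, 0.2) for w in random.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=lambda w: run_net(w, PATTERN))

best = evolve_net()
print(run_net(best, PATTERN), "of", len(PATTERN) - 1)
```

Note there's still an explicit fitness function here, which is exactly the part I'm claiming doesn't scale: "figure more shit out" is open-ended and doesn't reduce to a fixed scoring rule, which is why the real thing would need vastly more compute than this toy.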