Comment by dinfinity

20 hours ago

> I think the rest of us should rest easy knowing that LLM's can't [...]

What if (when?) (AI-assisted) research moves AI beyond LLMs? Do you think that can't happen?

Not in the next decade. Won't get funded.

  • Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta’s former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models.

    LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. “The idea that you’re going to extend the capabilities of LLMs [large language models] to the point that they’re going to have human-level intelligence is complete nonsense,” he said. [0]

    [0] https://www.wired.com/story/yann-lecun-raises-dollar1-billio...

    • Why on earth would you start your AI startup in Paris? Of all places in western Europe, it's one of the hardest in which to find, attract, and keep talented people. Wages are super low, housing is expensive, and language is a barrier.

      1 reply →

I mean, Google already has MuZero, which I'm willing to bet has evolved quite a bit in private, because if anything is going to get us closer to actual AI, it's that.

Realistically, one can build an AI capable of reasoning (i.e., recurrent loops with branches) using very basic models that fit on a 3090, with a multi-agent configuration along the lines of https://github.com/gastownhall/gastown. Nobody has done it yet because we don't know how many agents are required or what the prompts for those agents look like.
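To make "recurrent loops with branches" concrete, here's a minimal sketch of that kind of multi-agent loop. Everything here is a hypothetical stand-in: the proposer/critic roles, the stopping rule, and the arithmetic in place of actual model calls are illustrative assumptions, not taken from gastown or any real system.

```python
# Hypothetical sketch of a recurrent multi-agent reasoning loop.
# Each "agent" is stubbed as a pure function standing in for a
# small local model; in a real setup these would be LLM calls.

def proposer(task, feedback):
    # Stand-in for a model call: propose a candidate answer,
    # refined by the critic's feedback from the previous pass.
    return task + feedback

def critic(candidate, target):
    # Stand-in for a verifier agent: emit a correction signal.
    # The branch decision (accept vs. retry) is driven by this.
    return target - candidate

def reasoning_loop(task, target, max_iters=10):
    """Recurrent loop: propose -> critique -> branch (accept or recur)."""
    feedback = 0
    candidate = task
    for step in range(max_iters):
        candidate = proposer(task, feedback)
        error = critic(candidate, target)
        if error == 0:       # branch: critic accepts, loop terminates
            return candidate, step
        feedback += error    # branch: critic rejects, loop recurs
    return candidate, max_iters

answer, steps = reasoning_loop(task=3, target=7)
# answer == 7 after one corrective pass
```

The open question from the comment maps directly onto this sketch: nobody knows how many such agent roles are needed, or what their prompts (here, the stubbed functions) should contain.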

The fundamental philosophical problem is whether that configuration can be arrived at through training, or whether AI agents have to go through equivalent "evolution epochs" in a simulated environment to be able to do all that. Because in either case, those prompts and models have to be information agnostic.