Comment by idiotsecant

2 days ago

All the big LLMs are no longer just token predictors. They are beginning to incorporate memory, chain of thought, and other architectural tricks that use the token predictor in novel ways to produce some startlingly useful output.

It's certainly the case that an LLM alone cannot achieve AGI. As a component of a larger system, though? That remains to be seen. Maybe all we need to do is duct-tape a limbic system and some memory onto an LLM, and the result is something sort of like an AGI.
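To make the "LLM as a component" idea concrete, here's a minimal, purely illustrative sketch: a loop that bolts a crude memory onto a bare token predictor. The generate() function, the Agent class, and the retrieval scheme are all hypothetical stand-ins, not any real system's API.

    # Illustrative only: an agent loop wrapping a stand-in token predictor.
    def generate(prompt: str) -> str:
        # Stub for an LLM completion; swap in a real model call here.
        return f"(model output for: {prompt[:40]}...)"

    class Agent:
        def __init__(self):
            self.memory: list[str] = []  # long-term store the raw LLM lacks

        def step(self, observation: str) -> str:
            # Recall a few memories (here, naively, the most recent ones).
            recalled = "\n".join(self.memory[-3:])
            prompt = f"Memory:\n{recalled}\n\nObservation: {observation}\nAction:"
            action = generate(prompt)
            # Write the exchange back so future steps can condition on it.
            self.memory.append(f"{observation} -> {action}")
            return action

    agent = Agent()
    print(agent.step("user asks for the weather"))
    print(agent.step("user asks what they just asked about"))

The token predictor stays dumb; any persistence or "agency" lives in the scaffolding around it.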

It's a little bit like saying that a ball bearing can't possibly ever be an internal combustion engine. True, but it sidesteps the point a little bit.