
Comment by devmor

1 day ago

LLMs in their entirety are unlikely to move past tokenization - it is the inescapable core inherited from the roots of NLP and Markov chains.
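For anyone unfamiliar with what tokenization actually does, here is a minimal greedy longest-match subword tokenizer. The vocabulary is made up for illustration - real LLM tokenizers use learned BPE or unigram vocabularies, but the flavor is the same: text becomes a sequence of discrete subword IDs before the model ever sees it.

```python
def tokenize(text, vocab):
    # Greedy longest-match subword tokenization (illustrative only;
    # production tokenizers use learned BPE/unigram merges).
    tokens = []
    i = 0
    while i < len(text):
        # Find the longest vocabulary entry starting at position i.
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown-character fallback
            i += 1
    return tokens

vocab = {"token", "ization", "un", "likely", " "}
print(tokenize("tokenization", vocab))  # → ['token', 'ization']
```

The point of the comment stands either way: whatever the vocabulary, the model only ever sees these discrete pieces, never the raw text.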

The future of AI and all of ML in general likely does exist beyond tokenization, but I find it unlikely we will get there without moving past LLMs as a whole.

We need to focus on the strengths of LLMs and abandon the incredibly wasteful effort being poured into making them put on convincing facsimiles of things they can't do, just because the output is in natural language and easily fools humans at first glance.

This is valid, but also hard to back up with any alternatives. At the end of the day it’s just a neural network with backprop, and new architectures will likely only be marginally better. So either we add new algorithms on top of it like RL, create a new learning algorithm (for example, forward-forward), or we figure out how to use more energy-efficient compute (analog, etc.) to scale several more orders of magnitude. It’s gonna take some time.
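For readers who haven't seen it, "forward-forward" refers to Hinton's proposal to train each layer locally: maximize a "goodness" score (sum of squared activations) on real data and minimize it on negative data, with no backward pass through the network. A toy one-layer sketch, with arbitrary sizes, learning rate, and synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))  # one layer's weights (arbitrary sizes)

def goodness(x, W):
    # Forward-forward "goodness": sum of squared ReLU activations.
    h = np.maximum(x @ W, 0.0)
    return np.sum(h ** 2, axis=-1)

def local_update(W, x_pos, x_neg, lr=0.01):
    # Local, per-layer gradient step: raise goodness on positive data,
    # lower it on negative data. No gradients flow through other layers.
    h_pos = np.maximum(x_pos @ W, 0.0)
    h_neg = np.maximum(x_neg @ W, 0.0)
    grad = x_pos.T @ (2 * h_pos) - x_neg.T @ (2 * h_neg)
    return W + lr * grad / len(x_pos)

x_pos = rng.normal(size=(32, 16))  # stand-in for "real" data
x_neg = rng.normal(size=(32, 16))  # stand-in for "negative" data

for _ in range(100):
    W = local_update(W, x_pos, x_neg)
```

After training, the layer assigns higher goodness to the positive batch than the negative one - each layer can learn this way on its own, which is the appeal over end-to-end backprop.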

  • Yeah, that's fair - it's very easy to tell that LLMs are not the end state, but it's near impossible to know what comes next.

    Personally I think LLMs will be relegated to transforming output and input from whatever new logic system is brought forth, rather than pretending they're doing logic by aggregating static corpora like we are now.

    • They can already do calculations by using tools, without pretending. Why not have them write code for logic too? That would extend their 'range'. The end user could be shown only a summary to keep things looking simple.
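The tool-use pattern described above can be sketched as a dispatcher: instead of guessing an answer in free text, the model emits a structured tool call and the runtime executes it. The registry and call format here are invented for illustration, not any vendor's actual API:

```python
import ast
import operator

# Arithmetic evaluated by walking the AST, so we never call eval()
# on model-generated text.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_calc(expr):
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

# Hypothetical tool registry: the model emits {"tool": ..., "args": ...}
# and the runtime does the actual work.
TOOLS = {"calculator": safe_calc}

def dispatch(tool_call):
    return TOOLS[tool_call["tool"]](tool_call["args"])

print(dispatch({"tool": "calculator", "args": "12 * (3 + 4)"}))  # → 84
```

The model's job reduces to producing the right call; the arithmetic itself is exact, which is the whole point of tool use over "pretending."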