Comment by jdthedisciple

1 year ago

GPTs are trained on natural language.

Why should it surprise anyone that they would fail at cellular automata?

Current LLM architectures have fundamental limitations, which means they cannot learn certain problems regardless of how much they are trained.

A simple example is that they fundamentally cannot balance parentheses nested deeper than half their context width: a string with nesting depth d is at least 2d tokens long, so once d exceeds half the window, the matching pairs no longer fit in context at all.
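
A toy sketch of the intuition (not a model of any real architecture): compare an exact stack-based checker against a "model" that can only see its last `width` characters. Once the opening parentheses scroll out of the window, the windowed checker has no way to recover them.

```python
def is_balanced(s: str) -> bool:
    """Exact check: remembers every unmatched '(' via a depth counter."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

def windowed_guess(s: str, width: int) -> bool:
    """Toy stand-in for a fixed-context model: it only ever sees the
    last `width` characters, so unmatched '(' that scroll out of the
    window are simply forgotten."""
    window = ""
    for ch in s:
        window = (window + ch)[-width:]
    return is_balanced(window)

# Balanced string of depth 16 -> 32 characters total.
deep = "(" * 16 + ")" * 16
print(is_balanced(deep))         # exact checker gets it right
print(windowed_guess(deep, 16))  # window of 16 never holds both halves
```

With a window of 16, the final window contains only the 16 closing parentheses, so the windowed checker wrongly reports the string as unbalanced, which is the "half the context width" failure in miniature.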