Comment by mrob

6 hours ago

> Character counting errors are a side effect of tokenization, which is a performance optimization. If we scaled the hardware up enough, we could train on raw bytes and avoid it.

No, tokenization is not the only reason. A next-word predictor fundamentally has a hard time executing algorithms, even ones as simple as counting.
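
The tokenization half of the claim is easy to see directly. A minimal sketch, assuming the tiktoken package is available (the choice of cl100k_base is just one example; the exact split depends on the encoding):

    import tiktoken

    # The model never sees characters, only opaque token IDs.
    enc = tiktoken.get_encoding("cl100k_base")
    for token_id in enc.encode("strawberry"):
        # Each ID maps back to a multi-character chunk of bytes;
        # the individual letters (and the count of "r"s) stay hidden.
        print(token_id, enc.decode_single_token_bytes(token_id))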

  • Counting is one of the algorithms that can be expressed as a RASP program, which transformers closely approximate. (A toy emulation of the RASP counting idiom appears after this thread.)

    • "Close" famously only counts in horseshoes and hand grenades. Algorithms, just as famously, are a domain where an off-by-one is still wrong.
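
For reference, the RASP counting idiom is select followed by selector_width. Below is a toy Python emulation of those two primitives; the names follow the RASP paper, but the emulation itself is illustrative, not the real interpreter:

    import numpy as np

    def select(keys, queries, predicate):
        # Boolean attention pattern: row q, column k is True when the
        # predicate holds between key k and query q.
        return np.array([[predicate(k, q) for k in keys] for q in queries])

    def selector_width(sel):
        # How many positions each query selects -- RASP's counting primitive.
        return sel.sum(axis=1)

    tokens = list("strawberry")
    sel = select(tokens, tokens, lambda k, q: k == "r")
    print(selector_width(sel))  # every position reads 3: the number of r's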