Comment by nathan_compton

1 day ago

This has long been how I've explained LLMs to non-technical people: as text transformation engines. To some extent, many common, tedious activities basically amount to transforming text from one well-known form into another (even some kinds of reasoning are this), so LLMs are very useful. But they just transform text between well-known forms.

And while it appears that lots of problems can be contorted into translation, "if all you have is a hammer, everything looks like a nail." Maybe we do hit a brick wall unless we can come up with a model that aligns more closely with actual human reasoning.