Comment by heavyset_go

1 month ago

> Reasonable humans understand the request at hand. LLMs just output something that looks like it will satisfy the user. It's a happy accident when the output is useful.

Sure, but that doesn't prove anything about the properties of the output. Change a few words, and this could be an argument against the very possibility of what we now call LLMs (which do, of course, exist).