Comment by unkulunkulu

12 days ago

I came to the comments to ask a question, but since the post is already two days old, I'll try asking you in this thread.

What do you think about his argument that LLMs are "not able to distinguish possible languages from impossible ones"?

And why is it inherent in ML design?

Does he assume there could be some instrument/algorithm that could do this with higher certainty than an LLM or some other ML model?

Certainly they can be used to produce a prediction/answer to this question, but is he arguing that this answer has no credibility? An LLM is literally a model, i.e. a probability distribution over what is language and what is not (see the sketch below), so what gives?
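To make concrete what I mean by "probability distribution": here's a rough sketch of scoring strings with an LM. The model choice (gpt2) and the example sentences are just illustrative, not anything from his argument; the point is only that an autoregressive LM assigns a log-probability to any token sequence, so in principle it already "rates" strings on how language-like they are.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_prob(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean
        # negative log-likelihood per predicted token.
        loss = model(ids, labels=ids).loss
    # Multiply back by the number of predicted tokens to get a total.
    return -loss.item() * (ids.shape[1] - 1)

# A "possible" English sentence vs. a scrambled, arguably "impossible" one:
print(log_prob("The cat sat on the mat."))
print(log_prob("Mat the on sat cat the."))
```

Whether a score like this says anything about *humanly* possible vs. impossible languages is exactly the question, of course; but the model clearly does output an answer.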

Current models are probably tuned fairly "strictly" to follow existing languages closely, i.e. they would say "no-no" to some yet-unknown but possible language. But isn't that improvable in theory?

Or is he arguing precisely that this "exterior" does not directly correlate with the "internal processes and faculties", and so cannot support such predictions in principle?