
Comment by cfn

2 years ago

The point is that we know many things as facts that we cannot explain. We may be looking for the explanation, but as yet we don't know why many things are the way they are (as in the example above).

Actually, LLMs are also a good example. We don't know why ChatGPT generates apparently cogent text and answers. What we do know is that if we train it this way and apply a bunch of optimizations, we get a machine that appears to be thinking, or at least one we can have a decent conversation with. There are many efforts to explain it (I remember recently reading a paper analysing the GPT-3 neuron that determines 'an' vs 'a' in English).
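To give a flavour of that kind of interpretability probing, here is a rough sketch (not the paper's actual method) of how one might look for hidden units that track the 'a' vs 'an' distinction in GPT-2, using Hugging Face transformers. The example sentences, the layer choice, and the ranking heuristic are all illustrative assumptions, and it inspects residual-stream activations rather than individual MLP neurons as in the real analysis.

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Sentences where the context forces "an" vs "a"; purely illustrative.
an_texts = ["I picked an apple from the apple tree.",
            "She hired an engineer from the engineering firm."]
a_texts  = ["I picked a pear from the pear tree.",
            "She hired a lawyer from the law firm."]

LAYER = 6  # which residual-stream layer to inspect (arbitrary choice)

def acts_before_article(text, article):
    """Return hidden activations at the token just before the article."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    art_id = tokenizer(" " + article).input_ids[0]
    pos = (ids[0] == art_id).nonzero()[0].item()  # first occurrence of the article
    with torch.no_grad():
        hidden = model(ids, output_hidden_states=True).hidden_states[LAYER]
    return hidden[0, pos - 1]  # activations where the model decides the article

an_acts = torch.stack([acts_before_article(t, "an") for t in an_texts]).mean(0)
a_acts  = torch.stack([acts_before_article(t, "a")  for t in a_texts]).mean(0)

# Hidden units whose activation differs most between the two conditions.
diff = (an_acts - a_acts).abs()
print(diff.topk(5).indices.tolist())
```

Of course, finding units that correlate with the distinction is a long way from explaining why the model gets it right, which is rather the point.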

Finally, all science is falsifiable by definition, so what we think we know now may be disproven tomorrow.