
Comment by plaidfuji

2 years ago

You’re absolutely right, and I’m sure something resembling higher-level pattern matching is present in the architecture and weights of the model. I’m just saying that I’m not aware of “logical thought” being explicitly optimized or designed for - it’s more of a sometimes-emergent feature of a machine that tries to approximate the content of the internet, which for some topics is dominated by mostly logical thought. I’m also unaware of a ground truth against which “correct facts” could even be trained.

> I’m also unaware of a ground truth against which “correct facts” could even be trained.

Seems like there are quite a few obvious possibilities here off the top of my head. Ground truth for correct facts could be:

1) Wikidata

2) Mathematical ground truth, including physics (problems can be generated and their results validated automatically)

3) Programming ground truth (can be validated by running the code against defined inputs/outputs)

4) Chess

5) Human-labelled images and video

6) Map data

7) Depending on your viewpoint, peer-reviewed journals, as long as claims are cited with sources.
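Points 2 and 3 above are the easiest to sketch concretely: the answer is *computed* rather than human-labelled, so you can mint unlimited (question, ground truth) pairs and check a model's output against them. Here's a minimal, illustrative sketch for the arithmetic case (all function names here are made up for the example, not from any real training pipeline):

```python
import operator
import random

# Operators whose results we can compute exactly - the "ground truth" needs
# no human labeller, just evaluation.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_arithmetic_example(rng):
    """Generate one (question, answer) pair with machine-computed ground truth."""
    a, b = rng.randint(1, 999), rng.randint(1, 999)
    op = rng.choice(sorted(OPS))
    return f"{a} {op} {b}", OPS[op](a, b)

def validate(question, candidate):
    """Re-derive the ground truth and check a model's candidate answer against it."""
    a, op, b = question.split()
    return OPS[op](int(a), int(b)) == candidate

rng = random.Random(0)
question, answer = make_arithmetic_example(rng)
assert validate(question, answer)
```

The programming case (point 3) is the same idea with a heavier `validate`: run the generated code against defined inputs and compare outputs. Chess (point 4) likewise: legality and game outcomes are mechanically checkable.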