Comment by masfuerte

1 year ago

It's funny, because AI companies are currently spending fortunes on mathematicians, physicists, chemists, software engineers, etc. to create good training data.

Maybe this money would be better spent on creating a Lenat-style ontology, but I guess we'll never know.

We may. LLMs are capable, even at times arguably inventive, but they lack the ability to test against ground truth; ontological reasoners can never exceed the implications of the ground truth they are given, but within that scope they reason perfectly. These seem like obviously complementary strengths.
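
The asymmetry above can be sketched with a toy forward-chaining reasoner (an illustrative Python sketch, not Cyc's actual machinery; the fact and rule names are made up): it derives exactly the closure of its given facts under its given rules, nothing more, which is why it is reliable within scope but can never go beyond it.

```python
def forward_chain(facts, rules):
    """Derive the closure of `facts` under `rules`.

    facts: set of atoms (strings)
    rules: list of (premises, conclusion) pairs, where premises is a set
    """
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only when all premises are already derived.
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"socrates_is_human"}
rules = [({"socrates_is_human"}, "socrates_is_mortal")]
print(forward_chain(facts, rules))
# Everything printed is an implication of the ground truth supplied;
# the reasoner cannot invent a fact outside that closure.
```

An LLM, by contrast, can propose facts outside the closure but cannot certify them; a hybrid would let the generative model propose and the reasoner check.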