
Comment by bobbylarrybobby

11 hours ago

IIRC IBM’s Watson (the one that played Jeopardy) used primitive NLP (imagine!) to form a tree of factual relations and then used this tree to construct Prolog queries that would produce an answer to a question. One could imagine that by swapping out the NLP part for an LLM, the model would have 1. a more thorough factual basis against which to write Prolog queries and 2. a better understanding of the queries it should write to get at answers (for instance, it might exploit more tenuous relations between facts than primitive NLP could).

Not so "primitive" NLP. Watson started with what its team called a "shallow parse" of a sentence using a dependency grammar and then matched the parse against an ontology consisting of good old-fashioned frames [1]. That's not as "advanced" as an LLM, but it was far more reliable.

I believe the ontology itself was indeed implemented in Prolog, but I forget the architecture details.
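To make the parse-then-match idea concrete: here is a minimal sketch, in Python rather than Prolog, of what "match a shallow parse against a frame ontology" could look like. This is purely illustrative and assumes invented names (FACTS, FRAMES, query); it is not Watson's actual architecture.

```python
# Illustrative sketch only -- not Watson's actual code or data.
# Facts as (subject, relation, object) triples, as a shallow
# dependency parse might emit them.
FACTS = [
    ("watson", "instance_of", "qa_system"),
    ("watson", "built_by", "ibm"),
    ("watson", "played", "jeopardy"),
]

# A "frame" here is a concept with the slots (relations) it licenses.
FRAMES = {
    "qa_system": {"built_by", "played"},
}

def query(subject, relation):
    """Answer a slot query only if the subject's frame licenses the relation."""
    frame = next((o for s, r, o in FACTS
                  if s == subject and r == "instance_of"), None)
    if frame is None or relation not in FRAMES.get(frame, set()):
        return None  # the ontology doesn't sanction this question
    return next((o for s, r, o in FACTS
                 if s == subject and r == relation), None)

print(query("watson", "built_by"))  # -> ibm
print(query("watson", "weighs"))    # -> None: slot not in the frame
```

The point of the frame check is the reliability the parent comment alludes to: a question the ontology doesn't license is rejected outright rather than answered by loose association.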

______________

[1] https://en.wikipedia.org/wiki/Frame_(artificial_intelligence...

Please tell me that's approximately what Palantir Ontology is, because if it isn't, I've no idea what it could be.