Comment by patching-trowel
1 year ago
My gut says no, because of the way language relates to meaning. In language, a “chair” is a chair is a chair. But in meaning, a chair is not-a-stool, and not-a-couch, and not-a-bench, etc. We understand the object largely by what it is similar to but is not.
In order for the LLM to meaningfully model what is coherent, empathetic, and free from bias, it must also model the close-to-but-NOT-that.
That’s a compelling point.
If you’ll indulge me, I’m going to think out loud a little.
What makes sense to me about this point:
- Having zero knowledge of “non-good” could lead to fragility when people phrase questions in “non-good” ways
- If an LLM is truly an “I do what I learned” machine, then “good” input + a “good” question would yield “good” output
- There may be a significant need for an LLM to learn that “chair is not-a-stool”, a.k.a. “fact is not-a-fiction”. An LLM that only gets affirming meanings might be wildly confused. If true, I think that would be an interesting area to research, not just for AI but for cognition. … now I wonder how many of the existing params are “not”s (rough sketch of what I mean below this list).
- There’s also the question of scale. Does an LLM need to “know” about mass extinction in order to understand empathy? Or can it just know about the emotions people experience during hard times? Children seem to do fine at empathy (maybe even better than adults in some ways) despite never being exposed to planet-sized tragedies. Adults need to deal with bigger issues where it can be important to have those tragedies front of mind, but does an LLM need to?
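To make that “not”s wondering a bit more concrete, here’s a toy numpy sketch. It is not any real model’s training code, and the vocabulary and numbers are made up; it just shows the shape of the next-token cross-entropy loss that LLMs are trained on. The point: the loss for the correct token also depends on the score of every incorrect token, so some “not-that” pressure is baked into every training step.

```python
# Toy sketch of next-token cross-entropy (made-up vocab and scores).
import numpy as np

def next_token_loss(logits: np.ndarray, target: int) -> float:
    # log-softmax: the logsumexp term sums over ALL tokens, so raising
    # the score of any wrong token raises the loss on the right one.
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return float(-log_probs[target])

# Hypothetical 5-token vocabulary: ["chair", "stool", "couch", "bench", "dog"]
logits = np.array([2.0, 1.5, 1.2, 1.0, -1.0])
print(next_token_loss(logits, target=0))  # loss for predicting "chair"
```

So even without explicitly labeled “non-good” examples, the gradient is constantly pushing down the near-misses; whether that’s enough to count as modeling the close-to-but-NOT-that is, I think, exactly the open question here.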