Comment by joshspankit

1 year ago

That’s a compelling point.

If you’ll indulge me, I’m going to think out loud a little.

What makes sense to me about this point:

- Having zero knowledge of “non-good” could lead to fragility when people phrase questions in “non-good” ways

- If an LLM is truly an “I do what I learned” machine, then “good” input + “good” question should produce “good” output

- There may be a significant need for an LLM to learn the “chair is not-a-stool” distinction, aka “fact is not-a-fiction”. An LLM that only gets affirming meanings might be wildly confused. If true, I think that would be an interesting area to research, not just for AI but for cognition. … now I wonder how many of the existing params are “not”s.

- There’s also the question of scale. Does an LLM need to “know” about mass extinction in order to understand empathy? Or can it just know about the emotions people experience during hard times? Children seem to do fine at empathy (maybe even better than adults in some ways) despite never being exposed to planet-sized tragedies. Adults need to deal with bigger issues where it can be important to have those tragedies front of mind, but does an LLM need to?