Comment by kenmorechalfant

2 years ago

Going back to the "sock of independence" example (see /u/airstrike's comment for more context): ChatGPT's answer is inaccurate, but it's a funny question and it gave a funny answer, so was it really a poor answer? My interpretation of 'blur' as an analogy is this: the model did not simply answer ACCURATELY in the STYLE of the Declaration of Independence; it merged, or "blurred/smudged together", the CONTENT and the STYLE of the story and the Declaration. It's not good at understanding the question or the context... and therefore a lot of its answers feel "blurry".

"Wonder why"? Because, human thoughts, opinions and language are inherently blurry, right? That's my view. Plus, humans have a whole nervous system which has a lot of self-correcting systems (e.g. hormones) that ML AI doesn't yet account for if its goal is human-level intelligence.