Comment by msla
2 years ago
> “‘blur’ tool for paragraphs” is such a good way of describing the most prominent and remarkable skill of ChatGPT.
In what way? How, technically, is it anything like that?
These comments read like a full-court press of publicity for this article. I wonder why.
Going back to the "sock of independence" example (see /u/airstrike's comment for more context): ChatGPT's answer was inaccurate - but it was a funny question, and it got a funny answer. So was it really a poor answer? My reading of their 'blur' analogy is this: ChatGPT did not simply answer ACCURATELY in the STYLE of the DoC; it merged, or "blurred/smudged together", the CONTENT and STYLE of the story and the DoC. It isn't good at understanding the question or the context... and so a lot of its answers feel "blurry".
"Wonder why"? Because, human thoughts, opinions and language are inherently blurry, right? That's my view. Plus, humans have a whole nervous system which has a lot of self-correcting systems (e.g. hormones) that ML AI doesn't yet account for if its goal is human-level intelligence.
> How, technically, is it anything like that?
Huh? It isn't. It's a good description because it's figuratively accurate to what reading LLM text feels like, not because it's technically accurate to what the model is actually doing.