Comment by Barrin92

2 years ago

>you'd never accuse a JPEG engine or an MP3 decoder of anything remotely like it.

for psychological reasons. Natural language processing makes people prone to anthropomorphize; it's why people treat Alexa in human-like ways, or even ELIZA back in the day. You're making the same mistake in your description. You're not teaching ChatGPT anything; you're only ever querying a trained static model, which remains in the same state. It's not "scatterbrained": that's a human quality, and applying it here is incorrect. Ted Chiang points to this mistake in the article: mistaking lossiness in an AI model for the kind of error a human would make.

A photocopier making bad copies is just a flawed machine, but because you don't treat ChatGPT like a machine, you read its performing worse as a sign that it's smarter. Ironically, if it reproduced your language with 100% fidelity, you'd likely be more skeptical, even if that were due to real underlying intelligence.

If you consider it a mistake to use the word 'teaching' for explaining a new topic in natural language, asking my counterparty to solve problems, pointing out errors and wrong assumptions in some of its responses, and getting corrected answers back with the new information incorporated into subsequent answers -- then this is just not a conversation worth having. Yes, of course I know it's freshly reset in new conversations. And of course I know that its mechanisms and its spectrum of strengths and weaknesses are not human-like.

When you tell me what I allegedly think and under what conditions I'd be "more skeptical", it's kind of irritating. (Maybe I deserve it for starting this thread with a combative tone. By the time I came back meaning to edit that first comment, there was already a reply.)