Comment by abecedarius
2 years ago
An essay making reasonable points, but overall it strikes me as something like a circa-1980 dismissal of personal computers as toys.
My first day with ChatGPT I tried teaching it my hobby dialect of Lisp (unlikely to be in its training set) and then asking it to implement symbolic differentiation. Its attempt was very scatterbrained, but not completely hopeless. If you don't think that required any thinking from it, I don't want to argue -- unless you're in some position of influence that'd make such an ostrich attitude matter.
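(For anyone who hasn't seen the exercise: the textbook version of symbolic differentiation, in plain Scheme rather than the hobby dialect in question, is a short recursive walk over the expression tree. This is just a sketch of the kind of thing ChatGPT was asked to produce, not a transcript of its attempt.)

    (define (deriv expr var)
      ;; derivative of expr with respect to var, handling atoms, + and * only
      (cond ((number? expr) 0)
            ((symbol? expr) (if (eq? expr var) 1 0))
            ((eq? (car expr) '+)
             (list '+ (deriv (cadr expr) var) (deriv (caddr expr) var)))
            ((eq? (car expr) '*)
             ;; product rule: (u v)' = u v' + u' v
             (list '+
                   (list '* (cadr expr) (deriv (caddr expr) var))
                   (list '* (deriv (cadr expr) var) (caddr expr))))
            (else (error "deriv: unhandled form" expr))))

    ;; (deriv '(* x x) 'x)  =>  (+ (* x 1) (* 1 x))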
I hope I’m not misunderstanding you, but I could be. Are you saying that because the LLM was able to impress you, it must be thinking? (Whatever that means.)
Whatever you want to call the problem-solving and persona simulation it can do (in this first commercial generation), you'd never accuse a JPEG engine or an MP3 decoder of anything remotely like it. The lossy-compression framing is just a really backward-looking conceptualization, underemphasizing everything interesting.
You can think of science itself as lossy compression.
>you'd never accuse a JPEG engine or an MP3 decoder of anything remotely like it.
For psychological reasons. Natural language processing makes people prone to anthropomorphize; it's why people treat Alexa in human-like ways, or treated ELIZA that way back in the day. You're making the same mistake in your description. You're not teaching ChatGPT anything; you're only ever querying a trained, static model, which stays in the same state. It's not "scatterbrained": that's a human quality, and it's the wrong word here. Ted Chiang points to exactly this mistake in the article: mistaking lossiness in an AI model for the kind of error a human would make.
A photocopier making bad copies is just a flawed machine, but because you don't treat ChatGPT like a machine, you read its performing worse as a sign that it's smarter. Ironically, if it reproduced your language 100% faithfully, you'd likely be more sceptical, even if that were due to real underlying intelligence.