
Comment by willvarfar

17 hours ago

I think Hinton uses terms like reasoning, creativity, and consciousness in ways that differ from my own embeddings.

I recently had fun asking Gemini to compare how Wittgenstein and Chomsky would view calling a large transformer that was trained entirely on a synthetic 'language' (in my case symbols that encode user behaviour in an app) a 'language' or not. And then, for the killer blow, whether an LLM that is trained on Perl is a language model.

My point being that while Hinton is great and all, it's possible for people to have opposite meanings for the same words (Wittgenstein famously held two contradictory approaches within his own lifetime). In Hinton's case, I can't quite pin down how loosely or precisely he is using terms like reasoning.

A forward-only transformer like GPT can only do symbolic arithmetic to a depth bounded by its number of layers in a single forward pass, for example. And I don't think the solution is to add more layers.
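A toy way to picture that depth limit (a minimal Python sketch, not an actual transformer — `fixed_depth_eval` and the doubling ops are made up for illustration): each "layer" gets one sequential step, so a problem needing more sequential steps than there are layers can't be finished in one forward pass.

```python
def fixed_depth_eval(x, ops, depth):
    """Apply at most `depth` sequential operations from `ops` to x.
    Steps beyond the depth budget are simply dropped, analogous to a
    forward-only network running out of layers mid-computation."""
    for op in ops[:depth]:
        x = op(x)
    return x

# Computing 2**5 by repeated doubling needs 5 sequential steps.
ops = [lambda v: v * 2] * 5

print(fixed_depth_eval(1, ops, depth=5))  # 32: enough "layers"
print(fixed_depth_eval(1, ops, depth=3))  # 8: truncated, wrong answer
```

Adding layers only moves the cutoff; any fixed depth is exceeded by a long enough chain of dependent steps.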

Of course humans are entirely neural and we somehow manage to 'reason'. So YMMV.