
Comment by robbomacrae

2 years ago

These things might be able to produce comparable output, but that wasn't my point. I agree that if we compare ourselves on the text that gets written, then LLMs can achieve superintelligence. And writing text can indeed be reduced to token prediction.

My point was that we are not just glorified token-predicting machines. There is a lot going on behind what we write, and behind whether we write it at all. Does the method matter, versus just the output? I think (and hope) it does on some level.