Comment by agentcoops

4 days ago

I agree with your fundamental point. However, I don't think a steady erosion of mastery is the only way these next years can go, even if it looks the most likely at present. Supposing LLMs, or whatever future architecture, surpass even the greatest human minds in intelligence, why is that situation fundamentally different from living in a world with Einstein, i.e. alongside a level of mastery I'll never reach before the end of my life? As one interested in the depths, I prefer to live in a world with peaks ever greater than myself---it doesn't prevent me from going as deep as I can, inspired by where they've reached, and doing the things that matter to me. [0]

Turing's view, in fact, is similar: "There would be great opposition [to AI] from the intellectuals [read programmers in the context of this thread] who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do, i.e. in trying to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits."

[0] Thomas Bernhard's The Loser is a fantastic account of the opposite standpoint---that of the second-best piano student, who cannot stand existing in a world with Glenn Gould.