Comment by simianwords

9 hours ago

You are assuming that the set of specialists is a fixed system! That's not the case. As technology changes, you get more and more specialists, the same way the Agricultural Revolution allowed more specialists to exist.

This comment sounds like hand-waving to me.

The author describes specifically how specialists are produced and how AI undermines their production.

No, we won't get more and more specialists literally "the same way" as the agricultural revolution. You need to be much more specific about how we'll get more specialists under the incentive structure created by AI, otherwise this sounds like some kind of religious faith in AI and progress.

  • I can't tell what specialists we will get, the same way you wouldn't have been able to tell me in 1945 that we would have Linux kernel specialists.

    People do more things with AI.

    More things = more inventions = the field growing.

    The field grows, and people become specialists in what used to be small or trivial.

    A mathematician in the 1500s wouldn't have thought algebraic topology would become a specialisation.

    • > I can't tell what specialists we will get, the same way you wouldn't have been able to tell me in 1945 that we would have Linux kernel specialists.

      How about addressing astrophysics specifically? What are you claiming about it? Are you claiming that in the future we won't need astrophysicists at all, that AI can do all of our astrophysics for us, freeing humans to specialize in... other subjects?

      And doesn't the same problem exist for Linux kernel specialists? Why even become a Linux kernel specialist when AI can write your source code for you?

      > people become specialists

      This is precisely what is in question.

      > A mathematician in the 1500s wouldn't have thought algebraic topology would become a specialisation.

      The specific subjects have changed over time, but the production of specialist mathematicians hasn't really changed. It takes hard work, grunt work, struggling, making mistakes and learning from them, as well as expert supervision. The problem with AI is that it incentivizes intellectual laziness, the opposite of what is required to produce specialists.

      A related problem: LLMs have been trained on papers written and supervised by Alice-type specialists. There's a common claim that LLMs will hallucinate less in the future, but I think they will hallucinate more, once specialty fields become dominated by Bob-type "specialists" who have a harder time distinguishing fact from fiction. When LLMs have to train on material produced by earlier versions of LLMs, the quality trend will go down, not up.