
Comment by Denvercoder9

6 hours ago

There's no contradiction; the point is that Bob is able to produce valid output using LLMs, but only while he himself is being supervised, and that he doesn't develop the skills to supervise himself independently in the future.

> only while he himself is being supervised

No, this is impossible unless Bob simply presents the raw LLM output at each weekly meeting and feeds the tutor's feedback straight back into it. That would amount to about ten minutes of work per week, and the tutor would notice straight away, if only from the lack of progress.

No, the article specifies that Bob actually works with the LLM rather than just delegating to it. He asks the agent to summarise, to explain, and to help with bug fixing. You could easily argue that Bob, having such an AI tutor available 24/7, can develop understanding much faster. He certainly won't waste his time on small details of Python syntax (and working with a "coding expert" will make his code much cleaner and more advanced).

  • This is the rub: Bob would not be promoted if he consistently provided unreliable LLM output. To get promoted, Bob needs to learn the skills that get reliable output out of an LLM. These may not be the same skills that Alice learns, but if the argument is that Schwartz's LLM output is valuable -- why are we to assume Bob's path isn't towards Schwartz?