Comment by satisfice

16 hours ago

This paper presents an elaborate straw-man argument. It does not faithfully represent the legitimate concerns of reasonable people about the persistent and irresponsible application of AI in knowledge work.

Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.

The users of this technology then present the work as if it were their own, which misrepresents their skills and judgement, making it more difficult for other people to evaluate the risks and benefits of working with them.

It is not the mere otherness of AI that results in anger about it being foisted upon us, but the unavoidable disruption to our systems of accountability and our ability to assess risk.

> Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.

As a matter of scope, I could understand keeping the social understanding that "AI makes errors" separate from technical evaluations of models, but the thing that really horrified me is that the author apparently does not think past experience should count as grounds for skepticism in other fields:

> AI both frustrates the producer/consumer dichotomy and intermediates access to information processing, thus reducing professional power. In response, through shaming, professionals direct their ire at those they see as pretenders. Doctors have always derided home remedies, scientists have derided lay theories, sacerdotal colleges have derided folk mythologies and cosmogonies as heresy – the ability of individuals to “produce” their own healing, their own knowledge, their own salvation. [...]

If you won't allow that scientists' frequent exposure to crank "lay theories" is a reason for initial skepticism, can you really explain this as anything other than anti-intellectualism?