I don't think we, as a wider scientific/technical society, should care for the opinion of a person who uses "epistocratic privilege" as a serious term. This stinks to high heaven of proving a conclusion by working backwards from it.
The cognitive dissonance of implying that expecting knowledge from a knowledge worker, or from knowledge-centered discourse, is a form of boundary work or discrimination is extremely destructive to any and all productive work, once you consider how most sciences and technological systems depend on a very fragile notion of knowledge preservation and incremental improvement: a system that is intentionally pedantic in order to provide stable ground for progress. In a lot of fields, replacing this structure with AI is still very much impossible, but explaining why for each example an LLM blurts out is tedious work. I need to sit down and solve a problem the right way, and in the meantime ChatGPT can generate about 20 false solutions.
If you read the paper, the author even uses terms related to discrimination by immutable characteristics, invokes xenophobia, and quotes a black student who calls the discouragement of AI as a cheating aid racist.
This seems to me utter insanity, and it should not only be ignored but actively pushed back against as anti-intellectualism.
Being a pilot is an epistocratic privilege, and pilots should welcome the input of the less advantaged.
Sarkar argues that “AI shaming arises from a class anxiety induced in middle class knowledge workers, and is a form of boundary work to maintain class solidarity and limit mobility into knowledge work.”
I think there is at least some truth to this.
Another possible cause of AI shaming is that reading AI writing feels like a waste of time. If the author didn’t bother to write it, why should I bother to read it and provide feedback?
This latter piece is something I am struggling with.
I have spent 10+ years working on teams primarily composed of people whose first language is not English, in workplaces where English is the mandated business language. Ever since the earliest LLMs started appearing, the written communication of non-native speakers has become a lot clearer from a grammatical point of view, but also a lot more florid and pretentious than they actually intend. This is really annoying to read, because you need to mentally decode the LLM-ness of their comments/messages/etc. back into normal English, which ends up costing more cognitive overhead than their blunter and/or broken English used to. But, from their perspective, I also understand that it saves them cognitive effort to just type a vague notion into an LLM and ask for a block of well-formed English.
So, in some way, this fantastic tool for translation is resulting in worse communication than we had before. It's strange and I'm not sure how to solve it. I suppose we could use an LLM on the receiving end to decode the rambling mess the LLM on the sending end produced? This future sucks.
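For fun, a minimal sketch of that round trip in Python; the llm() stub is hypothetical and mocked so the sketch runs, standing in for whatever chat API the two ends would actually call:

    # Hypothetical llm() stub; a real version would call a chat model.
    def llm(prompt: str) -> str:
        return f"[model output for: {prompt!r}]"  # mocked for illustration

    def send(terse_note: str) -> str:
        # Sender inflates a vague notion into "well-formed" business English.
        return llm(f"Rewrite as polished business English: {terse_note}")

    def receive(florid_message: str) -> str:
        # Receiver deflates the florid padding back into plain English.
        return llm(f"Summarize in plain, blunt English: {florid_message}")

    original = "deploy friday risky, suggest monday"
    round_tripped = receive(send(original))  # hopefully close to the original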
That's to be expected: you add a layer between the two brains that are communicating, and that layer only adds statistical filler to the message.
I write letters to my gf in English, though English is not our first language. I would never ever put an LLM between us: it would fall flat, erase who we are, make a mess of our cultural references; it would just not be interesting to read, even if maybe we could make it sound more native, in the style of Barack Obama or Prince Charles...
LLMs are going to make people as dumb as GPS made them. Except that where reading a map was never a very useful skill, writing what you feel... should be.
I thought about this too. I think the solution is to send both the prompt and the output, since the output was itself selected by the human from potentially multiple variants.
Prompt: I want to tell you X
AI: Dear sir, as per our previous discussion let's delve into the item at hand...
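A minimal sketch of that convention as a message envelope (the type and field names here are invented for illustration):

    # Hypothetical envelope carrying both the human's prompt and the
    # LLM variant the human selected, so readers can choose which to trust.
    from dataclasses import dataclass

    @dataclass
    class Message:
        prompt: str  # what the human actually meant
        output: str  # the LLM variant the human picked

    msg = Message(
        prompt="I want to tell you X",
        output="Dear sir, as per our previous discussion let's delve into "
               "the item at hand...",
    )
    # A reader who distrusts the florid version can read msg.prompt instead.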
If knowledge work doesn't require knowledge, then is it knowledge work?
The main issue symptomatic of current AI is that without knowledge (at least at some level) you can't validate the AI's output.
> why should I bother to read it and provide feedback?
I like to discuss a topic with an LLM and generate an article at the end. It is more structured and better worded, but still reflects my own ideas. I only post these articles on a private blog; I don't pass them off as my own writing. But I find this exercise useful because I use LLMs as a brainstorming and idea-debugging space.
> If the author didn’t bother to write it, why should I bother to read it
There is an argument that luxury goods are valuable because they are typically handmade; in a sense, what you are buying is not the item itself, but the untold hours "wasted" creating that item for your own exclusive use. In a sense you are "renting a slave": you have control over another human's time, and that is a power trip.
You have expressed it perfectly: "I don't care about the writing itself, I care about how much effort a human put into it"
If effort wasn't put into it, then the writing cannot be good except by accident or theft, and if it's theft, it is not your writing.
If you want to court me, don’t ask Cyrano de Bergerac to write poetry and pass it off as your own.
This reads like yet another attempt to pathologize perfectly reasonable criticism as some form of oppression. Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody. People say that when writing lacks originality or depth — not to reinforce some imagined academic caste system. The idea that pointing out bland prose is equivalent to sumptuary laws or racial gatekeeping is intellectual overreach at its finest. Ironically, this entire paper feels like something an AI could have written: full of jargon, light on substance. And no, there’s no original research, just theory stacked on theory.
> Calling “AI could have written this” a classist slur is a stretch so extreme it borders on parody.
In AI discussions, Poe's law is rampant: you can never tell what is parody and what is not.
There was a (former) xAI employee who got fired for advocating the extinction of humanity.
Reading this makes me understand why there is a political movement to defund universities.
The real shame of it is that OP claims affiliation with two respectable universities (UCL and Cambridge) and one formerly credible venue (CHI).
Mock scholarship is on the rampage. I agree: this stuff does make me understand the yahoos with a defunding urge too - not something I ever expected to feel any sympathy for, but here we are.
It makes me sick to my heart to think that money is stolen from my pocket to be given to lunatics of this kind.
Overall, this comes across as extremely patronising: to authors, by running defence for obviously sub-par work because their background supposedly makes it "impossible" for them to do good work; and to commenters, by assuming ill intent towards the less privileged that needs to be controlled.
And it's all wrapped in a lovely package of AI apologetics - wonderful.
So, honestly, no. The identity of the author doesn't matter; if it reads like AI slop, the author should be grateful I even left an "AI could have written this" comment.
Synthetic beings will look back at this with great curiosity.
I'd like to brag that I got in trouble for saying this to somebody in 2021, before ChatGPT existed.
I put a chapter of a paper I wrote in 2016 into GPTZero and got the probability breakdown 90% AI, 10% human. I am 100% human, and I wrote it myself, so I guess I'm lucky that I didn't hand it in this year, or I could have gotten accused of cheating?
That's more an indictment of the accuracy of such tools. Writing in a very 'standard' style like that found in papers is going to match the LLM's predictions well, regardless of origin.
Maybe GPTZero had your paper in its training data (it being from 2016)?
I wasn't being serious when I said it; I was using it as an insult for bad work.
"We have to use AI to achieve class solidarity" is insane to me.
People realize that the bosses all love AI because they envision a future where they don't need to pay the rabble like us, right? People remember leaders in the Trump administration going on TV and saying that we should have fewer laptop jobs, right?
That professor telling you not to use ChatGPT to cheat on your essay is likely not a member of the PMC but is probably an adjunct getting paid near-poverty wages.
This paper presents an elaborate straw-man argument. It does not faithfully represent the legitimate concerns of reasonable people about the persistent and irresponsible application of AI in knowledge work.
Generative AI produces work that is voluminous and difficult to check. It presents such challenges to people who apply it that they, in practice, do not adequately validate the output.
The users of this technology then present the work as if it were their own, which misrepresents their skills and judgement, making it more difficult for other people to evaluate the risk and benefits of working with them.
It is not the mere otherness of AI that results in anger about it being foisted upon us, but the unavoidable disruption to our systems of accountability and ability to assess risk.
Additionally, their use of the term "slur" for what is frequently a valid criticism seems questionable.
It is itself a form of bullying.
While it would have been a better paper if the author had collaborated with a sociologist, it would also have been less likely to be taken seriously by the HN crowd, for the same class anxieties its title is founded on.
Excuse us for expecting evidence and intellectual rigour. :D
I've taken a number of university Sociology courses, and from those experiences I came to the opinion that Sociology's current iteration is really just grievance airing in an academic tone. It doesn't really justify its own existence outside of being a Buzzfeed for academics.
I'm not even talking about slightly more rigorous subjects such as Psychology or Political Science, which modern Sociology uses as a shield for its lack of a feedback mechanism.
Don't get me wrong though: I realise this is an opinion formed from my admittedly limited exposure to Sociology (~3 semesters). It could also have been that the university I went to particularly leaned on "grievance airing".
The state of this headline.
Honestly, AI could have written this.
That TL;DR table at the top looks a lot like what Perplexity provides at the bottom...
The obvious response is, "Oh, it will."
Gosh I wonder why there's a cultural backlash against the "intellectual" elite.
Would love to read, but it seems heavily paywalled, so can't.
The author seems to be hosting the full PDF on their website https://advait.org/files/sarkar_2025_ai_shaming.pdf
Thanks, we updated the URL!
> In this reading, the increasingly common refrain “AI could have written this” is not so much a pithy taunt, but rather a classist slur, indicative of wounded and anxious privilege. Moreover, it is complicit in the systematic exclusion of underprivileged groups from entering the class of knowledge professionals.
This is the insipid blathering of a woke cretin.
It is in fact widespread reliance on AI that will hinder groups of people from acquiring the skills to be in that class.
The idea that some internet randos commenting "AI could have written that" have gatekeeping power, preventing people from becoming knowledge workers, is preposterous.
The one way in which it is plausible is that the work to which the remark is applied is not in fact written by AI, but its author becomes convinced by the remark that such work could be written by AI, and adopts AI as a result. Their newfound habit will subsequently rot their brain, sending them plummeting off the ladder toward the knowledge class. I jest, but only half so.
Actually, let's examine this "systematic exclusion" claim.
Firstly, the basic premise of the article is that "this could have been written by AI" is disparagement. But disparagement is nothing new. If disparagement is intended, all one needs is "this was written by an idiot"; I think that "this could have been written by AI" is much softer. In fact, a possible interpretation of it is that the speaker believes in the use of AI, and that it could have been used to save time in producing something of the same quality. Anyway, we've had disparagement in online forums going back to dial-up BBSes; it's just a new variant on plain old flaming.
If the remark is disparagement, does it add up to "systematic exclusion of underprivileged groups"?
In forums and social media, people mostly don't care who you are and respond to the content. If it looks like AI slop, they don't care whether the person behind the pseudonym is a Stanford professor or a German Shepherd; they just turn on their flamethrower.
Let's say that systematic exclusion is happening in spite of commenters not actually targeting disadvantaged groups, but only responding to the content. What that systematic exclusion hypothesis then entails is that posts from underprivileged groups are actually garbage, and therefore attract more disparagement!
So in fact it is the author of this paper who holds a cynical, discriminatory view of underprivileged groups (whoever he imagines them to be, exactly). Underprivileged groups are morons who write garbage that could be written by AI (and thus precisely receive comments to that effect); and, moreover, are so weakly constituted that these discouraging comments prevent them from entering a knowledge professional class (in addition to the main factor, that being their lack of ability).
Someone who is not a member of an underprivileged group either does not write posts that are reminiscent of AI drivel, and so doesn't attract those comments, or, even if he or she does, the negative comments slide right off due to their thicker skin.
Oh really? Some of the thinnest skins in the world come from privilege: for instance, think of the middle-aged man-child who buys an entire social network for billions in order to be able to suppress critical comments about himself.
Your argument would've been much better without injecting ca. 2025 US culture war jargon into it.
Sorry, what jargon is that? I may be able to fix it with your help. I'm not in the USA and don't follow US politics or culture enough to be up to 2025 in jargon.
Prudently discussing everything in a cultural vacuum carries the implication that the topic is irrelevant to the cultural climate, which could hardly be further from the truth in this case.
This is just like the way some people decided that "Blue Check" should be an insult on Twitter. Occasionally people still say it, but almost everyone ignores it. Fads like this are common on the Internet. It's just like any other clique: a few people accidentally taste-make, and a bunch of replicators simply repeat mindless things over and over again: "slop", "mask-off moment", "enshittification", "ghoulish". Just words that people repeat because other people say them and get likes/upvotes/retweets or whatever.
The "Blue Check" insult regime didn't get anywhere and I doubt any anti-LLM/anti-diffusion-model stuff will last. "Stop trying to make fetch happen". The tools are just too useful.
People on the Internet are just weird. Some time in the early 2010s the big deal was "fedoras": oh, you're weird if you have a fedora; man, losers keep thinking fedoras are cool. I recall hanging out with a bunch of friends once, and we walked by a hat shop, and the girls were all like, "Man, you guys should all wear these hats." The girls didn't have a clue: these were fedoras. Didn't they know it would mark us out as weird losers? They didn't, and it turned out it doesn't. In real life.
It only does on the Internet. Because the Internet is a collection of subcultures with some unique cultural overtones.
Sounds like something a blue checker would say. And yes, if you pay for Twitter you're going to get clowned on.
And what the hell is that segue into fedoras? The entire meme exists because stereotypically clueless individuals took fedoras to be the pinnacle of fashion, while disregarding nearly everything else about not only their outfit, but their bodies.
This entire comment reeks of not actually understanding anything.
Found that user who memorized KnowYourMeme and thinks they're a scholar of culture now.
Or a cheap LLM acting as them, and wired up to KnowYourMeme via MCP? Can't tell these days. Hell, we're one weekend side project away from seeing "on the Internet, no one knows you are a dog" become true in a literal sense.
s/
Blue checks are orthogonal: they're more of a rough approximation of "I bought a Cybertruck when Musk went full crazy" (and yes, it's a bad look). Judging some blog post for seeming like AI is different.