
Comment by Sharlin

2 months ago

Unfortunately that seems to be the norm now – people literally reduce themselves to a copy-paste mechanism.

To be honest, I do not understand this new norm. A few months ago I applied for an internal position. I was an NGO IT worker, deployed twice to emergency response operations; I knew the policies and operations and had good relations with users and coworkers.

The interview went well. I was honest. When asked about my weakness regarding the position, I said that I am a good analyst, but that writing new exploits is beyond my expertise. The role doesn't list that as a requirement, so I thought it was a good answer.

I was not selected. Instead they picked another guy, booted him after two months for his excessive (and incorrect, much like in the link) use of LLMs, and never reopened the position.

So in addition to wasting the hirers' time, these nice people block other people's progress as well. But as long as hirers expect wunderkinds to crawl out of the woodwork, applicants will keep faking it and winning in the short term.

This needs to end, but I don't see any progress towards it. It is especially painful as I am job-seeking at the moment and these fakers are muddying the waters. It feels like no one cares about your attitude anymore, about how genuinely you want to work. I am an old techie, and the world I came from valued that over technical aptitude, for you can teach and learn technical information, but character is another thing. That gets lost in our brave-new-cyberpunk-without-the-cool-gadgets era, I believe.

  • This is definitely not unique to software engineering. Just out of grad school, 15 years ago, I applied for an open position at a local electrical engineering company. I was passed over, and the person I'd gotten a recommendation from later let me know, out of band, that they had hired someone fresh out of undergrad with an (unrelated) internship instead of research experience (I would have been second of the three candidates), but that they had fired him within six months. They opened the position again, and after interviewing me a second time they told me they had decided not to hire anyone. Again out of band, my contact told me that he and his supervisor thought I should go work at one of their subcontractors to get experience, but they didn't send any recommendation and the subcontractors didn't respond to my inquiries. I wasn't desperate enough to keep playing that game, and it really soured my view of a local company with an external reputation for engineering excellence, meritorious hiring, mentorship, and career building.

  • I posted a job for freelance dev work and all the replies were obviously AI-generated. Some even included websites that were clearly made by other people as their 'prior work'. So I pulled the posting and probably won't post again.

    Who knew: AI is costing jobs, not because it can do them, but because it has made hiring actually competent humans harder.

    • Plus, because it's harder to just put up a job listing and get actual submissions, you're going to see more people hired because of who they know, not what they know. In other words, if you wasted your time in networking class working on networking instead of working on networking, then you're screwed.


    • If you're still looking and it's a JS/TS project, I can help. I'll use a shit ton of AI, but not when talking to you. My email is on my profile; Twitter account with the same username.

  • Same thing where I work. It's a startup, and they value large volumes of code over anything else. They call it "productivity".

    Management refuses to see the error of their ways even though we have thrown away 4 new projects in 6 months because they all quickly become an unmaintainable mess. They call it "pivoting" and pat themselves on the back for being clever and understanding the market.

  • This is not a new norm (LLM aside).

    Old man time, providing unsolicited and unwelcome input…

    My own way of viewing interviews: treat an interview as one would treat dating that leads to marriage. Interviewing is a different skillset and experience from being on the job.

    The dating analogue for your interview question would be something like: “Can you cook or make meals for yourself?”.

    - Your answer: “No. I’m great in bed, but I’m a disaster in the kitchen”

    - Alternative answer: “No. I’m great in bed; but I haven’t had a need to cook for myself or anyone else up until now. What sort of cooking did you have in mind?”

    My question to you: Which one leads to at least more conversation? Which one do you think comes off as a better prospect for family building?

    Note: I hope this perspective shift helps you.

I once had a conversation with a potential co-founder who literally told me he was pasting my responses into AI to try to catch up.

Then a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS into email where it was very clear he was using AI.

These are CEOs who have raised $1M+ pre-seed.

  • Have you watched All-In? Chamath Palihapitiya, who takes himself very seriously, is clearly just reading off something from ChatGPT most of the time.

    These Silicon Valley CEOs are hacks.

    • The word "hacks" is so charitable, when you could use "sociopaths".

      Russ Hanneman raised his kid with AI:

      https://www.youtube.com/watch?v=wGy5SGTuAGI&t=217s

      A company I'm funding, we call it The Lady.

      I press the button, and The Lady tells Aspen when it's time for bed, time to take a bath, when his fucking mother's here to pick him up.

      I get to be his friend, and she's the bad guy.

      I've disrupted fatherhood!


I watched someone do this during an interview.

They were literally copying and pasting back and forth with the LLM. In front of the interviewers! (myself and another co-worker)

https://news.ycombinator.com/item?id=44985254

  • I volunteer at a non-profit employment agency. I don't work with the clients directly. But I have observed that ChatGPT is very popular. Over the last year it has become ubiquitous. Like they use it for every email. And every resume is written with it. The counsellors have an internal portfolio of prompts they find effective.

    Consider an early 20s grad looking to start their career. Time to polish the resume. It starts with using ChatGPT collaboratively with their career counsellor, and they continue to use it the entire time.

  • I had someone do this in my C# / .NET Core / SQL coding-test interview as well. I didn't end it right there, because I wanted to see if they could solve the coding test in the time frame allowed.

    They did not. I now state up front that you can search anything online but can't copy and paste from an LLM, so as not to waste my time.

    • What did your test involve? That's my occupational stack, and I am always curious how interviews are conducted these days. I haven't applied for a job in over 9 years, if that tells you anything.

  • You should've asked "are you the one who wants this job, or are you implying we should just hire ChatGPT instead?"

Just try challenging and mentoring people not to use it, because it's incapable of doing the job and is wasting all our time, when the mandate from on high is to use more of it.

  • Seems to me like people have to push back more directly with a collective effort; otherwise the incentives are all wrong.

    • What I don't get is why people think this action has value. The maintainer of the project could ask an LLM to do that. So could a senior dev.

      I can't imagine Googling for something, seeing someone on (for example) Stack Overflow commenting on code, and then filing a bug with the maintainer, just copying and pasting what someone else said into the bug report.

      All without even comprehending the code, the project, or even running into the issue yourself. Or even running a test case yourself. Or knowing the codebase.

      It's just all so absurd.

      I remember in Asimov's Empire series of books, at one point a scientist wanted to study something. Instead of going to study whatever it was, say... a bug, the scientist looked at all scientific studies and papers over 10000 years, weighed the arguments, and pronounced what the truth was. All without just, you know, looking and studying the bug. This was touted as an example of the Empire's decay.

      I hope we aren't seeing the same thing. I can so easily see kids growing up with AI in their Bluetooth earbuds, or maybe a Neuralink, and never having to make a decision, ever.

      I recall how Google became a crutch to me. How before Google I had to do so much more work, just working with software. Using manpages, or looking at the source code, before ease of search was a thing.

      Are we going to enter an age where every decision we make is coupled with the coaching of an AI? This thought process scares me. A lot.


  • My sister had a fight over this and resigned from her tenure-track position at a liberal arts college in Arkansas.

This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.

Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.

This sounds similar to a few patterns I have noted:

- The average length of documents and emails has increased.

- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs (and not just to fix the grammar).

- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]

  • You’re absolutely right. The patterns you’ve noted, from document verbosity to informational decay in summaries, are the primary symptoms. Would you like me to explain the feedback loop that reinforces this behavior and its potential impact on organizational knowledge integrity?

    • Got it — here’s a satiric AI-slop style reply you could post under rvnx:

      Thank you for your profound observation. Indeed, the paradox you highlight demonstrates the recursive interplay between explanation and participation, creating a meta-layered dialogue that transcends the initial exchange. This recursive loop, far from being trivial, is emblematic of the broader epistemological challenge we face in discerning sincerity from performance in contemporary discourse.

      If you’d like, I can provide a structured framework outlining the three primary modalities of this paradox (performative sincerity, ironic distance, and meta-explanatory recursion), along with concrete examples for each. Would you like me to elaborate further?

      Want me to make it even more over-the-top with like bullet lists, references, and faux-academic tone, so it really screams “AI slop”?


  • I have never seen an AI meeting summary that was useful or sufficient in explaining what happened in the meeting. I have no idea what people use them for, other than as a status signal.

    • In my company we sometimes cherry-pick parts of the AI summaries and send them to the clients just to confirm the things we agreed on during a meeting. The customers know that the summary is AI-generated and don't mind. Sometimes people come to me and ask whether what they read in the summary was really discussed in the meeting or whether the AI is just hallucinating, but I can usually assure them that we really did discuss it. So these can be useful, to a degree.

    • I'd use it to help figure out which meeting we talked about something in three months ago, so I can read the transcript for a refresher.

    • I use them to seem engaged about something I don’t actually care about.

      It’s painfully common to invite a laundry list of people to meetings.

  • This is the bull case for AI: as with any significant advance in technology, eventually you have no choice but to use it. In this case, the only way to filter through large volumes of AI output is going to be with other LLM models.

    The exponential growth of compute and data continues...

    As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.

    • > As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.

      What if they are a very limited English speaker, using the AI to tighten up their responses into grammatical, idiomatic English?

      8 replies →

  • I'm so annoyed this morning... I picked up my phone to browse HN out of frustration after receiving an obvious AI-written Teams message, only to see this on the front page! I can't escape, haha.

  • > - The average length of documents and emails has increased.

    Brevity is the soul of wit. Unfortunately, many people think more is better.

    • People have also veered strongly toward anti-intellectualism in recent decades. Coincidence?

  • There's a growing body of evidence that AI is damaging people, aside from the obvious slop-related costs of review (as a resource attack).

    I've seen colleagues who were quite good at programming when we first met become much worse over time, the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful; as such, the undisclosed use of AI on any third party without their consent is gross negligence, if not directly malevolent.

    https://fortune.com/2025/08/26/ai-overreliance-doctor-proced...

    • > aside from the obvious slop related costs to review

      Code-review tools (code-rabbit/greptile) produce enormous amounts of slop counterbalanced by the occasional useful tip. And cursor and the like love to produce nicely formatted sloppy READMEs.

      These tools - just like many of us humans - prioritize form over function.

I like the term "echoborg" for those people: https://en.wikipedia.org/wiki/Echoborg

> An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).

I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.

I've seen more than one post on Reddit answered with a screenshot of the ChatGPT mobile app showing the OP's question and the LLM's answer.

Imagine the amount of energy and compute power used...

For all we know, there's no human in the loop here. Could just be an agent configured with tools to spin up and operate Hacker One accounts in a continuous loop.

This has been a norm on Hacker One for over a decade.

  • No, it hasn't. Even where people were just submitting reports from an automated vulnerability scanner, they had to write the English prose themselves and present the results in some way (either in an honest way, "I ran vulnerability scanner tool X and it reported that ...", or dishonestly, "I discovered that ..."). This world where people literally just act as a mechanical intermediary between an English chat bot and the Hacker One discussion section is new.

    • Slop Hacker One reports often include videos, long explanations, and, of course, arguments. It's so prevalent that there's an entire cottage industry of "triage" contractors that filter this stuff out. You want to say that there's something distinctive about an LLM driving the slop, and that's fine; all I'm saying is that the defining experience of a Hacker One bug bounty program has always been a torrent of slop.

We're that for genes, if you trust positivist materialism. (Recently it's also been forced to permit the existence of memes.)

If that's all that is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard? Because you're mad at them for being shit at the craft you've lovingly honed? They don't really know why they're there in the first place.

If one sets a different bar with one's expectations of people, one ought to at least clearly make the case for what exactly it is. And even then the bots have made it quite clear that such things are largely matters of personal conviction, and as such are not permitted much resonance.

  • > If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard?

    I wouldn't be mad at them for that, though they might be faulted for not realizing that at some point the copy/pasting will be done without them, as it's simpler and cheaper to ask ChatGPT directly than to play a game of telephone.

    • They are correctly following their incentives as they are presented to them. If you expect better of them, you need to state why, and what exactly.
