Comment by dansmith1919

2 months ago

Crazy how he doubled down by just pasting badger's answer into Chat and submitting the (hilariously obvious AI) reply:

> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.

Unfortunately that seems to be the norm now – people literally reduce themselves to a copy-paste mechanism.

  • To be honest, I do not understand this new norm. A few months ago I applied to an internal position. I was an NGO IT worker, deployed twice to emergency response operations, knew the policies & operations, and had good relations with users and coworkers.

    The interview went well. I was honest. When asked what my weakness was regarding this position, I said that I am a good analyst, but writing new exploits is beyond my expertise. The role doesn't have this as a requirement, so I thought it was a good answer.

    I was not selected. Instead they selected a guy, then booted him off after 2 months due to his excessive (and incorrect, as in the link) use of LLMs, and did not open the position again.

    So in addition to wasting the hirers' time, those nice people block other people's progress as well. But as long as hirers expect wunderkinds crawling out of the woodwork, applicants will try to fake it and win in the short term.

    This needs to end, but I don't see any progress towards it. It is especially painful as I am seeking a job at the moment and suspect these fakers are muddying the waters. It feels like no one cares about your attitude - how genuinely you want to work. I am an old techie, and the world I came from valued this over technical aptitude, for you can teach/learn technical information, but character is another thing. This gets lost in our brave-new-cyberpunk-without-the-cool-gadgets era, I believe.

    • This is definitely not unique to software engineering. Just out of grad school, 15 years ago, I applied for an open position with a local electrical engineering company. I was passed over, and the person I got a recommendation from later let me know, out of band, that they had hired someone fresh out of undergrad with an (unrelated) internship instead of research experience (I would have been the second of 3 candidates), but they had fired him within 6 months. They opened the position again, and after interviewing me again they told me they had decided not to hire anyone. Again out of band, my contact told me he and his supervisor thought I should go work at one of their subcontractors to get experience, but they didn't send any recommendation and the subcontractors didn't respond to inquiries. I wasn't desperate enough to keep playing that game, and it really soured my view of a local company with an external reputation for engineering excellence, meritorious hiring, mentorship, and career building.

    • I posted a job for freelance dev work and all the replies were obviously AI-generated. Some even included websites that were clearly made by other people as their 'prior work'. So I pulled the posting and probably won't post again.

      Who knew. AI is costing jobs, not because it can do the jobs, but because it has made hiring actually competent humans harder.


    • Same thing where I work. It's a startup, and they value large volumes of code over anything else. They call it "productivity".

      Management refuses to see the error of their ways even though we have thrown away 4 new projects in 6 months because they all quickly become an unmaintainable mess. They call it "pivoting" and pat themselves on the back for being clever and understanding the market.

    • This is not a new norm (LLM aside).

      Old man time, providing unsolicited and unwelcome input…

      My own way of viewing interviews: treat interviews the way one would treat dating leading to marriage. Interviewing is a different skillset and experience than being on the job.

      The dating analogue for your interview question would be something like: “Can you cook or make meals for yourself?”

      - Your answer: “No. I’m great in bed, but I’m a disaster in the kitchen”

      - Alternative answer: “No. I’m great in bed; but I haven’t had a need to cook for myself or anyone else up until now. What sort of cooking did you have in mind?”

      My question to you: which one leads to at least more conversation? Which one do you think comes off as a better prospect for family building?

      Note: I hope this perspective shift helps you.

  • I once had a conversation with a potential co-founder who literally told me he was pasting my responses into AI to try to catch up.

    Then a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS into email where it was very clear he was using AI.

    These are CEOs who have raised $1M+ pre-seed.

    • Have you watched All-In? Chamath Palihapitiya, who takes himself very seriously, is clearly just reading off something from ChatGPT most of the time.

      These Silicon Valley CEOs are hacks.


  • I watched someone do this during an interview.

    They were literally copying and pasting back and forth with the LLM. In front of the interviewers! (myself and another co-worker)

    https://news.ycombinator.com/item?id=44985254

    • I volunteer at a non-profit employment agency. I don't work with the clients directly. But I have observed that ChatGPT is very popular. Over the last year it has become ubiquitous. Like they use it for every email. And every resume is written with it. The counsellors have an internal portfolio of prompts they find effective.

      Consider an early 20s grad looking to start their career. Time to polish the resume. It starts with using ChatGPT collaboratively with their career counsellor, and they continue to use it the entire time.

    • I had someone do this in my C# / .NET Core / SQL coding-test interview as well. I didn't end it right there, as I wanted to see if they could solve the coding test in the time frame allowed.

      They did not. I now state up front that you can search anything online but can't copy and paste from an LLM, so as not to waste my time.


    • You should've asked "are you the one who wants this job, or are you implying we should just hire ChatGPT instead?"

  • Just try to challenge and mentor people on not using it, because it's incapable of the job and wastes all our time, when the mandate from on high is to use more of it.

    • My sister had a fight over this and resigned from her tenure-track position at a liberal arts college in Arkansas.

  • This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.

    Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.

    This sounds similar to a few patterns I noted:

    - The average length of documents and emails has increased.

    - Not alarmingly so, but people have started writing Slack/Teams responses with LLMs. (and it’s not just to fix the grammar.)

    - Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]

    • You’re absolutely right. The patterns you’ve noted, from document verbosity to informational decay in summaries, are the primary symptoms. Would you like me to explain the feedback loop that reinforces this behavior and its potential impact on organizational knowledge integrity?


    • I have never seen an AI meeting summary that was useful or sufficient in explaining what happened in the meeting. I have no idea what people use them for, other than as a status signal.


    • This is the bull case for AI: as with any significant advance in technology, eventually you have no choice but to use it. In this case, the only way to filter through large volumes of AI output is going to be with other LLM models.

      The exponential growth of compute and data continues...

      As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc., I don't take anything they write seriously anymore.


    • I'm so annoyed this morning... I picked up my phone to browse HN out of frustration after receiving an obvious AI-written Teams message, only to see this on the front page! I can't escape, haha.

    • There's a growing body of evidence that AI is damaging people, aside from the obvious slop-related costs of review (as a resource attack).

      I've seen colleagues who were quite good at programming when we first met become much worse over time, the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful, and as such the undisclosed use of AI on any third party without their consent is gross negligence, if not directly malevolent.

      https://fortune.com/2025/08/26/ai-overreliance-doctor-proced...


  • I like the term "echoborg" for those people: https://en.wikipedia.org/wiki/Echoborg

    > An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).

    I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.

  • I've seen more than one post on Reddit answered with a screenshot of the ChatGPT mobile app, showing the OP's question and the LLM's answer.

    Imagine the amount of energy and compute power used...

  • For all we know, there's no human in the loop here. Could just be an agent configured with tools to spin up and operate HackerOne accounts in a continuous loop.

  • This has been a norm on HackerOne for over a decade.

    • No, it hasn't. Even where people were just submitting reports from an automated vulnerability scanner, they had to write the English prose themselves and present the results in some way (either honestly, "I ran vulnerability scanner tool X and it reported that ...", or dishonestly, "I discovered that ..."). This world where people literally just act as a mechanical intermediary between an English chat bot and the HackerOne discussion section is new.


  • We're that for genes, if you trust positivist materialism. (Recently it's also been forced to permit the existence of memes.)

    If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard? Because you're mad at them for being shit at the craft you've lovingly honed? They don't really know why they're there in the first place.

    If one sets a different bar with one's expectations of people, one ought to at least clearly make the case for what exactly it is. And even then the bots have made it quite clear that such things are largely matters of personal conviction, and as such are not permitted much resonance.

    • > If that's all which is expected of a person - to be a copypastebot for vast forces beyond one's ken - why fault that person for choosing easy over hard?

      I wouldn't be mad at them for that, though they might be faulted for not realizing that at some point, the copy/pasting will be done without them, as it's simpler and cheaper to ask ChatGPT directly rather than playing a game of telephone.


This might be some kind of asshole tech guy trying to make the claim that "this AI creates pull requests that are accepted into well-regarded OSS projects".

I.e., they're now farming the work out to OSS volunteers, not even sure if the fucking thing works, and eating up OSS maintainers' time.

I wonder if there was a human in the loop to begin with. I hope the future of CVEs is not agents opening accounts and posting 'bugs'.

  • I don't think there are humans involved. I've now seen countless PRs to some repos I maintain that claim to fix non-existent bugs, or just fix typos. One that I got recently didn't even correctly balance the parentheses in the code, ugh.

    I call this technique: "sprAI and prAI".

    • We will quickly evolve a social contract that AIs are not allowed to directly contact humans and waste their time with input that was not reviewed by other humans, and any transgression should be swiftly penalized.

      It's essentially spam: automatically generated content that is profitable in large volume because it offloads the real cost onto the victims by wasting their limited attention span.

      If you want me to read your text, you should have the common courtesy to at least put in similar work beforehand and read it yourself at least once.


    • You're absolutely right! There are no humans involved and I apologize for that! Let me try that again and involve some humans this time, as well as correctly balancing the parentheses. I understand your frustration and apologize for it, I am still learning as a model!

    • I think there are humans who watch "how to get rich with ChatGPT and HackerOne" videos (replace ChatGPT and HackerOne with whatever the affiliate YouTuber uses).

      It's MLM in tech.

  • The future of everything with a text entry box is AIs shoveling plausible-looking nonsense into it. This will result in a rise of paranoia, pre-verification hoops, Cloudflare-like agent blocking, and communities "going dark" or closing to new entrants who have not been verified in person somewhere.

    (The CVE system has been under strain for Linux: https://www.heise.de/en/news/Linux-Criticism-reasons-and-con... )

This reads as an AI-generated response as well, with the "thanks", "you're right", flawless grammar, and plenty of technical references.

Is it that crazy? He's doing exactly what the AI boosters have told him to do.

Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.

In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.

  • I have a long-running interest in NLP, and LLMs have basically solved, or almost solved, a lot of NLP problems.

    The usefulness of LLMs for me, in the end, is their ability to execute classic NLP tasks, so I can incorporate calls to them in programs to do useful stuff with natural language that would be hard to do otherwise.

    But a lot of the time, people try to make LLMs do things that they can only simulate doing, or do by analogy. And this is where things start getting hairy: when people start believing LLMs can do things they can't really do.

    Ask an LLM to extract features from a bunch of natural-language inputs, and it will probably do a pretty good job in most domains, as long as you're not doing anything exotic or novel enough to be underrepresented in the training data. It will be able to output a nice JSON object with nice values for those features, and it will be mostly correct. That is great for aggregate use, but a bit riskier if you depend on the LLM's evaluation of individual instances.

    But then people ignore this and start asking, in their prompts, for the LLM to add confidence scores to its output. Well. LLMs CAN'T TRULY EVALUATE the fitness of their output against any imaginable criteria, at least not with the kind of precision a numeric score implies. They absolutely can't do it by themselves, even if they sometimes seem able to. If you need to trust a score, you'd better have some external mechanism to validate it.
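
    To make the extraction pattern above concrete, here is a minimal sketch. It assumes the official OpenAI Python client; the model name and the feature schema are illustrative assumptions, not something from this thread:

      # Feature extraction via an LLM, as described above. Assumptions:
      # "pip install openai" and OPENAI_API_KEY set in the environment.
      import json
      from openai import OpenAI

      client = OpenAI()

      PROMPT = (
          "Extract these features from the customer message and reply "
          "with JSON only: sentiment (positive/neutral/negative), "
          "product (string or null), refund_requested (true/false)."
      )

      def extract_features(message: str) -> dict:
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # assumption: any capable chat model
              messages=[
                  {"role": "system", "content": PROMPT},
                  {"role": "user", "content": message},
              ],
              # Constrain the reply to valid JSON so it parses directly.
              response_format={"type": "json_object"},
          )
          return json.loads(resp.choices[0].message.content)

      print(extract_features(
          "The X200 stopped charging after two days. I want my money back."
      ))
      # Asking the model to also emit a "confidence" field would just
      # yield a plausible-looking number, not a calibrated probability:
      # exactly the failure mode described above.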

    • I once tasked an LLM with correcting a badly-OCR'd text, and it went beast mode on that. Like setting an animal finally free in its habitat. But that kind of work won't propel a stock valuation :(


  • So basically a hundred-billion-dollar industry for just spam and fraud. Truly amazing technological progress.

Wait so are we now saying that these AIs are failing the Turing test?

(I mean I guess it has to mean that if we are able to spot them so easily)

Makes me wonder whether the submitter even speaks English.

Quite a few people using AI are using it not only to do analysis, but to do translation for them as well; many people leaping onto this technology don't have English as a fluent language, so they can't evaluate the output of the AI for sensibility or "not sounding like AI."

(It's a noise issue, but I find it hard to blame them; it's not their fault they were born in a part of the world where you don't get autoconfig'd with English, and as a result they're on the back foot when interacting with most of the open source world.)

At some point they told ChatGPT to put emojis everywhere, which is another dead giveaway on the original report that it's AI. They're the new em dash.

Was this all actually an agent? I could see someone making the claim that a security-research LLM should always report issues immediately from an ethics standpoint (and in turn acquire more human-generated accuracy labels).

To be clear, I personally disagree with AI experiments that leverage humans/businesses without their knowledge. Regardless of the research area.

Crazy how the current $400 billion AI bubble is based on this being feasible...

  • The rationale is that the AI companies are selling the shovels to both generate this pile as well as the ones we'll need to clean it up.

    • I vividly remember the image of one guy digging a hole and another filling it with dirt as a representation of government bureaucracy and the like. Looks like office workers are gonna have the same privilege.


  • And on externalizing costs onto the actual humans who have to respond to bad vulnerability-report spam.

I felt like it was more likely to be a complete absence of a human in the loop.

Do you think it’s a person doing it? When I saw that reply I thought maybe it’s a bot doing the whole thing!

I think we are now beyond just copy-pasting. I guess we are in the era where this shit is fully automated.

Is this for internet points?

  • If it's an individual, it could be as simple as portfolio cred ('look, I found and helped fix a security flaw in this program that's on millions of devices').

Why assume someone is copy-pasting and didn't just build a bot to "report bugs everywhere"?

The '—' gave it away. No one types this character on purpose.

  • I really loved how easy macOS made these (option+hyphen for en, add shift for em), so I used to use them all the time. I'm a bit miffed by good typography now being an AI smell.

    • On macOS (and I have this disabled, since I'm not infrequently typing code, and getting an — where I specced a - can be no fun to debug)...

      Right-click in the text box and select "Substitutions". Smart dashes will replace -- with — when typed that way. It can also do smart quotes to make them curly... which is even worse for code.

      (turning those on...)

      It is disappointing that proper typography is a sign of AI influence… (wait, that’s option semicolon? Things you learn) though I think part of it is that humans haven’t cared about proper typography in the past.

  • Just because you don’t, doesn’t mean other people don’t. Plenty of real humans use em dashes. You probably don’t realise that on some platforms it’s easy to type an em dash.

  • And where did you suppose AIs learned this, if not from us?

    Turns out lots of us use dashes — and semicolons! And the word “the”! — and we’re not going to stop just because others don’t like punctuation.

    • I'm starting to wonder if there's a real difference between the populations who use em dashes and those who think it's a sign of AI. The former are the ones who write useful stuff online, which the AIs were trained on, and the latter are the consumers who probably never paid attention to typography and only started commenting on dashes after they became a meme on LinkedIn.


    • I find it disturbing that many people don't seem to realize that chatbot output is forced into a strict format that it fills in recursively, because the patterns that LLMs recognize are no longer than a few paragraphs. Chatbots are choosing response templates based on the type of response that is being given. Many of those templates include unordered lists, and the unordered list marker that they chose was the em-dash.

      If a chatbot had to write freely, it would be word salad by the end of the length of the average chatbot response. Even its "free" templates are templates (I'm sure stolen from the standard essay writing guides), and the last paragraph is always a call to further engagement.

      Chatbots are tightly designed dopamine dispensers.

      edit: even weirder is people who think they use em-dashes at the rate of chatbots (they don't) even thinking that what they read on the web uses em-dashes at the rate of chatbots (it doesn't.) Oh, maybe in print? No, chatbots use them more than even Spanish writing, and they use em-dashes for quotation marks. It's just the format. I'm sure they regret it, but what are they going to replace them with? Asterisks or en-dashes? Maybe emoticons.


    • Books use it more liberally; internet writing, not so much. Also, some languages are much more prone to using it, while others practically never use it.

  • The AI is trained on human input. It uses the dash because humans did.

    • I'm skeptical this is the reason:

      - ChatGPT uses em dashes in basically every answer, while on average humans don't (the average user might not even be aware the character exists)

      - if the preference for em dashes came from the training set, other AIs would show the same bias (Gemini and Le Chat don't seem to use them at all)


  • Or at least not anymore, since this became the number one sign that a text was written with AI. Which is a bit sad, imo.

  • I do all the time, but might have to stop. Same with `…`.

    • I dislike the ellipsis character on its own merits, honestly. Too scrunched up, I think - ellipses in print are usually much wider, which looks better to me, and three periods approximate that more closely than the Unicode ellipsis.

  • That got a giggle out of me. Not entirely relevant, but AI tends to be overzealous in its use of emojis and punctuation, in a way people almost never are (too cumbersome on desktop, where the majority of typing work is done).

  • Academia certainly does, although, humorously, we also have professors making the same proclamation you do while using en or em dashes in their syllabi.

  • Keep in mind that now that people know what to pay attention to (em dashes, emojis, etc.), they will instruct the LLM not to use them, so yeah.

  • I absolutely bloody do -- though more commonly as a double dash when not at the keyboard -- and I'm so mad it was cargo-culted into the slop machines as a superficial signifier of literacy.