
Comment by monegator

5 hours ago

> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

ever had a client second-guess you by replying with a screenshot from GPT?

ever asked anything in a public group, only to have a complete moron reply with a screenshot from GPT or - with at least a bit of effort there - a copy/paste of the wall of text?

no, people have no shame. they have a need for a little bit of (borrowed) self-importance and validation.

Which is why I applaud every code of conduct that has public ridicule as the punishment for wasting everybody's time.

The problem is that people seriously believe that whatever GPT tells them must be true, because… I don't even know. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction do not make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to society.

  • Last year I had to deal with a contractor who sincerely believed that a very popular library had a bug because it errored when parsing ChatGPT-generated JSON... I'm still shocked; this is seriously scary.
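
    A minimal sketch of what was probably happening (assuming Python and the standard json module; the anecdote names neither the language nor the library): the parser behaves correctly, the input just isn't valid JSON.

        import json

        # Typical raw LLM output: JSON wrapped in a markdown fence,
        # plus a trailing comma - neither of which is legal JSON
        llm_output = '```json\n{"items": [1, 2, 3,]}\n```'

        try:
            json.loads(llm_output)
        except json.JSONDecodeError as e:
            # The "very popular library" is doing its job correctly
            print(f"Malformed input, not a parser bug: {e}")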

  • My boss says it's because they are backed by trillion dollar companies and the companies would face dire legal threats if they did not ensure the correctness of AI output.

    • Point out to your boss that trillion dollar companies have million dollar lawyers making sure their terms of service put all responsibility on the user, and if someone still tries to sue them they hire $2000/hour litigators from top law firms to deal with it.

    • If only every LLM shop out there would put disclaimers on their page that they hope will absolve them of responsibility for correctness, so that your boss could make up his own mind... Oh wait.

  • I think people's attitude would be better calibrated to reality if LLM providers were legally required to call their service "a random drunk guy on the subway"

    E.g.

    "A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SOL server version" "Huh, I guess that's worth testing"

  • Billions of dollars of marketing have been spent to enable them to believe that, in order to justify the trillions of investment. Why would you invest a trillion dollars in a machine that occasionally, randomly gives wrong answers?

  • People's trust in LLMs imo stems from a lack of awareness of AI hallucination. Hallucination benchmarks are often hidden, or mentioned only hastily in marketing videos.

  • I don't remember exactly who said it, but at one point I read a good take - people trust these chatbots because there are big companies and billions behind them, and surely big companies test and verify their stuff thoroughly?

    But (as someone else described) GPTs and other current-day LLMs are probabilistic, and yet 99% of what they produce seems plausible enough.

  • I think in science fiction it’s one of the most common themes for the talking computer to be utterly horribly wrong, often resulting in complete annihilation of all life on earth.

    Unless I have been reading very different science fiction I think it’s definitely not that.

    I think it’s more the confidence and seeming plausibility of LLM answers.

    • People are literally taking Black Mirror storylines and trying to manifest them. I think they did a `s/dys/u/` and don't know how to undo it...

    • In terms of mass exposure, you're probably talking things like Cmdr Data from Star Trek, who was very much on the 'infallible' end of the fictional AI spectrum.

    • Sure, but this failure mode is not that. "AI will malfunction and doom us all" is pretty far from "AI will malfunction by sometimes confabulating stuff".

    • The stories I read had computers being utterly, horribly right, which resulted in attempts (sometimes successful) at annihilating humanity.

This sounds a bit like the "Asking vs. Guessing culture" discussion on the front page yesterday, with the "Guesser" being GP, who front-loads extra investigation, debugging, and maintenance work so the project maintainers don't have to do it, and the "Asker" being the client from your example, pasting the submission into ChatGPT and forwarding its response.

  • >> In Guess Culture, you avoid putting a request into words unless you're pretty sure the answer will be yes. Guess Culture depends on a tight net of shared expectations. A key skill is putting out delicate feelers. If you do this with enough subtlety, you won't even have to make the request directly; you'll get an offer. Even then, the offer may be genuine or pro forma; it takes yet more skill and delicacy to discern whether you should accept.

    delicate feelers are like octopus arms

I've also had the opposite.

I raise an issue or PR after carefully reviewing someone else's open source code.

They asked Claude to answer me; neither they nor Claude understood the issue.

Well, at least it's their repo, they can do whatever.

Not OP, but I don't consider these the same thing.

The client in your example isn't a (presumably) professional developer submitting code to a public repository and inviting the scrutiny of fellow professionals and potential future clients or employers.

  • I consider them to be the same attitude. Machine made it / Machine said it. It must be right, you must be wrong.

    They are sure they know better because they get a yes man doing their job for them.

Our CEO chiming in on a technical discussion between engineers: by the way, this is what Claude says: *some completely made-up bullshit*

  • I do want to counter that in the past, before AI, the CEO would just chime in with some completely off-the-wall bullshit from a consultant.

  • Hi CEO, thanks for the input. Next time we have a discussion, we will ask Claude instead of talking with whoever wrote the offending code.