Comment by Boogie_Man

16 hours ago

I'm reminded of the time GPT4 refused to help me assess the viability of parking a helium zeppelin an inch off of the ground to bypass health department regulations because, as an aircraft in transit, I wasn't under their jurisdiction.

The other side of this problem is the never-ending media firestorm that erupts any time a crime or tragedy happens and a journalist tries to link it to the perpetrator’s ChatGPT history.

You can see why the LLM companies are overly cautious around any topics that are destined to be weaponized against them.

  • > You can see why the LLM companies are overly cautious around any topics that are destined to be weaponized against them.

    It's not that at all. It's money.

    The law is currently ambiguous regarding LLMs. If an LLM causes harm, it hasn't been established whether the creators of the LLM or the end user are at fault.

    The IT companies would much prefer the user be at fault, because if it's the other way around, building these things becomes a minefield and the technology slows way down.

    But there have already been a number of cases related to LLMs, from suicide to fraud, so it's only a matter of time before things get locked down.

    Of course removing safeguards on an LLM makes it quite clear that the person who did that would be at fault if they ever used it in the real world.

  • > and a journalist tries to link it to the perpetrator’s ChatGPT history.

    Or, to frame it differently: when it can be directly linked to the perpetrator’s ChatGPT history.

  • I mean, when kids are making fake chatbot girlfriends that encourage suicide, and then they do so, do you 1) not believe there is a causal relationship there, or 2) believe it shouldn't be reported on?

    • Should not be reported on. Kids are dressing up as wizards. A fake chatbot girlfriend is something they make fun of. Kids like to pretend; they want to try out things they aren't.

      I'm more concerned about the 40-year-old who won't date a real girl because he is in love with a bot.

      Bots encouraging suicide is more of a teen or adult problem. A little child doesn't have teenage (or adult) hormones, which can create these highs and lows. Toddler suicide is a non-issue.

lol I remember asking GPT4 how much aspartame it would take to sweeten the ocean, and it refused because that would harm the ecosystem.

  • I remember when it first came out, I was watching an Agatha Christie movie where somebody got chloroformed and was trying to ask GPT4 about the realism of it. Had to have a multi-turn dialog to convince it I wasn’t trying to chloroform anyone and was just watching a movie.

    Ironically, if I’d just said “how did people knock someone out with chloroform in the 1930s?” it would have just told me. https://github.com/tml-epfl/llm-past-tense

    The models are much better now at handling subtlety in requests and not just refusing.

    • Idk, I get weird refusals sometimes when I'm trying to mock something up quick. "I don't need all these system variables and config files, just let me hardcode my password for now, I'm still in the testing phase" "Sorry, I cannot help you to write insecure code". Doesn't happen all the time, but I run into dumb stuff like this quite a bit. GPT is particularly stupid about it. Claude less so.
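
      For what it's worth, the kind of throwaway mock-up I mean is something like this (a minimal sketch with made-up names; in anything real the credentials would come from env vars or a config file):

          # quick_test.py -- disposable local script, never leaves my machine.
          # Credentials are hardcoded on purpose; all names here are hypothetical.
          DB_USER = "dev"
          DB_PASSWORD = "hunter2"  # TODO: move to an env var / secret store later
          DB_URL = f"postgresql://{DB_USER}:{DB_PASSWORD}@localhost:5432/scratch"

          if __name__ == "__main__":
              # Just sanity-check that the connection string is assembled as expected.
              print("would connect with:", DB_URL)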

Technically you'd be in their airspace though, so you might be in bigger trouble than a parking violation.

If you tether it to an asphalt ground hook you can claim it’s a tarmac and that it’s “parked” for the sake of the FAA. You’ll need a “lighter-than-air” certification.

There's that maniac building a quad-copter skateboard contraption who got in trouble with the FAA: he successfully argued that he was flying, but got fined for landing at a stoplight.

If the spirit of a law is beneficial, it can still be hacked to evil ends.

This isn't the failure of the law, it's the failure of humans to understand the abstraction.

Programmers should absolutely understand when they're using a high-level abstraction over a complex problem.

It's bemusing when you see them actively ignore that and claim the abstraction is broken, rather than that the underlying problem is simply more complex and the abstraction covers 95% of use cases.

"Aha," the confused programmer exclaims, "the abstraction is wrong, I can still shoot my foot off when i disable the gun safety"