Comment by etchalon

11 hours ago

[flagged]

If a user uses a tool to break the law, it's on the person who broke the law, not the people who made the tool. Knife manufacturers aren't to blame if someone gets stabbed, right?

  • This seems different. With a knife, the stabbing is done by the human. That would be akin to a paintbrush or camera being used to create CSAM.

    Here you have a model that is actually creating the CSAM.

    It seems more similar to a robot that is told to go kill someone and does so. Sure, someone told the robot to do it, but the creators of the robot really should have put some safeguards in place to prevent it.

  • If the knife manufacturer willingly broke the law in order to sell it, then yes.

    If the manufacturer advertised that the knife is not just for cooking but also stabbing people, then yes.

    If the knife was designed to evade detection, then yes.

  • Text on the internet and all of that, but you should have added the "/s" to the end so people didn't think you were promoting this line of logic seriously.

  • If a knife manufacturer constructs an apparatus wherein someone can simply write "stab this child" on a whim to watch a knife stab a child, that manufacturer would in fact discover they are in legal peril to some extent.

  • I mean, no one's ever made a tool whose scope is "making literally anything you want," including, apparently, CSAM. So we're in uncharted waters, really. Mostly, no, I would agree it's a bad idea to hold the makers of a tool responsible for how it's used. But this is an especially egregious offense on the part of said tool-maker.

    Like how I see this is:

    * If you can't restrict people from making kiddie porn with Grok, then it stands to reason that, at the very least, access to Grok needs to be strictly controlled.

    * If you can restrict that, why wasn't that done? It can't be completely omitted from this conversation that Grok is, pretty famously, the "unrestrained" AI, which in most respects means it swears more, quotes and uses highly dubious sources of information friendly to Musk's personal politics, and occasionally spouts white nationalist rhetoric. So, as part of their quest to "unwoke" Grok, did they also make it able to generate this shit?

This is really amusing to watch, because everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing).

There's nothing special about Grok in this regard. It wasn't trained to be a MechaHitler, nor to generate CSAM. It's just relatively uncensored[1] compared to the competition, which means it can easily be manipulated into doing whatever users tell it to, and that is biting Musk in the ass here.

And just to be clear, since apparently people love to jump to conclusions: I'm not excusing what is happening. I'm just pointing out that the only special thing about Grok is that it's both relatively uncensored and easily available to a mainstream audience.

[1] -- see the Uncensored General Intelligence leaderboard where Grok is currently #1: https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard

  • > everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing)

    Well, yes. You can make child pornography with any video-editing software. How is this exoneration?

    • I'm not talking about video-editing software; that's a different class of software. I'm talking about other generative AI models, which you can download onto your computer today and use to do the same things Grok does.

      > How is this exoneration?

      I don't know; you tell me where I said it was. I'm just stating a fact: Grok isn't unique here, and if you want to ban Grok because of it, then you also need to ban open-weight models that can do exactly the same thing.

    • Well, you couldn't sue the maker of the video-editing software because someone made child pornography with it. You would, quite sanely, go after the pedophiles themselves.

  • Maybe tying together an uncensored AI model and a social network just isn't something that's ethical, or that should be legal, to do.

    There are many things where each is legal and ethical to provide on its own, and where combining them might make business sense, but where we, as a society, have decided not to allow combining them.

  • Whataboutism on CSAM, classy. I hope this is the rock bottom for you and that things can only look up from here.

    • No. I'm just saying that people should be consistent: if they apply a certain standard to Grok, then they should apply the same standard to other things as well.

      Meanwhile, what I commonly see is people dunking on anything Musk-related because they dislike him, but giving similar things a free pass when they're not related to him.

Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.

  • >Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user.

    There is no way this is true, especially if the system is PaaS-only. Additionally, the system should have a way to tell when someone is attempting to bypass its safety measures and act accordingly.
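
    For illustration, a minimal sketch of the kind of gate being described: a classifier scores each prompt before generation, refusals are tracked per account, and repeated attempts to get past the filter trigger escalation. Every name here (`classify_prompt`, `flag_for_review`, `generate_image`) and every threshold is a hypothetical stand-in, not any real provider's API.

    ```python
    # Hypothetical pre-generation safety gate. All names and numbers
    # are illustrative assumptions, not a real provider's implementation.
    from collections import defaultdict

    BLOCK_THRESHOLD = 0.8  # assumed score above which a prompt is refused
    STRIKE_LIMIT = 3       # assumed number of refusals before escalation
    DISALLOWED_TERMS = ("example-disallowed-term",)  # stand-in policy

    _strikes = defaultdict(int)  # per-account count of refused prompts

    def classify_prompt(prompt: str) -> float:
        # Placeholder: a real service would use a trained safety
        # classifier here, not a keyword list.
        return 1.0 if any(t in prompt.lower() for t in DISALLOWED_TERMS) else 0.0

    def flag_for_review(user_id: str) -> None:
        # Hypothetical escalation hook (manual review, suspension, ...).
        print(f"escalating {user_id} for manual review")

    def generate_image(prompt: str) -> str:
        # Stand-in for the actual image-generation call.
        return f"<image for: {prompt!r}>"

    def handle_request(user_id: str, prompt: str) -> str:
        score = classify_prompt(prompt)
        if score >= BLOCK_THRESHOLD:
            _strikes[user_id] += 1
            if _strikes[user_id] >= STRIKE_LIMIT:
                flag_for_review(user_id)  # repeated probing, not a one-off
            return "Request refused."
        return generate_image(prompt)
    ```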

  • > if requested by a savvy user

    Grok brought that thought all the way to "... so let's not even try to prevent it."

    The point is to show just how aware X was of the issue, and that they repeatedly chose to do nothing about Grok being used to create CSAM and probably other problematic and illegal imagery.

    I don't really doubt they'll find plenty of evidence during discovery; it doesn't have to be physical. The raid stops office activity immediately and marks the point in time after which they can be accused of destroying evidence if they erase relevant information to hide internal comms.

  • >Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.

    If every AI system can do this, and every AI system is incapable of preventing it, then I guess every AI system should be banned until they can figure it out.

    Every banking app on the planet "is capable" of letting a complete stranger go into your account and transfer all your money to theirs. Did we force banks to put restrictions in place to prevent that from happening, or did we throw up our hands and say: oh well, the French government just wants to pick on banks?