X offices raided in France as UK opens fresh investigation into Grok

16 hours ago (bbc.com)

Honest question: What does it mean to "raid" the offices of a tech company? It's not like they have file cabinets with paper records. Are they just seizing employee workstations?

Seems like you'd want to subpoena source code or gmail history or something like that. Not much interesting in an office these days.

  • Sadly, the media calls the lawful use of a warrant a 'raid', but that's another issue.

    The warrant will have detailed what they are looking for. French warrants (and the legal system!) are quite a bit different from their US counterparts, but in broad terms they operate similarly. It suggests that an enforcement agency believes there is evidence of a crime at the offices.

    As a former IT/operations guy I'd guess they want on-prem servers with things like email and shared storage, stuff that would hold internal discussions about the thing they were interested in, but that is just my guess based on the article saying this is related to the earlier complaint that Grok was generating CSAM on demand.

  • Gather evidence against employees, use that evidence to put them under pressure to testify against their employer or grant access to evidence.

    Sabu was put under pressure by the FBI: they threatened to place his kids into foster care.

    That was legal. Guess what, similar things would be legal in France.

    We all forget that money is nice, but nation states have real power. Western liberal democracies just rarely use it.

    The same way the president of the USA can order a drone strike on a Taliban warlord, the president of France could order Musk's plane to be escorted to Paris by three fighter jets.

    • > We all forget that money is nice, but nation states have real power.

      Interesting point. There's a top gangster who can buy anything in the prison commissary; and then there's the warden.

      2 replies →

    • > We all forget that money is nice, but nation states have real power.

      I remember something (probably linked from here), where the essayist was comparing Jack Ma, one of the richest men on earth, and Xi Jinping, a much lower-paid individual.

      They indicated that Xi had Ma in a chokehold. I think he "disappeared" Ma for some time. Don't remember exactly how long, but it may have been over a year.

      7 replies →

    • > Gather evidence against employees

      I'm sure they have much better and quieter ways to do that.

      Whereas a raid is #1 choice for max volume...

    • Wait, Sabu's kids were foster kids. He was fostering them. Certainly if he went to jail, they'd go back to the system.

      I mean, if you're a sole caretaker and you've been arrested for a crime, and the evidence looks like you'll go to prison, you're going to have to decide what to do, with the care of your kids on your mind. I suppose that would pressure you to become an informant instead of taking a longer prison sentence, but there's pressure to do that anyway, like not wanting to be in prison for a long time.

    • >Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.

      >That was legal. Guess what, similar things would be legal in France.

      lawfare is... good now? Between Trump being hit with felony charges for falsifying business records (lawfare is good?) and Lisa Cook getting prosecuted for mortgage fraud (lawfare is bad?), I've honestly lost track at this point.

      >The same way the president of the USA can order a Drone strike on a Taliban war lord, the president of France could order Musks plane to be escorted to Paris by 3 Fighter jets.

      What's even the implication here? That they're going to shoot his plane down? If there's no threat of violence, what does the French government even hope to achieve with this?

      8 replies →

    • > Western liberal democracies just rarely use it.

      Also, they are restricted in how they use it, and defendants have rights and due process.

      > Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.

      Though things like that can happen, and they are very serious.

      8 replies →

    • > Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.

      This is pretty messed up btw.

      The child welfare system in the USA is very messed up. It is not uncommon for minority families to lose the right to parent their children over very innocuous things that would not happen to a non-oppressed class.

      It is just another way for the justice/legal system to pressure families that have not been convicted or penalized under the supervision of a court.

      And this isn't the only lever they use.

      Every time I read crap like this I just think of Aaron Swartz.

      1 reply →

  • Offline syncing of Outlook could reveal a lot of emails that would otherwise be on a foreign server. A lot of people save copies of documents locally as well.

    • Most enterprises have fully encrypted workstations, when they don't use VMs where the desktop is just a thin client that doesn't store any data. So there should really be nothing of interest in the office itself.

  • Whether you are a tech company or not, there's a lot of data on computers that are physically in the office.

    • Except when they have encryption, which should be the standard? I mean, how much data would authorities actually retrieve when most stuff is located on X's servers anyway? I have my doubts.

      27 replies →

  • It sounds better in the news when you do a raid. These things are generally not done for any purpose other than to communicate a message and score political points.

  • I had the same thought - not just about raids, but about raiding a satellite office. This sounds like theater begging for headlines like this one.

  • These days many tech company offices have a "panic button" for raids that will erase data. Uber is perhaps the most notorious example.

    • >notorious

      What happened to due process? Every major firm should have a "dawn raid" policy to comply while preserving rights.

      Specific to the Uber case(s), if it were illegal, then why didn't Uber get criminal charges or fines?

      At best there's an argument that it was "obstructing justice," but logging people off, encrypting, and deleting local copies isn't necessarily illegal.

      7 replies →

    • This is a perfect way for the legal head of the company in-country to visit some jails.

      They will explain that it was done remotely and whatnot, but then the company will be closed in the country. Whether this matters for the mothership is another story.

      6 replies →

    • Or they just connect to a mothership with keys on the machine. The authorities can have the keys, but alas, they're useless now, because there is some employee watching the surveillance cameras in the US, and he pressed a red button revoking all of them. What part of this is illegal?

      Obviously, the government can just threaten to fine you any amount, close operations or whatever, but your company can just decide to stop operating there, like Google after Russia imposed an absurd fine.
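
      A minimal sketch of that kind of dead-man switch, assuming a stdlib-only Python client (the key path, status endpoint, and polling interval are all invented for illustration):

        import json
        import os
        import time
        import urllib.request

        KEY_PATH = "/var/secrets/wrapped-key"             # hypothetical key location
        STATUS_URL = "https://hq.example.com/key-status"  # hypothetical HQ endpoint

        while True:
            # Ask the mothership whether our local key is still valid.
            with urllib.request.urlopen(STATUS_URL) as resp:
                status = json.load(resp)
            if status.get("revoked"):
                # Someone pressed the red button: overwrite and delete the
                # local key, making everything encrypted under it unreadable.
                with open(KEY_PATH, "r+b") as f:
                    f.write(b"\0" * os.path.getsize(KEY_PATH))
                os.remove(KEY_PATH)
                break
            time.sleep(30)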

      1 reply →

  • They do have some physical records, but it would be mostly investigators producing a warrant and forcing staff to hand over administrative credentials to allow forensic data collection.

  • > Seems like you'd want to subpoena source code or gmail history or something like that.

    This would be done in parallel for key sources.

    There is a lot of information on physical devices that is helpful, though. Even discovering additional apps and services used on the devices can lead to more discovery via those cloud services, if relevant.

    Physical devices also hold a lot of other useful information: files people are actively working on, saved snippets and screenshots of important conversations, and synced data that might be easier to get offline than through legal means against the providers.

    In outright criminal cases it's not uncommon for individuals to keep extra information on their laptop, phone, or a USB drive hidden in their office as an insurance policy.

    This is yet another good reason to keep your work and personal devices separate, as hard as that can be at times. If there's a lawsuit you don't want your personal laptop and phone to disappear for a while.

    • Sure, it might be on the device, but they would need a password to decrypt the laptop's storage to get any of the data. There's also the possibility of the MDM software making it impossible to decrypt if given a remote signal. Even if you image the drive, you can't image the secure enclave, so if it is wiped the data is impossible to retrieve.
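
      As a toy sketch of why that wipe is final (Fernet here is just a stand-in for the hardware key wrapping; this illustrates the principle, not any specific MDM product):

        from cryptography.fernet import Fernet

        enclave_key = Fernet.generate_key()  # held only inside the secure enclave
        dek = Fernet.generate_key()          # data-encryption key for disk contents

        wrapped_dek = Fernet(enclave_key).encrypt(dek)       # stored on disk
        ciphertext = Fernet(dek).encrypt(b"sensitive file")  # stored on disk

        # A forensic image captures wrapped_dek and ciphertext, never enclave_key.
        enclave_key = None  # the "wipe": the enclave discards its key on signal

        # With enclave_key gone, wrapped_dek can no longer be unwrapped, so the
        # imaged ciphertext stays unreadable no matter how good the imaging is.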

      1 reply →

  • Gather evidence.

    I assume that they have opened a formal investigation and are now going to the office to collect/purloin evidence before it's destroyed.

    Most FAANG companies have training specifically for this. I assume X doesn't anymore, because they are cool and edgy, and staff training is for the woke.

  • Why is this the most upvoted question? Obsessing over pedantry rather than the main thrust of what's being discussed

  • I read somewhere that Musk (or maybe Thiel) companies have processes in place to quickly offload data from a location to other jurisdictions (and destroy the local data) when they detect a raid happening. Don't know how true it is though. The only insight I have into their operations is the amazing speed with which people are badged in and out of his various gigafactories. It "appears" that they developed custom badging systems for people driving into gigafactories, to cut the time needed to begin work. If they are doing that kind of stuff then there has got to be something in place for a raid. (This is second-hand, so take it with a grain of salt.)

    EDIT: It seems from other comments that it may have been Uber I was reading about. The badging system I have personally observed outside the Gigafactories. Apologies for the mixup.

Guess that will be a SpaceX problem soon enough. What a mess.

  • I wonder if the recent announcement spurred them into making a move now rather than later.

    • The merger was most likely now because they have to do it before the IPO. After the IPO, there’s a whole process to force independent evaluation and negotiation between two boards / executives, which would be an absolute dumpster fire where Musk controls both.

      When they’re both private, fine, whatever.

      1 reply →

  • How was that move legal anyway? Like... a lot of people and governments gave Musk money to develop, build, and launch rockets. And now he's using it to bail out his failing social media network and CSAM-peddling AI service.

    • Once he launched the rockets he can do whatever he wants with the profit. And he wants to train Grok.

I remember encountering questionable hentai material (by accident) back in the Twitter days. But back then Twitter was a leftist darling.

  • I think there's a difference between "user-uploaded material isn't properly moderated" and "the site's own chatbot generates porn on request based on images of women who didn't agree to it", no?

France24 article on this: https://www.france24.com/en/france/20260203-paris-prosecutor...

lol, they summoned Elon for a hearing on 420

"Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,

  • >The Paris prosecutor's office said it launched the investigation after being contacted by a lawmaker alleging that biased algorithms in X were likely to have distorted the operation of an automated data processing system.

    I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?

    Distorted the operation how? By making their chatbot more likely to say stupid conspiracies or something? Is that even against the law?

    • > I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?

      GDPR and DMA actually have teeth. They just haven't been shown yet because the usual M.O. for European law violators is first, a free reminder "hey guys, what you're doing is against the law, stop it, or else". Then, if violations continue, maybe two or three rounds follow... but at some point, especially if the violations are openly intentional (and Musk's behavior makes that very very clear), the hammer gets brought down.

      Our system is based on the idea that we institute complex regulations, and when they are first introduced and stuff goes south, we assume innocent mistakes first.

      And in addition to that, there's the geopolitical aspect... basically, hurt Musk to show Trump that, yes, Europe means business and has the means to fight back.

      As for the allegations:

      > The probe has since expanded to investigate alleged “complicity” in spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organised group, and other offences, the office said in a statement Tuesday.

      The GDPR/DMA stuff was just the opener anyway. CSAM isn't liked by authorities at all, and genocide denial (we're not talking about Palestine here, calm your horses y'all, we're talking about Holocaust denial) is a crime in most European jurisdictions (as are the straight-arm salute and other displays of fascist insignia). We actually learned something from WW2.

  • Why "lol"?

      420 is a stoner number, and stoners lol a lot; I thought of Elmo's failed joint-smoking on JRE before I stopped watching.

      ...but then other commenters reminded me there is another thing on the same date, which might have been more of the actual troll at Elmo to get him all worked up.

      4 replies →

  • > lol, they summoned Elon for a hearing on 420

    No. It's 20 April in the rest of the world: 204.

> Prosecutors say they are now investigating whether X has broken the law across multiple areas.

One would think this step comes before a police raid.

This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

  • > This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

    The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

  • French prosecutors use police raids way more than those in other Western countries. Banks, political parties, ex-presidents, corporate HQs, worksites... Here, while white-collar crimes are punished about as much as in the US (i.e. very little), we do at least investigate them.

I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?

  • It was known that Grok was generating these images long before any action was taken. I imagine they’ll be looking for internal communications on what they were doing, or deciding not to do, during that time.

  • There was a WaPo article yesterday that talked about how xAI deliberately loosened Grok’s safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and “sticky” for users. xAI employees had to sign new waivers in the summer, and start working with harmful content, in order to train and enable those features.

    I assume the raid is hoping to find communications to establish that timeline, and maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!

    https://www.washingtonpost.com/technology/2026/02/02/elon-mu...

  • Maybe emails between the French office and the head office warning they may violate laws, and the response by head office?

  • Unlikely, if only because the statement doesn't mention CSAM. It does say:

    "Among potential crimes it said it would investigate were complicity in possession or organised distribution of images of children of a pornographic nature, infringement of people's image rights with sexual deepfakes and fraudulent data extraction by an organised group."

  • What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.

    What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'

    • Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it entirely, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”

      2 replies →

  • > out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?

    You're not too far off.

    There was a good article in the Washington Post yesterday about many, many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.

    There were also prompts telling the AI to act angry or sexy or other things just to keep users addicted.

  • Moderation rules? Training data? Abuse metrics? Identities of users who generated or accessed CSAM?

    • Do you think that data is stored at the office? Where do you think the data is stored? The janitor's closet?

This is a show of resolve.

"Uh guys, little heads up: there are some agents of federal law enforcement raiding the premises, so if you see that. That’s what that is."

Why would X have offices in France? I'm assuming it's just to hire French workers? Probably a leftover from the pre-acquisition era.

Or is there any France-specific compliance that must be done in order to operate in that country?

  • X makes its money selling advertising. France is the obvious place to have an office selling advertising to a large European French-speaking audience.

    • Yes, Paris is an international capital and centrally located for Europe, the Middle East, and Africa. Many tech companies have sales offices there.

Finally, someone is taking action against the CSAM machine operating seemingly without penalty.

  • I am not a fan of Grok, but there has been zero evidence of it creating CSAM. For why, see https://www.iwf.org.uk/about-us/

    • CSAM does not have a universal definition. In Sweden, for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response. If you take a picture of a 14-year-old girl (the age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.

      No abuse of a real minor is needed.

      23 replies →

    • Are you implying that it's not abuse to "undress" a child using AI?

      You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.

      3 replies →

> The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.

I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven, controversial platforms, and start treating communication with the public that funds your existence in different terms. The goal should be to reach as many people as possible, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.

  • I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started restricting visibility for non-logged-in users, but at least Twitter was arguably a mostly open communication platform and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" in this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.

    • > official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users

      ... thereby driving up adoption far better than Twitter itself could. Ironic or what.

  • >I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms

    I think we are getting very close to the EU's own great firewall.

    There is currently a sort of identity crisis in the regulation. Big tech companies are breaking the laws left and right. So which is it?

    - fine harvesting mechanism? Keep as-is.

    - true user protection? Blacklist.

  • In an ideal world they'd just have an RSS feed on their site, and people (journalists!) would subscribe to it. Voilà!
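
    Consuming such a feed is a few lines in any language; a minimal Python sketch using the feedparser library (the feed URL is made up):

      import feedparser

      # Hypothetical press-release feed for a prosecutor's office.
      feed = feedparser.parse("https://justice.example.fr/presse.rss")
      for entry in feed.entries[:5]:
          print(entry.title, entry.link)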

  • This. What a joke. I'm still waiting on my tax refund from NYC for plastering "twitter" stickers on every publicly funded vehicle.

  • >The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.

    Who decides what communication is in the interest of the public at large? The Trump administration?

    • You appear to have posted a bit of a loaded question here; apologies if I'm misinterpreting your comment. It is, of course, the public that should decide what communication is of public interest, at least in a democracy operating optimally.

      I suppose the answer, if we're serious about it, is somewhat more nuanced.

      To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should happen democratically, via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and the citizen's interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.

      That aside - there are two separate problems that often get conflated when we talk about these platforms:

      - one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;

      - the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.

      A potential middle position could be to use commercial social platforms as secondary distribution instead of the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities, since dominant communications infrastructure has a quasi-public function (rest assured, I won't open that can of worms right now).

      Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.

      1 reply →

Once you've worked long enough in the software industry, you start to understand it's all just a fully planned economy.

Facebook offices should routinely be raided for aiding and profiting from the various scams propagated through ads on its platform.

Elon's in the files asking Epstein about "wild parties" and then doesn't seem to care about all this. Easy to draw a conclusion here.

  • All I've seen is Elon tried to invite himself to the "wild parties" and they told him he couldn't come and that they weren't doing them anymore lol. It's possible he went but, from what I've seen, he wasn't ever invited.

  • Elon is literally in the files, talking about going to the island. It's documented

    • Who knows who did what on this island, and I hope we'll figure it out. But in the meantime, going to this island and/or being friends with Epstein doesn't automatically make someone a pedo or rapist.

      5 replies →

    • You know the flight logs are public record and have been for a decade, right? We know (and have known for a while) exactly who was and wasn't there. Who was there: Obama, Bill Clinton, and Bill Gates (his frequency of visits cost him his marriage). Who wasn't there? Trump and Elon, because at the time they weren't important enough to get an invite. All of this is a matter of public record.

      1 reply →

> They have also summoned billionaire owner Elon Musk for questioning.

Good luck with that...

  • The thing is, a lot of the recent legal proceedings surrounding X are about whether X fulfilled the legally required due diligence and, if not, what level of negligence we are speaking about.

    And the thing about negligence that caused harm to humans (instead of, e.g., just financial harm) is that

    a) you can't opt out of responsibility; it doesn't matter what you put into your TOS or other contracts

    b) executives who are found responsible for the negligent actions of a company can be held _personally_ liable

    And independent of what X actually did, Musk, as the highest-level executive, personally:

    1) frequently made statements that imply gross negligence (to be clear, that isn't necessarily how X acted, which is the actually relevant part)

    2) claimed that all major engineering decisions etc. are his and no one else's (because he loves bragging about how good of an engineer he is)

    This means summoning him for questioning is, legally speaking, a must-have, independent of whether you expect him to show up or not. And he probably should take it seriously, even if that just means sending a different high-level executive from X instead.

[flagged]

  • If a user uses a tool to break the law, it's on the person who broke the law, not the people who made the tool. Knife manufacturers aren't to blame if someone gets stabbed, right?

    • This seems different. With a knife the stabbing is done by the human. That would be akin to a paintbrush or camera or something being used to create CSAM.

      Here you have a model that is actually creating the CSAM.

      It seems more similar to a robot that is told to go kill someone and does so. Sure, someone told the robot to do something, but the creators of the robot really should have had to put some safeguards in place to prevent it.

    • If the knife manufacturer willingly broke the law in order to sell it, then yes.

      If the manufacturer advertised that the knife is not just for cooking but also stabbing people, then yes.

      if the knife was designed to evade detection, then yes.

    • Text on the internet and all of that, but you should have added the "/s" to the end so people didn't think you were promoting this line of logic seriously.

    • If a knife manufacturer constructs an apparatus wherein someone can simply write "stab this child" on a whim to watch a knife stab a child, that manufacturer would in fact discover they are in legal peril to some extent.

    • I mean, no one's ever made a tool whose scope is "making literally anything you want," including, apparently, CSAM. So we're in a bit of uncharted waters, really. Mostly, no, I would agree it's a bad idea to hold the makers of a tool responsible for how it's used. And this is an especially egregious offense on the part of said tool-maker.

      Like how I see this is:

      * If you can't restrict people from making kiddie porn with Grok, then it stands to reason at the very least, access to Grok needs to be strictly controlled.

      * If you can restrict that, why wasn't that done? It can't be completely omitted from this conversation that Grok is, pretty famously, the "unrestrained" AI, which in most respects means it swears more, quotes and uses highly dubious sources of information that are friendly to Musk's personal politics, and occasionally spouts white nationalist rhetoric. So as part of their quest to "unwoke" Grok did they also make it able to generate this shit too?

  • This is really amusing to watch, because everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing).

    There's nothing special about Grok in this regard. It wasn't trained to be a MechaHitler, nor to generate CSAM. It's just relatively uncensored[1] compared to the competition, which means it can be easily manipulated to do what the users tell it to, and that is biting Musk in the ass here.

    And just to be clear, since apparently people love to jump to conclusions - I'm not excusing what is happening. I'm just pointing out the fact that the only special thing about Grok is that it's both relatively uncensored and easily available to a mainstream audience.

    [1] -- see the Uncensored General Intelligence leaderboard where Grok is currently #1: https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard

    • > everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing)

      Well, yes. You can make child pornography with any video-editing software. How is this exoneration?

      6 replies →

    • Maybe tying together an uncensored AI model and a social network just isn't something that's ethical / should be legal to do.

      There are many things where each is legal/ethical to provide, and where combining them might make business sense, but where we, as a society have decided to not allow combining them.

  • Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.

    • >Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user.

      There is no way this is true, especially if the system is PaaS-only. Additionally, the system should have a way to tell if someone is attempting to bypass its safety measures and act accordingly.

    • > if requested by a savvy user

      Grok brought that thought all the way to "... so let's not even try to prevent it."

      The point is to show just how aware X were of the issue, and that they chose to repeatedly do nothing against Grok being used to create CSAM and probably other problematic and illegal imagery.

      I can't really doubt they'll find plenty of evidence during discovery; it doesn't have to be physical things. The raid stops office activity immediately, and marks the point in time after which they can be accused of destroying evidence if they erase relevant information to hide internal comms.

      1 reply →

    • >Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.

      If every AI system can do this, and every AI system is incapable of preventing it, then I guess every AI system should be banned until they can figure it out.

      Every banking app on the planet "is capable" of letting a complete stranger go into your account and transfer all your money to their account. Did we force banks to put restrictions in place to prevent that from happening, or did we throw our arms up and say: oh well the French Government just wants to pick on banks?

      5 replies →

It's cool that not every law enforcement agency in the world is under the complete thumb of U.S. based billionaires.

I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non-consensual sexual imagery, including of minors. And, when notified, doing nothing about it. If anything it should be an embarrassment that France is the only one doing this.

(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)

  • > If anything it should be an embarrassment that France are the only ones doing this.

    As mentioned in the article, the UK's ICO and the EC are also investigating.

    France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.

  • > when notified, doing nothing about it

    When notified, he immediately:

      * "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo 
    
      * locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
    

    Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...

    • You and I must have different definitions of the word “immediately”. The article you posted is from January 15th. Here is a story from January 2nd:

      https://www.bbc.com/news/articles/c98p1r4e6m8o

      > Have the other AI companies followed suit? They were also allowing users to undress real people

      No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.

      4 replies →

Surprised the EU hasn’t banned it yet given that the platform is manipulated by Musk to destabilize Europe and move it towards the far right. The child abuse feels like a smaller problem compared to that risk.

  • In my opinion, the reason they raided the offices over CSAM is that there are laws on the books for CSAM and not for social manipulation. If people could be jailed for manipulation there would be no social media platforms, lobbyists, political campaign groups, or advertisements. People are already being manipulated by AI.

    On a related note, given AI is just a tool and requires someone to tell it to make CSAM, I think they will have to prove intent, possibly by grabbing chat logs, emails, and other internal communications, but I know very little about French or international law.

    • It's broader and mentioned in the article:

      >French authorities opened their investigation after reports from a French lawmaker alleging that biased algorithms on X likely distorted the functioning of an automated data processing system. It expanded after Grok generated posts that allegedly denied the Holocaust, a crime in France, and spread sexually explicit deepfakes, the statement said.

      1 reply →

    • Hold on, are you saying people should be able to be jailed for manipulation? Where would that end? Could I be jailed for posting a restaurant review you feel manipulated you? Anyone stating an opinion could be construed as manipulating. That is beyond a slippery slope; that is an authoritarian nightmare.

      4 replies →

    • > I think the reason they raided the offices for CSAM

      Sigh. The French raid statement makes no mention of CSAM.

    • I had to make the choice not to even use Grok (I wasn't overly interested in the first place, but wanted to review how it might compare to the other tools), because even just the Explore option shows photos and videos of CSAM, CSAM-adjacent material, and other "problematic" things in a photorealistic manner (such as implied bestiality).

      Looking at the prompts below some of those images shows that even now, there's almost zero effort at Grok to filter prompts that are blatantly looking to create problematic material. People aren't being sneaky and smart and wordsmithing subtle cues to try to bypass content filtering; they're often saying "create this" bluntly and directly, and Grok is happily obliging.

    • Given America passed PAFACA (intended to ban TikTok, which Trump instead put in the hands of his friends), I would think Europe would also have a similar law. Is that not the case?

      3 replies →

  • There's no tool, technological or legal, to block/ban a website EU-wide.

    • The EU can declare a company behind a criminal enterprise and the financial industry must then prevent EU citizens from transacting with them.

    • They will set their DNS servers to stop resolving X's domains. That can be done in each country. They can use deep packet inspection tools and go from there. If the decision is EU-wide then they will roll that out.
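
      For illustration, resolver-side blocking is a one-liner in common DNS software; a sketch assuming unbound is the resolver in use:

        server:
            # Answer NXDOMAIN for the domain and all its subdomains.
            local-zone: "x.com" always_nxdomain

      Anyone who switches to a non-complying resolver bypasses this, hence the escalation to deep packet inspection mentioned above.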

      2 replies →

  • I am not surprised at all. Independent of whether this is true, such a decision from the EU would never be acted upon. The number of layers between the one who says "ban it" somewhere in Brussels and the operator blackholing the DNS and filtering traffic is measured in decades.

    • Why do you think that? It can take a few years for national laws to be put in place, but that also depends on how much certain countries push it. Regarding internet traffic, I assume a few specific countries that route most of the traffic would be enough to stop operations for the most part.

      1 reply →

  • Simply because if you were to ban this type of platform, you wouldn't need Musk to "move it towards the far right": you would already be the very definition of a totalitarian regime.

    But whatever zombie government France is running can't "ban" X anyway, because it would get them one step closer to the guillotine. Like the UK or Germany, it is a tinderbox cruising on a 10-20% approval rating.

    If "French prosecutor" want to find a child abuse case they can check the Macron couple Wikipedia pages.

    • What do you mean by "this type of platform"? Platforms that don't follow (any) national laws have been banned in multiple countries over the years.

      By itself this isn't extraordinary in a democracy.

      1 reply →

    • > if you were to ban this type of platform you wouldn't need Musk to "move it towards the far right" because you would already be the very definition of a totalitarian regime

      Paradox of tolerance. (The American right being Exhibit A for why trying to let sunlight disinfect a corpse doesn’t work.)

  • Big platforms and media are only good if they try to move the populace to the progressive, neoliberal side. Otherwise we need to put their executives in jail.

  • > The child abuse feels like a smaller problem compared to that risk.

    I think we can and should all agree that child sexual abuse is a much larger and more serious problem than political leanings.

    It's ironic as you're commenting about a social media platform, but I think it's frightening what social media has done to us with misinformation, vilification, and echo chambers, if we've reached the point of thinking political leanings are worse than murder, rape, or child sexual abuse.

    • In fairness, AI-generated CSAM is nowhere near as evil as real CSAM. The reason possession of CSAM was such a serious crime is that its creation used to necessitate the abuse of a child.

      It's pretty obvious the French are deliberately conflating the two to justify attacking a political dissident.

      3 replies →

  • Almost like the EU can't just ban speech on a whim the way US far right people keep saying it can.

  • [flagged]

    • > fairly open platform where people can choose what to post and who to follow.

      It is well known Musk amplifies his own speech and the words of those he agrees with on the platform, while banning those he doesn’t like.

      https://www.theguardian.com/commentisfree/2024/jan/15/elon-m...

      > could you clarify what the difference is between the near right and the far right?

      It’s called far-right because it’s further to the right (starting from the centre) than the right. Wikipedia is your friend, it offers plenty of examples and even helpfully lays out the full spectrum in a way even a five year old with a developmental impairment could understand.

      https://en.wikipedia.org/wiki/Far-right_politics

      8 replies →

    • Elon fiddles with the algorithm to boost certain accounts. Some accounts are behind an auth wall and others are not. It’s open but not even.

      8 replies →

    • Far right to me is advocating for things that discriminate based on protected traits like race, sex, etc. So if you’re advocating for “white culture” above others, that’s far right. If you’re advocating for the 19th amendment (women’s right to vote) to be repealed (as Nick Fuentes and similar influencers do), that’s also far right. Advocating for ICE to terrorize peaceful residents, violate constitutional rights, or outright execute people is also far right.

      Near right to me is advocating for things like lower taxes or different regulations or a secure border (but without the deportation of millions who are already in the country and abiding by laws). Operating the government for those things while still respecting the law, upholding the constitution, defending civil rights, and avoiding the deeply unethical grifting and corruption the Trump administration has normalized.

      Obviously this is very simplified. What are your definitions out of curiosity?

      4 replies →

I suppose those are SpaceX offices now that they merged.

    • So France is raiding the offices of a US military contractor?

    • How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?

      The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.

    • I know it's hard for you to grasp, but in France, French laws and jurisdiction apply, not those of the United States.

I guess this means that building the neverending 'deepfake CSAM on demand machine' was a bad idea.

Interesting. This is basically the second enforcement action on speech / images that France has taken - the first was Pavel Durov @ Telegram. He eventually made changes to Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.

I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.

LinkedIn is not a replacement for Twitter, though. I'm curious if they'll come back post-settlement.

  • In what world is generating CSAM a speech issue? It's really doing a disservice to actual free speech issues to frame it as such.

    • If pictures are speech, then either CSAM is speech, or you have to justify an exception to the general rule.

      CSAM is banned speech.

    • The point of banning real CSAM is to stop its production, because the production is inherently harmful. The production of AI- or human-generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.

      9 replies →

  • Very different charges however.

    Durov was held on suspicion that Telegram was willingly failing to moderate its platform and allowing drug trafficking and other illegal activities to take place.

    X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.

    Note that both relate to direct violations of data safety law or association with separate criminal activities; neither is about speech.

    • I like your username, by the way.

      CSAM was also the lead in the 2024 news headlines about the French prosecution of Telegram. I didn't follow the case enough to know where it went, or what the judge thought was credible.

      From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.

      Obviously, assassinations themselves, not so much.

      2 replies →

  • >but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard

    Censorship increases homogeneity, because it reduces the range of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is the resilience of the people in power.

    • You were downvoted -- a theme in this thread -- but I like what you're saying. I disagree, though, on a global scale. By resilience, I mean to reference something like a monoculture plantation vs a jungle. The monoculture plantation is vulnerable to anything that figures out how to attack it. In a jungle, a single plant or set might be vulnerable, but something that can attack all the plants is much harder to come by.

      Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.

      So, again, I propose that for the human race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.

      1 reply →

    • I really don't see reasonable enforcement of CSAM laws as a restriction on "diversity of thought".

    • This is precisely the point of the comment you are replying to: a balance has to be found and enforced.

  • I wouldn't equate the two.

    There's someone who was being held responsible for what was in encrypted chats.

    Then there's someone who published depictions of sexual abuse and minors.

    Worlds apart.

    • Telegram isn't encrypted. For all the marketing about security, it has none apart from TLS and an optional "secret chat" feature that you have to explicitly select, that only works with two participants, and that doesn't work very well.

      They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation of being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye but not to the police.

  • >but I do really like a heterogenous cultural situation

    Why isn't that a major red flag exactly?

    • Hi there - author here. Care to add some specifics? I can imagine lots of complaints about this statement, but I don't know which (if any) you have.

That's one way to steal the intellectual property and trade secrets of an AI company more successful than any French LLM maker. And maybe accidentally leak confidential info.