Comment by dom96

7 months ago

None of this seems to describe exactly what the problem with this new act is. Can someone ELI5 what this new law does that means it's no longer safe to run your own forum?

I think that the fact that no one is fully sure is part of the problem.

The act is intentionally very vague and broad.

Generally, the gist is that it's up to the platforms themselves to assess and identify risks of "harm", implement safety measures, keep records, and run audits. The guidance on what that means is very loose, but some examples might be stringent age verification, proactive and effective moderation, and thorough assessment of all algorithms.

If you were ever to be investigated, it would be up to someone to decide whether your measures were good enough or whether you have been found lacking.

This means you might need to spend significant time making sure that your platform can't allow "harm" to happen, and maybe you'll need to spend money on lawyers to review your "audits".

The repercussions of being found wanting can be harsh, so one has to ask whether it's still worth risking it all to run that online community.

  • Bearing in mind, this is also a country that will jail you for a meme posted online.

    • The full agenda of course is: if we jail someone for the meme, then we get to force the company to remove the meme, and then we get to destroy the company if it does not comply with exacting specifications within exact timeframes. Thus full control of speech; teehee, modern technology brings modern loopholes! "Shut up, peon, you still have every right to go into your front yard and say your meme to the squirrels."


  • > If you were to ever be investigated, it will be up to someone to decide if your measures were good or you have been found lacking.

    This is the problem with many European (and I guess also UK) laws.

    GDPR is one notable example. Very few people actually comply with it properly. Hidden "disagree" options in cookie pop-ups and unauthorized data transfers to the US are almost everywhere, not to mention the "see personalized ads or pay" business model.

    Unlike with most American laws, GDPR investigations happen through a regulator, not a privately-initiated discovery process where the suing party has an incentive to dig up as much dirt as possible. In effect, you only get punished if you either really go overboard or are a company that the EU dislikes (which is honestly mostly just Meta at this point).

    • NOYB is a non-governmental organisation that has initiated many of the investigations against Meta. E.g., they recently filed a complaint against the social media app BeReal for not taking no for an answer and continuously asking for permission for data collection if you decline.

  • > The act is intentionally very vague and broad

    Exactly the complaint that everyone on here made about GDPR, saying the sky would fall in. If you read UK law like an American lawyer, you will find it very scary.

    But we don't have political prosecutors out to make a name for themselves, so it works OK for us.

From Wikipedia:

> The act creates a new duty of care of online platforms, requiring them to take action against illegal, or legal but "harmful", content from their users. Platforms failing this duty would be liable to fines of up to £18 million or 10% of their annual turnover, whichever is higher.
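
As a purely illustrative aside, the "whichever is higher" ceiling quoted above is just a maximum of two numbers. A minimal sketch in Python (the turnover figures are invented for the example):

```python
# Illustrative sketch of the fine ceiling as quoted above:
# up to £18 million or 10% of annual turnover, whichever is higher.
FIXED_CAP_GBP = 18_000_000
TURNOVER_SHARE = 0.10


def fine_ceiling(annual_turnover_gbp: float) -> float:
    """Upper bound on the penalty; the actual fine is 'up to' this amount."""
    return max(FIXED_CAP_GBP, TURNOVER_SHARE * annual_turnover_gbp)


# A hobby forum with negligible turnover still faces an £18m ceiling...
print(fine_ceiling(0))              # 18000000
# ...while a large platform's ceiling scales with its turnover.
print(fine_ceiling(1_000_000_000))  # 100000000.0 (10% of £1bn)
```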

  • Doesn't that 18 million minimum disproportionately affect smaller operations, risk-wise? Or is that the point?

    • Yes, but it sounds like part of the point is that you want to put the fear of the Lord into small-fry operators.

      They mention especially in their CSAM discussion that, in practice, a lot of that stuff ends up being distributed by smallish operators, by intention or by negligence, so if your policy goal is to deter it, you have to be able to spank those operators too (a rough sketch of what hash-matching involves follows this thread). [0]

      > In response to feedback, we have expanded the scope of our CSAM hash-matching measure to capture smaller file hosting and file storage services, which are at particularly high risk of being used to distribute CSAM.

      Surely we can all think of web properties that have gone to seed (and spam) after they outlive their usefulness to their creators.

      I wonder how much actual “turnover” something like 4chan turns over, and how they would respond to the threat of a 10% fine vs an £18mm one…

      [0] https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...


    • Yes. The regulation is set up to destroy smaller startups & organisations; the only folks who have a hope of complying with it are Big Tech.


    • It's a minimum maximum. The amount is still "up to", and courts rarely impose the maximum penalty for anything. It seems aimed at platforms that really break the rules but are run at minimal cost: basically, "what do you charge a minimal forum run at cost, whose sole purpose is breaking all these rules?"


    • It is not an 18 million minimum, it is up to 18 million... unless you are so big that the second criterion affects you, in which case it is up to that.

  • Is that 18 million levied on the company? Or the individual?

    If it's the company, the shareholders etc. are not liable.
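
On the mechanics of the "hash-matching" measure quoted from Ofcom above: it is essentially a lookup of file fingerprints against a list of known material. A minimal sketch, assuming a hypothetical hash list and a plain SHA-256 fingerprint (real deployments typically use perceptual hashes such as PhotoDNA or PDQ, supplied via bodies like the IWF, so that re-encoded or slightly altered copies still match):

```python
import hashlib

# Hypothetical set of hex digests of known material, loaded from a supplied
# hash list. SHA-256 is used here only to illustrate the lookup; real systems
# rely on perceptual hashes so that near-duplicates are caught as well.
KNOWN_HASHES: set[str] = set()


def matches_known_material(file_bytes: bytes) -> bool:
    """Fingerprint an uploaded file and check it against the hash list."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES


# A file host would run this on upload and block (and typically report)
# any match before the file is made available to other users.
```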

It is essentially requiring tech companies to work for the UK government as part of law enforcement. They are required to monitor and censor users or face fines, and Ofcom has the ability to shut down things it doesn't like.

This basically ensures that the only people allowed to host online services for other people in the UK will be large corporations, as they are the only ones who can afford the automation and moderation requirements imposed by this bill.

You should be able to self-host content, but you can't do something like operate a forum website or other smaller social media platform unless you can afford to hire lawyers and spend thousands of dollars a month on moderators and/or a bulletproof moderation system.

Otherwise you risk simply getting shut down by Ofcom. Or you can do everything you are supposed to do and get shut down anyway. Good luck navigating their appeals process.

  • I don't mind getting shut down so much as I mind getting a fine for millions when my small little website doesn't make any money.

    But surely no right-minded judge would do such a thing, right?

https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

You need to do a risk assessment and keep a copy. Depending on how risky things are, you need to put more mitigations in place.

If you have a neighbourhood events thing that people can post to, and you haven't had complaints and generally keep an eye out for misuse, that's it.

If you run a large-scale chat room for kids with suicidal thoughts, where unvetted adults can talk to them in DMs, you're going to need a higher set of mitigations and processes in place.

Scale is important, but it's not the only determining factor. An example of low risk for suicide harm is:

> A large vertical search service specialised in travel searches, including for flights and hotels. It has around 10 million monthly UK users. It uses recommender systems, including for suggesting destinations. It has a basic user reporting system. There has never been any evidence or suggestion of illegal suicide content appearing in search results, and the provider can see no way in which this could ever happen. Even though it is a large service, the provider concludes it has negligible or no risk for the encouraging or assisting suicide offence

An example of high risk for grooming is:

> A social media site has over 10 million monthly UK users. It allows direct messaging and has network expansion prompts. The terms of service say the service is only for people aged 16 and over. As well as a content reporting system, the service allows users to report and block other users. While in theory only those aged 16 and over are allowed to use the service, it does not use highly effective age assurance and it is known to be used by younger children. While the service has received few reports from users of grooming, external expert organisations have highlighted that it is known to be used for grooming. It has been named in various police cases and in a prominent newspaper investigation about grooming. The provider concludes the service is high risk for grooming
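
For a small operator, the practical artefact behind all of this is the written risk assessment you keep a copy of. Purely as an illustration (this is not an Ofcom template; every field name and value below is an assumption), a record for one risk area of something like the low-risk travel-search example above might be as simple as:

```python
# Illustrative only, not an official Ofcom format: the kind of record a small
# service might keep for a single risk area. All field names and values are
# invented for the example.
risk_assessment_entry = {
    "service": "hypothetical travel search site",
    "risk_area": "encouraging or assisting suicide",
    "factors_considered": [
        "no evidence or suggestion of such content appearing in results",
        "no obvious way such content could appear",
        "basic user reporting system in place",
    ],
    "conclusion": "negligible or no risk",
    "mitigations": ["keep the reporting system", "review periodically"],
    "assessed_on": "2025-01-01",  # date is made up
}
```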