Comment by fuzzfactor
6 months ago
>the Online Safety Act was supposed to hold big tech to account, but in fact they're the only ones who will be able to comply... it consolidates more on those platforms.
This says it so well, acknowledging the work of a misguided bureaucracy.
Looks like it now requires an online community to have its own bureaucracy in place, standing by preemptively, ready to interact effectively in new ways with a powerful, growing, long-established authoritarian government bureaucracy of overwhelming size and increasing overreach.
Measures like this are promulgated in such a way that only large, highly prosperous outfits beyond a certain size can justify keeping their own bureaucracies on standby full-time, staffed heavily enough to match the scale of the government bureaucracy concerned and to handle concerns that mattered naught before. Especially when there are new open-ended provisions for unpredictable show-stoppers, now fiercely codified to the distinct disadvantage of so many non-bureaucrats just because they are online.
If you think you are going to be able to rise to the occasion and dutifully establish your own embryonic bureaucracy for the first time to cope with this type of unstable landscape, you are mistaken.
It was already bad enough before, without a newly imposed moving target bigger than everything else combined :\
Nope, regulations like these only work for firms that already have a prominent, well-funded, full-time bureaucracy of their own, long established after growing in response to the less onerous mandates of the past. Anyone else who cannot just take this in stride without batting an eye need not apply.
> Looks like it now requires an online community to have its own bureaucracy in place
What do you mean by bureaucracy in this case? Doing the risk assessment?
Good question.
I would say it's more that the prohibitive cost of compliance comes from the non-productive (or even anti-productive) nature of the activities needed to comply, on an ongoing basis.
An initial risk assessment is much more of a fixed target, with a goal that is in sight if not well within reach. Once it's behind you, it's possible to get back to putting more effort into productive work. Assessments are often sprinted through so things can get "back to normal" ASAP, which can be worth it sometimes. Other times it's a world of hurt if you don't pay attention to whether the goalposts are moving and whether the "sprint" might need to last forever.
Which can also be coped with successfully, much like dealing with large bureaucratic institutions as customers, since that's another time when you've got to have your own little bureaucracy: fully dedicated to the interaction and staffed well enough for continuous 24/7 problem-solving at a moment's notice. With just a skeleton crew, teamwork is stunted, since the most effective deployment is more like a relay race, where each member must pull the complete weight, go the distance, not drop the baton, and pass it with finesse.
While outrunning a pursuing horde and their support vehicles ;)
OP mentions this ( https://news.ycombinator.com/item?id=42440887 ):
> 1. Individual accountable for illegal content safety duties and reporting and complaints duties
> 2. Written statements of responsibilities
> 3. Internal monitoring and assurance
> 4. Tracking evidence of new and increasing illegal harm
> 5. Code of conduct regarding protection of users from illegal harm
> 6. Compliance training
> 7. Having a content moderation function to review and assess suspected illegal content
> 8. Having a content moderation function that allows for the swift take down of illegal content
> 9. Setting internal content policies
> 10. Provision of materials to volunteers
> 11. (Probably this because of file attachments) Using hash matching to detect and remove CSAM
> 12. (Probably this, but could implement Google Safe Browsing) Detecting and removing content matching listed CSAM URLs
> ...
> the list goes on.
Most of those don't seem like they would actually be much of a problem.
First, #2, #4, #5, #6, #9, and #10 only apply to sites that have more than 7 000 000 monthly active UK users or are "multi-risk". Multi-risk means being at medium to high risk in at least two different categories of illegal/harmful content.
If the site has been operating a long time and has not had a problem with illegal/harmful content, it is probably going to be low risk. There's a publication about risk levels here [1].
For the sake of argument though let's assume it is multi-risk.
#1 means having someone who has to explain and justify to top management what the site is doing to comply. It sounds like in the present case the person who would be handling compliance is also the person who is top management, so there's not really much to do here.
#2 means written statements saying which senior managers are responsible for the various things needed for compliance. For a site without a lot of different people working on it this means writing maybe a few sentences.
#3 is not applicable. It only applies to services that are large (more than 7 000 000 monthly active UK users) and are multi-risk.
#4 means keeping track of evidence of new or increasing illegal content and informing top management. Evidence can come from your normal processing, like dealing with complaints, moderation, and referrals from law enforcement.
Basically, keep some logs and stats and look for trends, and if any are spotted bring it up with top management. This doesn't sound hard.
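To make that concrete, here's a minimal sketch (Python, with made-up log and column names) of the kind of trend check that could run over an ordinary moderation log. It's only an illustration of "keep stats and look for trends", not anything the codes of practice actually prescribe:

    # Illustrative only: tally moderation-log entries by month and category,
    # then flag categories whose counts have risen several months in a row.
    # The log format (CSV with "timestamp" and "category" columns) is made up.
    import csv
    from collections import defaultdict
    from datetime import datetime

    def monthly_counts(log_path):
        counts = defaultdict(lambda: defaultdict(int))  # category -> month -> count
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                month = datetime.fromisoformat(row["timestamp"]).strftime("%Y-%m")
                counts[row["category"]][month] += 1
        return counts

    def rising(counts, streak=3):
        flagged = []
        for category, by_month in counts.items():
            series = [n for _, n in sorted(by_month.items())]
            recent = series[-(streak + 1):]
            if len(recent) == streak + 1 and all(a < b for a, b in zip(recent, recent[1:])):
                flagged.append(category)
        return flagged

    if __name__ == "__main__":
        for category in rising(monthly_counts("moderation_log.csv")):
            print(f"Reports of {category} are rising -- raise with top management")

The inputs are things a small forum already has (its moderation log), and the output is just something to mention to whoever counts as "top management".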
#5 You have to have something that sets the standards and expectations for the people who will be dealing with all this. This shouldn't be difficult to produce.
#6 When you hire people to work on or run your service you need to train them to do it in accord with your approach to complying with the law. This does not apply to people who are volunteers.
#7 and #8 These cover what you should do when you become aware of suspected illegal content. For the most part I'd expect sites could handle it the way they handle legal content that violates the site's rules (e.g., spam or off-topic posts).
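In other words, "suspected illegal" can just be another reason code in whatever report/flag queue the site already has, with "swift takedown" being the same hide/remove action used for spam. A rough sketch with hypothetical names (nothing here comes from the Act or the codes):

    # Hypothetical moderation-queue sketch: suspected-illegal reports reuse the
    # same takedown action as ordinary rule violations, just handled first.
    from dataclasses import dataclass

    @dataclass
    class Post:
        id: int
        body: str
        hidden: bool = False

    @dataclass
    class Report:
        post: Post
        reason: str  # e.g. "spam", "off-topic", "suspected-illegal"

    def review(report: Report, moderator_agrees: bool) -> None:
        if report.reason == "suspected-illegal":
            report.post.hidden = True   # take down promptly, sort out details after
        elif moderator_agrees:
            report.post.hidden = True   # normal rule-violation handling

    def queue_order(reports: list[Report]) -> list[Report]:
        # Suspected-illegal reports go to the front of the queue.
        return sorted(reports, key=lambda r: r.reason != "suspected-illegal")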
#9 You need a policy that states what is allowed on the service and what is not. This does not seem to be a difficult requirement.
#10 You have to give volunteer moderators access to materials that let them actually do the job.
#11 This only applies to (1) services with more than 7 000 000 monthly active UK users that have at least a medium risk of image-based CSAM, or (2) services with a high risk of image-based CSAM that either have at least 700 000 monthly active UK users or are a "file-storage and file-sharing service".
A "file-storage and file-sharing service" is:
> A service whose primary functionalities involve enabling users to:
> a) store digital content, including images and videos, on the cloud or dedicated server(s); and
> b) share access to that content through the provision of links (such as unique URLs or hyperlinks) that lead directly to the content for the purpose of enabling other users to encounter or interact with the content.
#12 Similar to #11, but without the "file-storage and file-sharing service" part, so it only applies if you have at least 700 000 monthly active UK users and are at high risk of CSAM URLs, or have at least 7 000 000 monthly active UK users and at least a medium risk of CSAM URLs.
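Mechanically, #11 and #12 boil down to a lookup at upload/post time against lists someone else maintains. Real deployments use perceptual hashes (e.g. PhotoDNA) and curated URL lists from bodies like the IWF rather than plain SHA-256 and text files, so treat this as a shape-of-the-thing sketch only:

    # Toy illustration of hash matching (#11) and URL-list matching (#12).
    # The file names are hypothetical; real hash/URL lists come from approved
    # third parties and use perceptual hashing, not SHA-256 of the raw bytes.
    import hashlib

    def load_list(path: str) -> set[str]:
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    KNOWN_HASHES = load_list("known_image_hashes.txt")
    BLOCKED_URLS = load_list("blocked_urls.txt")

    def attachment_is_blocked(data: bytes) -> bool:
        return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

    def post_has_blocked_url(urls: list[str]) -> bool:
        return any(u.lower() in BLOCKED_URLS for u in urls)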
[1] https://www.ofcom.org.uk/siteassets/resources/documents/onli...