Comment by IanCal

7 months ago

I doubt it. While it's always a bit of a gray area, the example given for "medium risk" is a site with 8M monthly users sharing images, no proactive scanning, and warnings from multiple major organisations that it has been used a few times to share CSAM.

The cases where they expect you to assess yourself as "medium risk" even without evidence of abuse actually happening are those where you have several major risk factors:

> (a) child users; (b) social media services; (c) messaging services; (d) discussion forums and chat rooms; (e) user groups; (f) direct messaging; (g) encrypted messaging.

Also, before someone comes along with a specific subset and argues that those particular factors are benign in combination:

> This is intended as an overall guide, but rather than focusing purely on the number of risk factors, you should consider the combined effect of the risk factors to make an overall judgement about the level of risk on your service

And frankly, if you have image sharing, groups, direct messaging, encrypted messaging, child users, a decent volume of traffic, and no automated processes for checking content, then you probably do have CSAM and grooming on your service, or there is clearly a risk of it happening.

The problem is this: if you don't have basic moderation, your forum will be abused for those various illegal purposes.

Having a modicum of rule enforcement and some basic abuse protections (say: new users can't upload files) goes a long way; a rough sketch of that kind of gate is below.
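
As a minimal sketch of what "new users can't upload files" might look like, assuming a hypothetical `User` record and made-up thresholds (none of this comes from any particular forum software):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative thresholds -- tune to your community's size and risk.
MIN_ACCOUNT_AGE = timedelta(days=7)
MIN_APPROVED_POSTS = 10

@dataclass
class User:
    created_at: datetime        # registration time (timezone-aware)
    approved_post_count: int    # posts that passed review
    is_trusted: bool = False    # manually vetted by a moderator

def can_upload_files(user: User, now: datetime | None = None) -> bool:
    """Gate file uploads behind basic account reputation.

    Fresh accounts are the cheapest thing for an abuser to create, so
    requiring a little history before allowing uploads raises the cost
    of using the forum to share illegal images.
    """
    now = now or datetime.now(timezone.utc)
    if user.is_trusted:
        return True
    old_enough = now - user.created_at >= MIN_ACCOUNT_AGE
    active_enough = user.approved_post_count >= MIN_APPROVED_POSTS
    return old_enough and active_enough

# A brand-new account can't upload until it builds some history:
u = User(created_at=datetime.now(timezone.utc), approved_post_count=0)
print(can_upload_files(u))  # False
```

The point isn't the specific numbers; it's that even a crude reputation gate like this removes the easiest drive-by abuse path.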