Comment by kevingadd
1 year ago
Only one data point, but fwiw: when I worked for Google I found some actively toxic YouTube content w/ upwards of 500k views that was telling children to off themselves. Despite using my employee back-channel connections, the most I was able to get was an eventual "I'm not allowed to do anything about this" from a YouTube moderator, though it seemed to be for technical reasons (all the nasty content was in annotations, which apparently weren't wired into the moderation pipeline). There definitely wasn't a red button for me to hit as an employee to get it taken down.
That's 4chan levels of vile.
I ended up digging around on the channel and tracked it back to some people of exactly that type, and they had some other uploads basically gloating that the video was immune to moderation. It was a rip of the Undertale soundtrack, so laser-targeted at kids (if you're unfamiliar with Undertale, it's recognizable enough that one of its characters was added to one of Nintendo's games).
Sadly, if the Undertale soundtrack were aggressively Content ID'd/DMCA'd, that would have been a way to take it down. But that would penalize everyone who uploads footage of the game, so obviously that's not done.
Yet if the video had sampled Metallica for too long, it would have been removed and the feds would be at your door within minutes. Such is an algorithm tuned for ad revenue and lawsuits rather than for protection. The above story just confirms what the scammers in this video say about YouTube and wholesale content scamming with AI editing software.
https://youtu.be/ZMfk-zP4xr0?si=R3RxVJJ7WxhKDj_L