Comment by diacritical
5 days ago
In theory I'm OK (kinda) with regulating the "overall design" somehow, but I don't see how it would work in practice. Forced interoperability is a (very?) good idea since it's so general, but it also doesn't directly address what the article and most comments are talking about - the rage bait. I just can't imagine regulations (or "laws", or whatever the correct term is) that deal specifically with the algos that push rage bait and that couldn't, once passed, later be abused to deal with other unpopular speech. And it seems like people want laws that directly target exactly that - the bad kinds of speech, or the algos themselves.
To clarify, I'm using "rage bait" as a stand-in phrase: it also covers algos that promote engagement at any cost, and other things that aren't outright dangerous but that we consider harmful. Not, like I said, CSAM, yelling FIRE, or telling people to kill themselves.
Interoperability sidesteps the issue by giving users the choice of which algorithm (or algorithm provider) to use. The majority might or might not accept that approach - tobacco, for example, has obviously not been left purely to the individual's judgment in the West.
Agreed - you can't regulate speech in a targeted manner while also claiming not to regulate speech. You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
> You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
Can you elaborate, give some ideas, examples, etc.? What metrics? How can you define them in a consistent, safe way?
We're talking about generalized metrics here. I have no idea which ones specifically - I wasn't claiming to have solved the problem. The point is that if you can identify a general characteristic that is being used in a way that disproportionately contributes to a particular outcome, then you can filter on that.
Estimated user age is an example of a metric largely unrelated to free-speech concerns. I doubt it has much relevance to the problem we're talking about here, but hopefully you can see that prohibiting the targeting of ads, or the curation of an algorithmic feed, based on that metric would not be expected to unduly disadvantage any particular sort of speech.
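To make the idea concrete, here's a toy sketch (all names and structures are mine, purely illustrative - not any platform's actual ranking code) of what "prohibiting the targeting of certain metrics" could mean mechanically: the ranker strips the prohibited metrics from the user profile before scoring, so curation can't depend on them regardless of what the scoring logic would otherwise prefer.

```python
# Toy sketch, purely illustrative: a feed ranker that drops
# prohibited targeting metrics before scoring items.
PROHIBITED_METRICS = {"estimated_user_age"}  # the example metric from above

def score_item(item: dict, user_metrics: dict) -> float:
    # Remove any metric a regulation forbids targeting on,
    # so the score is computed without ever seeing it.
    allowed = {k: v for k, v in user_metrics.items()
               if k not in PROHIBITED_METRICS}
    # Placeholder scoring: overlap between item topics and user interests.
    interests = set(allowed.get("interests", []))
    topics = set(item.get("topics", []))
    return float(len(interests & topics))

def rank_feed(items: list[dict], user_metrics: dict) -> list[dict]:
    # Highest-scoring items first; prohibited metrics cannot
    # influence the ordering because score_item never sees them.
    return sorted(items, key=lambda it: score_item(it, user_metrics),
                  reverse=True)
```

The point of the structure is that the prohibition sits at the data boundary, not inside the scoring logic, so it applies uniformly and is easy to audit: the same feed comes out whether or not the prohibited metric is present in the profile.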