Comment by almosthere
1 day ago
Death threats mainly. Personally I think it would be easier if they just made it so that platforms ran a tiny LLM against the content that will be posted - determined if it is a death threat, then require them to be identified before it's posted, then it would solve a lot of these problems.
TL;DR: Only the bad actors get identified internally, not everyone.
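The proposed flow (classify the draft, then gate flagged posts behind identification) could be sketched like this. The classifier below is a trivial keyword stub standing in for the "tiny LLM"; the function names (`looks_like_death_threat`, `submit_post`) are made up for illustration, not any real platform API.

```python
def looks_like_death_threat(text: str) -> bool:
    """Placeholder for a small language-model classifier.

    A real deployment would call a lightweight model here; this stub
    just matches a few obvious phrases so the flow is runnable.
    """
    triggers = ("i will kill", "you're dead", "going to kill you")
    lowered = text.lower()
    return any(t in lowered for t in triggers)


def submit_post(text: str, user_identified: bool) -> str:
    """Apply the proposed gate: flagged content requires identification."""
    if looks_like_death_threat(text) and not user_identified:
        return "blocked: identification required before posting"
    return "posted"


print(submit_post("nice weather today", user_identified=False))
print(submit_post("I will kill you", user_identified=False))
print(submit_post("I will kill you", user_identified=True))
```

The point of the sketch is that the gate only adds friction for flagged content; everyone else posts as usual.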
That turns jokes into contracts that nobody wants. Bad idea.
Maybe just don’t make “jokes” like that.
I don't make such "jokes". Idiots do.
And when the idiots do, the proposed system locks the fire door on them. That's just dangerous. We'd rather slow them down with a bunch of confusing options and better-illuminated de-escalation paths.
a "tiny large language model"? lol
See https://tinyllm.org
These days the name "LLM" refers more to the architecture & usage patterns than it does to the size of model (though to be fair, even the "tiny" LLMs are huge compared to any models from 10+ years ago, so it's all relative).
Yeah, a small one, because it has to be cheap when they'll be processing billions of messages per year.
Good thing all the kind people doing death threats won’t just bypass it?