Comment by cthor
5 years ago
There's no reason the number of humans dealing with these problems can't scale alongside the number of humans creating them.
But it's a lot cheaper to pay for a few really expensive programmers to make a just-good-enough AI than to pay for thousands of human moderators. So we end up with a stupid computer creating tonnes of human misery all for the sake of FAANG's already fat profit margins.
> So we end up with a stupid computer creating tonnes of human misery all for the sake of FAANG's already fat profit margins.
I don't want to blame this entirely on the big companies, though. People also want and expect "free" things on the internet; that's how we ended up here.
> There's no reason the number of humans dealing with these problems can't scale alongside the number of humans creating them.
I would think the attackers are using automation too, spamming attacks at scale as in other areas of fraud. Ultimately it can only be a battle of AI versus AI.
Depends on which problem you're tackling. With app reviews, for example, it is very easy to rate limit the 100 USD developer licenses. And in cases like the one the Medium article describes, businesses would gladly pay a hundred bucks to get real humans to produce competent answers/reviews/decisions. If you dislike this solution because it creates a Google tax (pay us or we'll block your site), make it not a service payment but a security deposit, which they only keep if you turn out to be fraudulent in some way.
Is it just me, or is human insight still the best line of defence, the way things are currently stacked? The OP and other anecdotes in the comments are examples of why we're not quite at "AI vs. AI" yet.