Comment by p410n3

5 years ago

These are exactly the cases that worry me. ML/AI is not ready to be used like that. I don't know if it ever will be, but it's already being used in production anyway.

It reminds me of when powerful institutions treat lie detectors or facial recognition systems as infallible.

Worse than that, these systems are perfect for decision laundering. You can make the system render arbitrary judgments, then blame any negative consequences on "bias in the training data" or the like.

regex != ML

They've applied ML to distinguish status updates from emails. They've applied ML to recognize speech fairly accurately... This kind of behavior seems far too unsophisticated for that. In the Twitter thread, some people are suggesting it has something to do with politics. If so, it's more likely a hands-on-keyboard, finger-on-the-scales situation caused by a human than anything a model did on its own.