Comment by hinkley

5 years ago

It's interesting to me how Bloom filters avoid the uncanny valley between probably correct and definitely correct. I don't know whether this is a technological difference between problem domains or purely a difference of ideology/mindset.

Dividing a problem by 10 should get notice. By 100 (e.g., Bloom filters), respect. By 1000, accolades. Dividing a problem by infinity should be recognized for what it is: a logic error, not an accomplishment.
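To make the "probably correct" point concrete, here's a minimal Bloom filter sketch. The sizes, hash scheme, and names are illustrative choices, not anything from the comment above; the essential property is that a negative answer is definite while a positive answer is only probable.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: answers 'definitely not present' or 'probably present'."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions from one digest (an illustrative hashing scheme).
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.num_hashes):
            chunk = digest[i * 4:(i + 1) * 4]
            yield int.from_bytes(chunk, "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False => definitely absent; True => probably present (false positives possible).
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("seen-before@example.com")
print(bf.might_contain("seen-before@example.com"))  # True
print(bf.might_contain("never-seen@example.com"))   # almost certainly False
```

Tuning `num_bits` and `num_hashes` trades memory for false-positive rate, which is exactly the "divide the problem by 100" knob: most lookups are resolved cheaply, and only the probable positives need the expensive definite check.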

Most of the time, when I'm trying to learn someone else's process instead of dictating my own, I'm creating lists of situations where the outcomes are not good. When I have a 'class', I run it up the chain with a counter-proposal for a different solution, which hopefully becomes the new policy. Usually that new policy has a probationary period, and then it sticks. Unless it's unpopular, in which case it gets stuck in permanent probation and I may have to formally justify my recommendation, repeatedly. In the meantime I have a lot of information queued up waiting for a tweak to the decision tree. We don't seem to be mimicking that model with automated systems, which I think is a huge mistake, one now verging on a self-inflicted wound.

Perhaps stated another way: classifying a piece of data should result in many more actions than are visible to the customer, and only a few classifications should trigger a fully automated action. The rest should organize the data to expedite human intervention, by priority or by bucket. I could have someone spend Tuesday afternoons granting final dispensations on credit card fraud, and every morning looking at threats of legal action (priority and bucket, respectively).
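That routing idea can be sketched as a small dispatcher. All class names, buckets, and priorities here are hypothetical stand-ins, assuming the policy described above: a few classifications act automatically, everything else lands in a prioritized human queue.

```python
from collections import defaultdict

# Hypothetical routing policy; class names and buckets are illustrative.
AUTO_ACTIONS = {"confirmed_spam"}             # only a few classes fully automate
HUMAN_QUEUES = {
    "legal_threat": ("legal", 0),             # (bucket, priority: lower = sooner)
    "suspected_fraud": ("fraud", 1),          # e.g. the Tuesday-afternoon review
}
DEFAULT_QUEUE = ("general", 2)

def route(classification, item, queues):
    """Route one classified item: automate the few, queue the rest for humans."""
    if classification in AUTO_ACTIONS:
        return f"auto-handled: {item}"
    bucket, priority = HUMAN_QUEUES.get(classification, DEFAULT_QUEUE)
    queues[bucket].append((priority, item))
    return f"queued to {bucket} (priority {priority})"

queues = defaultdict(list)
print(route("confirmed_spam", "msg-1", queues))   # auto-handled: msg-1
print(route("legal_threat", "msg-2", queues))     # queued to legal (priority 0)
print(route("suspected_fraud", "msg-3", queues))  # queued to fraud (priority 1)
```

The point of the structure is that adding a new 'class' is a one-line policy change, so the decision tree can be tweaked as the queued-up information accumulates.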