Comment by HDBaseT
14 hours ago
I don't think people on HN believe "AI is infallible"; I think they believe AI is sufficient for "most tasks". In the context of HN, "most tasks" means programming tasks, not arresting-and-jailing-people tasks.
You should always validate the results, but there is an inherent difference between an AI-generated tool for personal use and a tool that could be used to destroy someone's life.
The problem is that the people who will put this in place rate capability on a linear scale: in their view, the ability to write software is sufficiently magic that such an ability must obviously be good enough to recognize criminals. From their perspective, there are hurdles to be crossed (like probable cause), and an AI flagging a suspect feels like a magical intelligence clearing those hurdles and allowing them to continue the process.
They don't validate the results of their fellow officers, or the validity of warrants, or anything else that predicates an arrest. Why would they start with this?
What about cops and legislators? They think AI is infallible, and that's very convenient for them: it means they never have to mandate that cops double-check what the AI suggests.