Comment by eesmith

9 hours ago

Determining where the cameras are placed and what to alert on are also important and unresolved issues.

Simply getting alerts from a camera can cause people to believe that the area is a high-crime area, when it's merely a consequence of having a camera there.

Poor people are more likely to be in public areas than rich pedophiles who can buy an island or ranch so they and their friends can enjoy wonderful secrets out of the eye of any Flock camera.

If the camera alerts on AI facial recognition matches for wanted criminals, and facial recognition produces disproportionately more false alerts for people of South Asian heritage than of Anglo-Norman heritage, then systemic racism is built into the system, which we should all mind.

I'm not talking about monitoring public spaces or searching for criminals. I don't want either of those things and I'm generally opposed to the government operating cameras. I just don't mind private businesses using them to support their existing security guards so long as they don't mishandle or abuse the data.

I'd even be in favor of entirely banning the use of facial recognition technology in conjunction with security cameras. Have them alert on concrete suspicious activity.

  • I took your list ("The issues are internet connectivity, data retention/mining/sale, and non-local processing") as being incomplete. The examples I gave were meant to illustrate additional issues. There are private-business equivalents for each of my examples, even putting recognition systems to the side.

    I personally have noticed that "alert" and "suspicious" tend to mean "something unusual", not "something illegal". Increasing alerts results in forced normality.

    On the flip side, if the information is there and not used, the security guards get blamed for not connecting the dots, so investigating alerts becomes a CYA task.

    As an example, security guards have harassed people on public sidewalks who are legally taking pictures of the building they are guarding. The guards are incentivized to investigate the alert, face no consequences for a false alert (so long as the harassment doesn't itself break the law), and risk losing their job if the photographs are used for nefarious purposes. Adding air-gapped AI may help the security guards while increasing the amount of harassment.

    Yes, I have had a security guard stand over me while I deleted a photograph I took of a building while in a public park. I think I was not legally required to follow the request, but I wasn't going to risk escalating the confrontation over a picture of a neat-looking gargoyle. No, I don't want AI enabling more of that harassment.