Comment by dogman144

2 days ago

Was fortunate to talk to a security lead who built the data-driven policing network for a major American city that was an early adopter. ALPR vendors like Flock heavily augment and/or anchor these tech setups.

What was notable to me is the following, and it’s why I think a career spent either doing security research on these vendors, or going to law school and suing them into the ground over 20 years, would be the ultimate act of civil service:

1. It’s not just Flock cams. It’s the data engineering feeding these networks: 18-wheeler fleet cams, Flock cams, retail users’ Nest cams, traffic cams, ISP data sales.

2. All of it in one hub, all searchable by your local PD, and also by the local PD across state lines that doesn’t like your abortion/marijuana/gun/whatever laws, and relying on:

3. The PD to set up and maintain proper RBAC in a nationwide surveillance network that is 100%, for sure, no doubt about it (wait, how did that Texas cop track the abortion into Indiana/Illinois…?), configured for least privilege.

4. Or if the town doesn’t want Flock, the PD reinstalls cameras against the ruling (Illinois, iirc?) or just says “we already have the feeds from the DoT cameras in/out of town and the truckers coming through, so might as well give the PD control over them!”

Layer the above onto the current trend in the US, plus a 2025-model Nissan uploading stop-by-stop geolocation and telematics to the cloud (then sold into Flock? Does knowing for sure whether it does or doesn’t even matter?).

Very bad line of companies. Again, all of this is from primary sources who helped implement it over the years. If you spend enough time at cybersecurity conferences you’ll meet people with these jobs.

As someone who has thought about, planned, and implemented a lot of RBAC... I would never trust the security of a system with RBAC at that level.

And to elaborate on that: for RBAC to have properly defined roles for the right people, and to ensure nobody has access to anything they shouldn't, you need to know exactly which user has which access. And I mean all of them. Full stop. I don't think I'm being hyperbolic here. Everyone's needs are so different, and the risk associated with overprovisioning a role is too high.

When it's every LEO at the national level, that's way too many people. It is pretty much impossible without dedicated people whose job it is to constantly audit that access. And I guarantee no institution or corporation would ever create a role for that position.
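To make the audit burden concrete, here's a toy sketch (every name and number here is hypothetical, not how Flock or any PD actually works) of the kind of check that dedicated auditor would have to run constantly: diff what each user was granted against what they actually exercised.

```python
from datetime import datetime

# Toy model: what each user was granted, and when they last
# actually exercised each permission.
grants = {
    "officer_a": {"alpr_search", "case_notes", "export_feeds"},
    "officer_b": {"alpr_search"},
}
access_log = {
    "officer_a": {"alpr_search": datetime(2025, 6, 1)},
    "officer_b": {"alpr_search": datetime(2025, 6, 10)},
}

def audit(grants, access_log, stale_after_days=90, now=None):
    """Flag grants that were never used, or not used recently."""
    now = now or datetime.now()
    findings = {}
    for user, perms in grants.items():
        used = access_log.get(user, {})
        # Granted but never exercised at all:
        never_used = {p for p in perms if p not in used}
        # Exercised once upon a time, but idle past the threshold:
        stale = {p for p, last in used.items()
                 if p in perms and (now - last).days > stale_after_days}
        if never_used or stale:
            findings[user] = {"never_used": never_used, "stale": stale}
    return findings
```

Even this toy version only works if the access log is complete and someone actually reads the findings. Scale the dicts up to every LEO in the country and it's a full-time job, forever.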

I'm not even going to lean into the trustworthiness and computer literacy of those users.

And that's just talking about auditing roles, never mind the constant bug fixes/additions/reductions to the implementation. It's a nightmare.

Funny enough, just this past week I was looking at how my company's roles are defined in the admin for a thing I was working on. It's a complete mess and roles are definitely overprovisioned. The difference is it's a low-stakes admin app with only ~150 corporate employees who access it. And there were only like 8 roles!

Every time you add a different role, assign it to each different feature, and then give that role to a different user, it compounds.
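A back-of-the-envelope sketch of that compounding (all numbers hypothetical): every user-to-role assignment and every role-to-feature grant is an edge an auditor has to reason about, and the "who can touch what" question is the product of the two.

```python
# Toy arithmetic: the audit surface of an RBAC system grows with
# users * roles-per-user * features-per-role, not with any one factor.
def audit_surface(users, avg_roles_per_user, avg_features_per_role):
    user_role_assignments = users * avg_roles_per_user
    # Every user -> role -> feature path is a potential access an
    # auditor has to confirm is actually justified.
    access_paths = users * avg_roles_per_user * avg_features_per_role
    return user_role_assignments, access_paths

# A ~150-employee internal admin app with 8-ish roles:
small = audit_surface(150, 2, 10)
# A nationwide LE hub is orders of magnitude past that:
big = audit_surface(300_000, 3, 50)
```

With these made-up but plausible-shaped numbers, the small app is ~300 assignments and ~3,000 paths; the nationwide case is ~900,000 assignments and ~45 million paths. Nobody is hand-auditing 45 million paths.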

I took your comment at face value, but I hope to god that Flock at least has some sort of data/application partitioning that would make overprovisioning roles impossible. Was your Texas cop tracking an abortion a real example? Because that would be bad. So, so bad.

  • It always starts with "we just give developers in a project access to things in that project and it'll all be nice and secure; we'll also have a separate role for deploys so only Senior Competent People can do them."

    Then the Senior Competent Person goes on vacation and some junior needs to run a deploy, so they get the role.

    Then another project needs a dev from a different project to help them out.

    Then some random person needs something that has no role for it, so they "temporarily" get some role unrelated to their job.

    Then the project changes managers, but the old one keeps access for the transition.

    And nobody ever makes a ticket to rescind that access.

    And everything is a mess.

    • ...and "the fix" that companies usually resort to is "use it or lose it" policies (e.g. you lose your role/permission after 30 days of non-use). So if you only do deployments for any given thing like twice a year, you end up having to submit a permissions request every single time.

      No big deal, right? Until something breaks in production and now you have to wait for multiple approvals before you can even begin to troubleshoot. "I guess it'll have to stay down until tomorrow."

      The way systems like this usually get implemented is there's an approval chain: First, your boss must approve the request and then the owner of the resource. Except that's only the most basic case. For production systems, you'll often have a much more complicated approval chain where your boss is just one of many individuals that need to approve such requests.

      The end result is a (compounding) inefficiency that slows down everything.

      Then there's AI: Management wants to automate as much as possible—which is a fine thing and entirely doable!—except you have this system where making changes requires approvals at many steps. So you actually can't "automate all the things" because the policy prevents it.


This is the part that doesn’t get enough attention. The real risk isn’t any single vendor, it’s the aggregation layer. Once ALPR, retail cams, traffic cams, ISP data, and vehicle telematics all land in one searchable system, the idea that this will be perfectly RBAC’d and jurisdictionally contained is fantasy. At that point it’s not policing tech, it’s a nationwide surveillance substrate held together by policy promises.

  • I’ve been in security for a while, and I increasingly think understanding what the future looks like under this threat model is about the only security research that truly matters above the rest (many other topics are also very important in their own ways).

    The state change is just so significant and so under-discussed, because you only learn about it by making an effort in a cybersec career, hitting conferences every year, eventually lucking out with who you meet for a beer, and so on.

    So how do policy leaders trying to understand this stand a chance? How do local PD chiefs, who I really do believe deserve the benefit of the doubt wrt positive intentions, understand what they’re bringing in?

    There is really no counter-voice to an incredibly capable nationwide surveillance network that’s been around for at least 10-15 years. The EFF doesn’t really count: the EFF complains about these things, Sen. Wyden writes a memo, and that seems to be the accepted scope of the work.

    Just like man… the bill of rights… it’s a thing! Insane technology.

The problem goes even deeper than messy RBAC in a database. This story showed that the system's brains are pushed to the edge: if you gain access to the device, you don't even need the central police database. You get a local, highly intelligent agent working autonomously. This breaks the traditional threat model where we worry about "someone leaking the database"; here, the camera itself becomes an active reconnaissance tool. It turns out that instead of hacking a complex, (hopefully) secured cloud, you just need to find a smart eye like this with default settings, and you already have a personal spy at an intersection, bypassing any police access protocols.

Now you have scale, with AI hardware becoming cheaper and software incentives aligning.

  • I always thought that show "Person of Interest" was a bit far-fetched. How could one system have access to that much data? Privacy concerns would surely stop it.

    • You'd think so, but every time a crime is solved by Flock or the like, people keep celebrating it and using it as justification.

      It reminds me of this meme: https://www.reddit.com/r/Cyberpunk/comments/sa0eh3/dont_crea...

      There are a few reasons people probably keep building in this space: 1. Eventually someone will do this anyway. 2. Thus, it shall be mine: I, for sure, will handle the data better than anyone else could, respecting all sorts of guardrails, etc. 3. The company IPOs, the founder leaves, things happen.

I will offer an alternative POV: if your big brilliant plan is to sue elected institutions over administrative decisions, don’t go to law school. It would be a colossal waste of your time. You will lose, even if you “win.”

You are advocating that talented people go for Willits as a blueprint of “civil service,” which is a terrible idea. It’s the worst idea.

If you have a strong opinion about administrative decisions, get elected, or work for someone who wins elections.

Or make a better technology. Talented people should be working on Project Longfellow for everything. Not, and I can’t believe I have to say this, becoming lawyers.

And by the way, Flock is installed in cities run by Democrats and Republicans alike, which should inform you that this guy is indicting civil servants, not advocating for their elevation to some valued priesthood protecting civil rights.

  • https://www.opensecrets.org/federal-lobbying/clients/lobbyis...

    Do you mean these fine former civil servants simply making administrative decisions who are now Flock lobbyists, or do you mean current civil servants who are future Flock lobbyists?

    You are more likely getting paid to not understand things if, in 2025, you believe a "bipartisan consensus" with massive donor-class overlap is credible to anyone without an emotional need to rationalize it.