Comment by photonthug
2 years ago
> I think the AI-model-as-a-service is actually a great use case.
It's a good and natural use case, but a use case won't necessarily make a market.
Similar to the sibling comment that mentions voting: it's hard to get excited about new maths allegedly fixing problems in places where we already have fixes that are ignored. If the simpler fixes aren't acceptable to the incumbents, they'll naturally rule out bigger and better fixes for the same reasons, and won't even bother to explain themselves to the public. (Do we need cryptographically modern voting when we can't even agree to fix stuff like gerrymandering?)
As an example, just looking at security/compliance as an industry, you'd think people care about things like "verifiably correct", and yet so much of it is just theater (self-attestation and other pinky-promises). Similarly for most B2B contracts that involve data-sharing and "do not JOIN with .." clauses. That stuff exists so that outfits like Facebook can disavow bad behaviour coming from third parties, but it's behaviour they don't actually want to stop, because that's the whole business model. Corporations like the theater we have. And even if it's expensive (contract lawyers, compliance experts), they like that too, because it's part of their moat, as long as it doesn't truly impact operations.
If FHE were going to fix things later, once it's matured, I would expect people today to care more about things like certified, legally actionable traces for data lineage. (Having at least primitive lineage in place is already a cost of doing business, because otherwise you can't reliably work with tons of diverse inputs across tons of diverse models for training. And yet officially Facebook [doesn't know what happens with your data](https://www.vice.com/en/article/akvmke/facebook-doesnt-know-...), and the world seems to have basically accepted that answer.)
> You want to use their AI model but you don't trust them to not train on your data so you don't want to send your data to them. They don't trust you enough to send you their models.
Basically, this facilitates trust between competitors? It's an interesting idea, but I'm skeptical. It seems like Walmart will keep using Microsoft's or Google's cloud just because they hate Amazon and don't want to arm the enemy with cash, not because they don't trust the enemy with information. Similarly for, say, American vs Chinese state interests: fixing trust completely won't make it OK to outsource compute, because regardless of the information, they don't even want the money moving that way.
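To be fair, the mechanics of the quoted use case are real enough. Here's a minimal sketch of encrypted inference using OpenMined's TenSEAL library (CKKS scheme); the encryption parameters, model weights, and feature vector are all illustrative assumptions, not anything from a production setup:

```python
import tenseal as ts  # OpenMined's TenSEAL, a wrapper around Microsoft SEAL

# Client side: generate CKKS keys; the secret key never leaves the client.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # needed for rotations inside dot products

# Client encrypts its private feature vector and ships only ciphertext.
features = [0.5, 1.2, -0.3, 2.0]  # illustrative data
enc_features = ts.ckks_vector(context, features)

# Server side: evaluates its plaintext linear model directly on the
# ciphertext. The server never sees the client's data in the clear, and
# the client never sees the raw weights if the server keeps them private.
weights = [0.25, -0.5, 0.75, 0.1]  # illustrative model
bias = 0.05
enc_score = enc_features.dot(weights) + [bias]  # plain, one-slot addend

# Client side: only the secret-key holder can decrypt the score.
score = enc_score.decrypt()[0]
print(f"model output: {score:.4f}")
```

(In a real deployment the client would serialize a public copy of the context, with the secret key stripped, for the server; this sketch runs both sides in one process for brevity.) So the tech works for simple models; my skepticism is about who actually wants to buy it.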
Setting aside direct competitors, maybe it's a credit/insurance company with private records and a vendor like Amazon with trained models? In this case they aren't direct competitors, just client/vendor. No one in this arrangement really cares about the privacy of consumers, so a pinky-promise is fine. Any fuck-ups that end in leaks, and both parties have PR ass-coverage because they just blame the other guy. If anyone pays fines, no one cares, because it's less than the cost of doing this work any other way.
Thinking more about this, maybe I can imagine a real market for FHE in healthcare, because even the giants of surveillance capitalism can agree on both parts of this: they selfishly want their own privacy here, and they also stand to benefit directly from making research on aggregates possible at scale.
Besides healthcare, I'm cynical: cloud companies probably want FHE everywhere so they can sell more compute, and maybe it'll be even more compute-hungry than blockchain/AI. As much as I like the idea of seeing Amazon and Facebook lobbyists fist-fighting each other for the amusement of Congress, maybe we should try simple solutions, like basic laws and enforcement of those laws, before we try redistributing cash from ad-tech to hardware-mongers.