Comment by photonthug

2 years ago

I’ve been trying to puzzle out the market for this kind of technology.

Making truly private data actually useful while retaining its privacy would of course be incredible, and the use cases are obvious. But the people who have big data are generally not interested in privacy at all.

Academic interest in FHE is understandable since it's fascinating, but why does industry care? Is this just hedging their bets for if/when the future brings stronger data regulation?

My undergraduate thesis project attempted to use homomorphic encryption to create a voting system where any party could verify the final tally while preserving ballot secrecy.

  • Just like in other industries - are politicians generally interested in changing a status quo that already works nicely for them? Based on the history of auditable voting systems in real-world elections, it feels like barely anyone cares.

    • Yeah, probably not. There are many established players in the game already, and either way, adding fancy technology isn't going to make the general public trust the system more.

there are a lot of hard security problems that become easy if you can introduce an incorruptible 'trusted third party'. often the computations involved are pretty trivial and involve tiny amounts of data

want to find out which of your friends have secret crushes on you? you all tell trent, and then he tells you which of your crushes also have crushes on you, and also tells them
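
a minimal sketch of trent's job here, in python (all names and data made up; trent is just a function the players trust with their inputs):

```python
# trent, the trusted third party: everyone privately sends him their
# crush list, and he reveals only the mutual pairs
def mutual_crushes(crushes: dict[str, set[str]]) -> set[frozenset]:
    return {
        frozenset((a, b))
        for a, targets in crushes.items()
        for b in targets
        if a in crushes.get(b, set())
    }

# alice and bob like each other; carol's crush on alice stays secret
print(mutual_crushes({
    "alice": {"bob"},
    "bob": {"alice"},
    "carol": {"alice"},
}))  # {frozenset({'alice', 'bob'})}
```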

want digital money without double-spending, privacy invasion, or money supply inflation? just let trent keep track of what everyone's balance is. to pay someone, tell trent how much you want to pay them, and he'll decrease your balance and increase theirs. trent promises not to increase anyone's balance without decreasing somebody else's by the same amount
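
again as a made-up python sketch, trent-as-ledger conserves the money supply by construction:

```python
# trent keeps every balance; money only ever moves between accounts
class Trent:
    def __init__(self, balances: dict[str, int]):
        self.balances = dict(balances)

    def pay(self, payer: str, payee: str, amount: int) -> None:
        if amount <= 0 or self.balances[payer] < amount:
            raise ValueError("invalid or unfunded transfer")
        self.balances[payer] -= amount  # decrease one balance...
        self.balances[payee] += amount  # ...increase the other by the same

trent = Trent({"alice": 100, "bob": 0})
trent.pay("alice", "bob", 30)
print(trent.balances)  # {'alice': 70, 'bob': 30}
```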

want to buy a good at the welfare-maximizing price? use a second-price auction: everyone privately tells trent their bid, and trent announces the winner (the person who bid the highest) and the price (the highest losing bid). that way bidding lower never gets someone the same good at a lower price; it just decreases their chance of winning
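
the same kind of sketch for the auction (hypothetical names; at least two bidders assumed):

```python
# sealed-bid second-price (vickrey) auction: the highest bidder wins,
# but pays the highest *losing* bid, so honest bidding is optimal
def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

print(second_price_auction({"alice": 10, "bob": 25, "carol": 18}))
# ('bob', 18): bob wins but pays carol's bid, not his own
```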

want to play poker with some people you don't trust? trent mentally shuffles the deck and tells you what cards he's dealt you (and what other cards are visible, in some variants). at the end of the round, he announces what cards were in every hand that didn't fold, and who won
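
one more made-up sketch; trent deals, and each player sees only their own hand until showdown:

```python
import random

# trent 'mentally' shuffles a 52-card deck and deals hole cards
def deal(players: list[str], hand_size: int = 2) -> dict[str, list[str]]:
    deck = [rank + suit for rank in "23456789TJQKA" for suit in "shdc"]
    random.shuffle(deck)
    return {p: [deck.pop() for _ in range(hand_size)] for p in players}

hands = deal(["alice", "bob", "carol"])
print(hands["alice"])  # alice learns only her own cards
```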

there are an infinite number of problems like this

the trouble with trent is that any human 'trent' is corruptible, and trusting them with secrets gives them more power, and absolute power corrupts absolutely. the humans faff around with checks and balances and institutions and oaths and whatnot but they're pretty fragile. cryptographers have often designed clever protocols which solve individual problems from this infinite set, and some of them are secure

fhe makes it possible to construct an incorruptible 'trent': everybody can see the program, everybody can verify the operation of the program step by step, but nobody can see the data. it's almost a fully general cryptographic protocol design primitive
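
you can get a feel for the 'public program, private data' shape with an additively homomorphic scheme like paillier (not full fhe; this uses the python-paillier 'phe' package as a stand-in):

```python
from phe import paillier  # pip install phe; additively homomorphic only

pub, priv = paillier.generate_paillier_keypair()

# the 'program' is public: add two numbers. the data never is.
a, b = pub.encrypt(2), pub.encrypt(3)
c = a + b               # anyone can run this step on ciphertexts
print(priv.decrypt(c))  # 5: only the key holder learns the answer
```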

i forget who it was that explained this to me. i thought it was nick szabo but i can't find the essay now

I think the AI-model-as-a-service is actually a great use case.

You want to use their AI model, but you don't trust them not to train on your data, so you don't want to send your data to them. They don't trust you enough to send you their models.

So you encrypt your data, they compute on the ciphertext, and they send you the encrypted result back.
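
A toy version of that round trip, using the additively homomorphic python-paillier (`phe`) library as a stand-in for full FHE, with a linear model playing the part of the "AI model" (names and numbers made up; real FHE inference would use a lattice scheme like CKKS, but the shape is the same):

```python
from phe import paillier  # pip install phe

# client side: encrypt the features, keep the private key
pub, priv = paillier.generate_paillier_keypair()
features = [1.0, 4.0, -2.0]
enc_features = [pub.encrypt(x) for x in features]

# server side: sees only ciphertexts, applies its plaintext model.
# w . x + b needs only ciphertext * scalar and ciphertext + ciphertext,
# both of which additive homomorphism allows
weights, bias = [0.5, -1.0, 2.0], 0.25
enc_score = sum(w * x for w, x in zip(weights, enc_features)) + bias

# client side: decrypt the result; the server never saw the features
print(priv.decrypt(enc_score))  # 0.5*1 - 1*4 + 2*(-2) + 0.25 = -7.25
```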

The only problem is that you have turned an expensive computation into one that is orders of magnitude more expensive.

  • > I think the AI-model-as-a-service is actually a great use case.

    It's a good and natural use-case, but a use-case won't necessarily make a market.

    Similar to the sibling comment that mentions voting. It's hard to get excited about new maths allegedly fixing problems in places where we have existing fixes that are ignored. If the simpler fixes aren't acceptable to the incumbents, naturally they'll just rule out bigger and better fixes for the same reasons and won't even bother to explain themselves to the public. (Do we need cryptographically modern voting when we can't even agree to fix stuff like gerrymandering?)

    As an example, just looking at security/compliance as an industry, you'd think that people care about things like "verifiably correct" and yet there's so much of it that is just theater (self-attestation and other pinky-promises). Similarly for most B2B contracts that involve data-sharing and "do not JOIN with .." clauses. That stuff just happens so that outfits like Facebook can disavow any bad behaviour coming from 3rd parties, but it's behaviour they don't actually want to stop because that's the whole business model. Corporations like the theater that we have. And even if it's expensive (contract lawyers, compliance experts), they like that too because it's part of their moat, as long as it doesn't truly impact anything about operations.

    If FHE were going to fix things later when it's matured, I would expect people today to care more about things like certified, legally actionable traces for data-lineage. (Having at least primitive lineage in place is already a cost of doing business, because otherwise you can't reliably work with tons of diverse inputs on tons of diverse models for training. And yet officially facebook [doesnt know what happens with your data](https://www.vice.com/en/article/akvmke/facebook-doesnt-know-...), and the world seems to have basically accepted that answer.)

    > You want to use their AI model but you don't trust them to not train on your data so you don't want to send your data to them. They don't trust you enough to send you their models.

    Basically saying this facilitates trust between competitors? It's an interesting idea, but I'm skeptical. Seems like Walmart will keep using Microsoft or Google's cloud just because they hate Amazon and don't want to arm the enemy with cash, not because they don't trust the enemy with information. Similarly for, say, American vs Chinese state interests: fixing trust completely won't make it OK to outsource compute, because regardless of the information, they don't even want the money moving that way.

    Setting aside direct competitors, maybe it's a credit/insurance company with private records, and a vendor like Amazon with trained models? In this case they aren't direct competitors, just client and vendor. No one in this arrangement really cares about the privacy of consumers, so a pinky-promise is fine. Any fuck-ups that end in leaks, and both parties have PR ass-coverage because they just blame the other guy. If anyone pays fines, no one cares, because it's less than the cost of doing this work any other way.

    Thinking more about this, maybe I can imagine a real market for FHE in healthcare, because even the giants of surveillance capitalism can agree about both parts of this: they selfishly want their own privacy here, and they also stand to directly benefit from making research on aggregates easier at scale.

    Besides healthcare I'm cynical; probably cloud companies want FHE everywhere so they can sell more compute, and maybe it'll be even more compute-hungry than blockchain/AI. As much as I like the idea of seeing Amazon/Facebook lobbyists fist-fighting each other for the amusement of Congress, maybe we should try simple solutions like basic laws, and enforcement of those laws, before we try redistributing cash from ad-tech to hardware-mongers.

FHE is something of a spiritual predecessor to e2e encryption. Imagine a VT100 fitted with an FHE hardware accelerator, hooked up to an AT&T phone line, that can look up phone books and your chat inbox in seconds. FHE also has constant execution time by nature, so it's better suited to realtime systems like the good old landline phone network.

It also looks like it's mathematically related to Diffie–Hellman key exchange? So it might yield something in that direction in the long term.

People are (very) slowly realizing that plastering their data all over third-party clouds is dangerous for their own privacy and safety. Hopefully we can make data breaches existentially costly for companies so they stop fucking around and take privacy seriously. People want FHE, they just don't know it yet. It solves so many problems, and when it works really well, it will become a selling point for customers. Companies that don't take data privacy seriously will rot.