I say this as a lover of FHE and the wonderful cryptography around it:
While it’s true that FHE schemes continue to get faster, they don’t really have hope of being comparable to plaintext speeds as long as they rely on bootstrapping. For deep, fundamental reasons, bootstrapping isn’t likely to ever be less than ~1000x overhead.
When folks realized they couldn’t speed up bootstrapping much more, they started talking about hardware acceleration, but it’s a tough sell at a time when every last drop of compute is going into LLMs. What $/token cost increase would folks pay for computation under FHE? Unless it’s >1000x, it’s really pretty grim.
For anything like private LLM inference, confidential computing approaches are really the only feasible option. I don’t like trusting hardware, but it’s the best we’ve got!
There is an even more fundamental reason why FHE cannot realistically be used for arbitrary computation: some computations have much larger asymptotic complexity on encrypted data compared to plaintext.
A critical example is database search: searching through a database of n elements is normally done in O(log n), but it becomes O(n) when the search key is encrypted. This means that fully homomorphic Google search is fundamentally impractical, although the same cannot be said of fully homomorphic DNN inference.
There has actually been a theoretical breakthrough that makes search an O(log n) problem (https://eprint.iacr.org/2022/1703), but it is pretty impractical (and not getting much faster).
Even without bootstrapping, FHE will never be as fast as plaintext computation: the ciphertext is about three orders of magnitude larger than the plaintext data it encrypts, which means you need more memory bandwidth and more compute. You can’t bridge this gap.
Technically, there are rate-1 homomorphic encryption schemes, where ‘rate’ refers to the size ratio between the plaintext and the ciphertext. They’re not super practical, so your general point stands.
That actually sounds pretty reasonable and feels almost standard at this point?
To pick one out of a dozen possible examples: I regularly read 500 word news articles from 8mb web pages with autoplaying videos, analytics beacons, and JS sludge.
That’s about 3 orders of magnitude for data and 4-5 orders of magnitude for compute.
Don't you think there is a market for people who want services that have provable privacy even if it costs 1,000 times more? It's not as big a segment as Dropbox but I imagine it's there.
FHE solves privacy-from-compute-provider and doesn't affect any other privacy risks of the services. The trivial way to get privacy from the compute provider is to run that compute yourself - we delegate compute to cloud services for various reasonable efficiency and convenience reasons, but a 1000-fold less efficient cloud service usually isn't competitive with just getting a local device that can do that.
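???

For the equivalent of $500 in credit you could self host the entire thing!

If we are talking 1000x more latency, that is a pretty hard sell. Something that normally takes 30 seconds now takes over 8 hours.

For LLM inference, the market that will pay $20,000 for what is now $20 is tiny.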
there is, it's called governments. however this technology is so slow that using it in mission critical systems (think communication / coordinates during warfare) is not feasible IMO.
the parent post is right, confidential compute is really what we've got.
Honestly, no? Unless you get everyone using said services, a market used only by people trying to hide bad behavior becomes exactly the place you look for people doing bad things.
This is a large part of why you have to convince people to hide things even if "they have nothing to hide."
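For most this would mean only specially treating a subset of all the sensitive data they have.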
I get that there is a big LLM hype, but is there really no other application for FHE? Like for example trading algorithms (not the high speed ones) that you can host on random servers knowing your stuff will be safe, or something similar?
I speak as someone who used to build trading algorithms (not the high speed ones) for a living for several years, so I know that world pretty well. I highly doubt anyone who does that will host their stuff on random servers even if you had something like FHE. Why? Because it's not just the code that is confidential.
1) if you are a registered broker dealer you will just incur a massive amount of additional regulatory burden if you want to host this stuff in any sort of "random server"
2) Whoever you are, you need the pipe from your server to the exchange to be trustworthy, so no-one can MITM your connection and front-run your (client's) orders.
3) This is an industry where when people host servers in something like an exchange data center it's reasonably common to put them in a locked cage to ensure physical security. No-one is going to host on a server that could be physically compromised. Remember that big money is at stake and data center staff typically aren't well paid (compared to someone working for an IB or hedge fund), so social engineering would be very effective if someone wanted to compromise your servers.
4) Even if you are able to overcome #1 and are very confident about #2 and #3, even for slow market participants you need to have predictable latency in your execution or you will be eaten for breakfast by the fast players[1]. You won't want to be on a random server controlled by anyone else in case they suddenly do something that affects your latency.
[1] For example, we used to have quite slow execution ability compared with HFTs and people who were co-located at exchanges, so we used to introduce delays when we routed orders to multiple exchanges so the orders would arrive at their destinations at precisely the same time. Even though our execution latency was high, this meant no-one who was colocated at the exchange could see the order at one exchange and arb us at another exchange.
I encountered the situation where one company had the data, and considered this to be really valuable and did not want to show/share it.
Another company had a model, which was considered very valuable and did not want to show it. So they were stuck in a catch-22.
Eventually they solved the perceived risk via contracts, but it could have been solved technically if FHE were viable.
I think the only thing that could make FHE truly world-changing is if someone figures out how to implement something like multi-party garbled circuits under FHE where anyone can verify the output of functions over many hidden inputs since that opens up a realm of provably secure HSMs, voting schemes, etc.
I'd also like to comment on how everything used to be a PCIE expansion card.
Your GPU was, and we also used to have dedicated math coprocessor accelerators. Now most of the expansion card tech is all done by general purpose hardware, which while cheaper will never be as good as a custom dedicated silicon chip that's only focused on 1 task.
It's why I advocate for a separate ML/AI card instead of using GPUs. Sure, there is hardware architecture overlap, but you're sacrificing so much because your AI cards are founded on GPU hardware.
I'd argue the only real AI accelerators are something like what goes into modern SXM sockets. This ditches the power issues and opens up more bandwidth. However, only servers have SXM sockets... and those are not cheap.
> most of the expansion card tech is all done by general purpose hardware, which while cheaper will never be as good as a custom dedicated silicon chip that's only focused on 1 task
I think one reason they can be as good as or better than dedicated silicon is that they can be adjusted on the fly. If a hardware bug is found in your network chip, too bad. If one is found in your software emulation of a network chip, you can update it easily. What if a new network protocol comes along?
Don't forget the design, verification, mask production, and other one-time costs of making a new type of chip are immense ($millions at least).
> It's why I advocate for a separate ML/AI card instead of using GPUs. Sure, there is hardware architecture overlap, but you're sacrificing so much because your AI cards are founded on GPU hardware.
I think you may have the wrong impression of what modern GPUs are like. They may be descended from graphics cards, but today they are designed fully with the AI market in mind. And they are designed to strike an optimal balance between fixed functionality for super-efficient calculations that we believe AI will always need, and programmability to allow innovation in algorithms. Anything more fixed would be unviable immediately because AI would have moved on by the time it could hit the market (and anything less fixed would be too slow).
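Thx! I'm curious about your thoughts...

- FHE for classic key-value stores and simple SQL database tables?

- the author's argument that FHE is experiencing accelerated Moore's law, and therefore will close the 1000x gap quickly?

Thx!

From your perspective: which FHE is actually usable? Or is only PHE actually usable?

Interesting! Can you provide some sources for this claim?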
FHE is an important tool because right now companies can be coerced by governments to break encryption for specific targets. FHE removes the need for companies to have a backbone; they can simply shrug and say "We literally do not see the plaintext, ever". They can kinda do this with end-to-end encryption when they're simply the network/carrier, but cannot currently do this anytime they're processing the plaintext data.
I come from a values basis that privacy is a human right, and that governments should be extremely limited in retaliatory powers against just and democratic uses of power against them (things like voting, arts, media, free speech, etc.).
FHE might allow arbitrary computation, but I use most services because they have some data I want to use: their search index, their knowledge, their database of chemicals, my bank account transactions, whatever.
So unless Google lets me encrypt their entire search index, they can still see my query at the time it interacts with the index, or else they cannot fulfill it.
The other point is incentives: outside of some very few, high-trust high-stakes applications, I don't see why companies would go through the trouble of offering FHE services.
Here's an implementation of a fully private search engine using FHE that allows querying Wikipedia with the server remaining oblivious as to what you're reading: https://spiralwiki.com/
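It’s ridiculously easier just to download a Wikipedia database dump and read it locally, with Kiwix: <https://kiwix.org/>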
From what I understand, only the sensitive data needs to be encrypted (e.g. your bank transactions). It is still possible to use public unencrypted data in the computation, as the function you want to compute doesn't have to be encrypted.
In a world where Target can figure out a woman is pregnant before she knows it herself due to her shopping habits, the line that separates sensitive data is pretty ambiguous.
Exactly what I thought. In the end it really isn't in most of the big corps' interest to not see your data/query. They need/want to see it, so why would they degrade their ability to do so when they can just say no and you will have to rely on using their services without FHE? For banking applications, cool; for everyone else, it's debatable whether it will ever be accepted.
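Would depend on market pressures, no?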
You're right about incentives, but wrong about the first part. Private lookups of a plaintext database are possible and have been for a while now (5+ years?). The problem is it often requires some nontrivial preprocessing of the plaintext database, or in the worst case a linear scan of the entire database.
> Private lookups of a plaintext database are possible and have been for a while now (5+ years?). The problem is it often requires some nontrivial preprocessing of the plaintext database, or in the worst case a linear scan of the entire database.
So that basically means that if a company has data that my program might want to use, the entirety of that data needs to be loaded into my program. Not quite feasible for something like the Google search index, which (afaik) doesn't even fit onto a single machine.
Also, while Google is fine with us doing searches, making the whole search index available to a homomorphic encrypted program is probably a quite different beast.
Which ultimately results in gigabytes of per-client-encrypted data needing to be downloaded, and regenerated and redownloaded every time the index is updated.
I get the "client side" of this equation; some number of users want to keep their actions/data private enough that they are willing to pay for it.
What I don't think they necessarily appreciate is how expensive that would be, and consequently how few people would sign up.
I'm not even assuming that the compute cost would be higher than currently. Let's leave aside the expected multiples in compute cost - although they won't help.
Assume, for example, a privacy-first Google replacement. What does that cost? (Google revenue is a good place to start that calculation.) Even if it was say $100 a year (hint: it's not), how many users would sign up for that? Some, sure, but a long long way away from a noticeable percentage.
Once we start adding zeros to that number (to cover the additional compute cost) it gets even lower.
While imperfect, things like Tor provide most of the benefit, and cost nothing. As an alternative it's an option.
I'm not saying that HE is useless. I'm saying it'll need to be paid for, and the numbers that will pay to play will be tiny.
An FHE Google today would be incredibly expensive and incredibly slow. No one would pay for it.
The key question I think is how much computing speed will improve in the future. If we assume FHE will take 1000x more time, but hardware also becomes 1000x faster, then the FHE performance will be similar to today's plaintext speed.
Predicting the future is impossible, but as software improves and hardware becomes faster and cheaper every year, and as FHE provides the unique value of privacy, it's plausible that at some point it can become the default (if not in 10 years, maybe in 50).
Today's hardware is many orders of magnitude faster compared to 50 years ago.
There are of course other issues too, like ciphertext size being much larger than plaintext, and the requirement of encrypting whole models or indexes per client on the server side.
FHE is not practical for most things yet, but its Venn diagram of feasible applications will only grow. And I believe there will be a time in the future when that Venn diagram covers search engines and LLMs.
> If we assume FHE will take 1000x more time, but hardware also becomes 1000x faster, then the FHE performance will be similar to today's plaintext speed
Yeah but this also means you can do 1000x more things on plaintext.
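But think of the children?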
> Internet's "Spy by default" can become "Privacy by default".
I've been building and promoting digital signatures for years. It's bad for people and market dynamics to have Hacker News or Facebook be the grand arbiter of everyone's identity in a community.
Yet here we are, because it's just that much simpler to build and use it this way, which gets them more users and money, which snowballs until alternatives don't matter.
In the same vein, the idea that FHE is a missing piece many people want is wrong. Everything is still almost all run on trust, and that works well enough that very few use cases want the complexity cost - regardless of operation overhead - to consider FHE.
> I've been building and promoting digital signatures for years.
I agree with this wholeheartedly, and yet I do get the following question a lot "What's all that nonsense at the end of your emails". Any explanation is met with eye-rolls and 1000 yard stares. Have you managed to get laypeople on-board with any kind of client-side cryptography? how?
Everything needs to be built into the application in a way that users don't notice if they don't care. A signature at the end of your emails is more cryptic than the recipient's email client showing an icon with a green checkmark or something similar. I take Chrome's rollout of TLS enforcement by default as a great example of this. All I had to say to people was "check that the padlock next to your URL bar is green."
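> that works well enough that very few use cases want the complexity cost

FHE + AI might be the killer combination, the latter sharing the complexity burden.

Is there any reason to think this is a meaningful combination, or do you just like saying the word AI?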
I think the opening example involving Google is misleading. When I hear "Google" I think "search the web".
The article is about getting an input encrypted with key k, processing it without decrypting it, and sending back an output that is encrypted with key k, too. Now it looks to me that the whole input must be encrypted with key k. But in the search example, the inputs include a query (which could be encrypted with key k) and a multi-terabyte database of pre-digested information that's Google's whole selling point, and there's no way this database could be encrypted with key k.
In other words this technique can be used when you have the complete control of all the inputs, and are renting the compute power from a remote host.
Not saying it's not interesting, but the reference to Google can be misunderstood.
> Now it looks to me that the whole input must be encrypted with key k. But in the search example, the inputs include a query […] and a multi-terabyte database […]
That’s not the understanding I got from Apple’s CallerID example[0][1]. They don’t seem to be making an encrypted copy of their entire database for each user.
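[0]: https://machinelearning.apple.com/research/homomorphic-encry...

[1]: https://machinelearning.apple.com/research/wally-search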
They do not explicitly state this fact, but they link to the homomorphic encryption scheme they're using, which works like this. To perform an operation between a plaintext value and an encrypted value, you first encrypt the plaintext with the public key and then you can do your operation on the encrypted values to get the encrypted output.
Moreover, even if the details were slightly different, a scheme that reveals absolutely no information about the query while interacting with a database always needs to do a full scan. If some parts remain unread depending on the query, this tells you what the query wasn't. If you're okay with revealing some information, you can also hash the query and take a short prefix of the hash with many colliders, then only scan values with the same hash prefix. This is how browsers typically do safe browsing lookups, but by downloading that subset of the database instead of doing the comparison homomorphically on the server.
Homomorphically encrypted services don't need a priori knowledge of the encryption key. That's literally the whole point.
Consider the following (very weak) encryption scheme:
m, k ∈ Z[p], E(m) = m * k mod p, D(c) = c * k⁻¹ mod p
With this, I can implement a service that receives two cyphertexts and computes their encrypted sum, without knowledge of the key k:
E(x) + E(y) = x * k + y * k mod p = (x + y) * k mod p = E(x + y)
Of course, such a service is not too interesting, but if you could devise an algebraic structure that supported sufficiently complex operations on cyphertexts (and with a stronger encryption), then by composing these operations one could implement arbitrarily complex computations.
In the case of searching Google, E(x) is the encrypted query and y is Google's database. Can you compute E(x + y) without doing at least as much work as computing E(y)? I don't think so. Instead, you use public key cryptography so that the server can compute E(y) (yes, encrypting the entire database) without being able to decrypt D(E(x)) = x.
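To make the property above concrete, here is a minimal Python sketch of that same (deliberately weak) scheme — purely illustrative, not secure — showing a server adding two ciphertexts without ever seeing the key k:

```python
# Toy scheme from the comment above: E(m) = m*k mod p, D(c) = c*k^-1 mod p.
# The only point is the homomorphic property E(x) + E(y) = E(x + y).

p = 2**61 - 1          # a prime modulus
k = 123456789          # the secret key (any nonzero value mod p)
k_inv = pow(k, -1, p)  # modular inverse of k (Python 3.8+)

def encrypt(m: int) -> int:
    return (m * k) % p

def decrypt(c: int) -> int:
    return (c * k_inv) % p

# Client encrypts its inputs and ships only ciphertexts.
enc_x, enc_y = encrypt(20), encrypt(22)

# Server adds the ciphertexts; it never learns k, x, or y.
enc_sum = (enc_x + enc_y) % p

# Client decrypts the returned result.
assert decrypt(enc_sum) == 42
```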
I don't know, when I hear Google I hear Gmail, Google docs, and every other service they have to know about people. My mom would probably think mostly about search, but then she would not read an article about HME
"The implications are big. The entire business model built on harvesting user data could become obsolete. Why send your plaintext when another service can compute on your ciphertext?"
Why do people always do this thing where they think inventing a technology has somehow changed economics? I think the implications are very small. There is value in people's user data and people are very eager to barter that value against cheaper services, we can tell because people continue to vote with their wallets and feet.
You could already encrypt or offer zero retention policies on large amounts of internet businesses and every major company has competitors that do, but they exist on the margins because most people don't take that deal.
Yeah, I can totally see companies rushing to implement homomorphically encrypted services that consume 1000000x more compute than necessary, are impossible to debug, and prevent them from analyzing usage data.
E2EE git was invented. I asked the creator if the server can enforce protected branches or prevent force pushes. He has no solution for evil clients. Maybe this could lead to E2EE Github?
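https://news.ycombinator.com/item?id=44530927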
CipherStash founder here: FHE isn't the only option here. Specialized searchable encryption schemes exist and are much faster than FHE. Different flavours can be combined to create a comprehensive search system which is very close to the performance of plaintext information retrieval. FHE remains an option for generalized computation but can be reserved for small datasets that have been narrowed down using fast searchable encryption.
I don't think FHE is the solution to PIR but it might well form a part of it when combined with more practical approaches.
It's a distraction to try and imagine homomorphic encryption for generic computing or internet needs, at least for many more generations of Moore's law, and maybe even then.
However, where FHE can already shine is in specific high-value, high-consequence, high-confidentiality applications with relatively low computational complexity. Smart contracts, banking, and potentially medical have lots of these use cases. And the curve of Moore's law + software optimizations is now starting to finally bend into the zone of practicality for some of these.
See what Zama https://www.zama.ai/ is doing, both on the hardware as well as the devtools for FHE.
Fully homomorphic encryption is not the future for the private internet; confidential VMs are. CVMs use memory encryption and separation from the host OS. ARM has TEE, AMD has SEV, and Intel has been fumbling around with SGX and TDX for more than a decade.
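As long as the key and compute are custodied by the vendor, confidential compute is little more than "trust us, we'll keep your data safe."

https://sgx.fail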
I think SGX (et al) can still be useful as part of a layered defense. We know how to defeat security mitigations like NX and ASLR, but that doesn't mean they're useless.
The problem is that SGX is marketed as the solution.
The idea that these will keep being improved on in speed reminds me of the math problem about average speed:
> An old car needs to go up and down a hill. In the first mile–the ascent–the car can only average 15 miles per hour (mph). The car then goes 1 mile down the hill. How fast must the car go down the hill in order to average 30 mph for the entire 2 mile trip?
Past improvement is no indicator of future possibility, given that each improvement was not re-application of the same solution as before. These are algorithms, not simple physical processes shrinking.
Is the downhill section a cliff? Google informs me terminal velocity of a car is 200-300mph, so to fall a mile at 300mph, the car will need 12 seconds, so let's round up to 15 seconds to account for the time it's accelerating.
To cover the full 2 miles at an average of 30mph, we need to complete the entire journey in 4 minutes, leaving 225 seconds for the ascent.
We know that the old car was averaging 15 miles per hour, but the speedo on an old car is likely inaccurate, and we only need to assume a 6% margin of error for the car to show 15 miles per hour and cover the mile in 225 seconds. You probably couldn't even tell the difference between 15 and 16 on the speed anyway, but let's say that we also fitted out the car with brand new tyres (so the outer circumference will be more than old worn tyres), and it's entirely possible.
So, let's say 240mph. That's the average speed of our mile freefall in 15 seconds.
41 mph, assuming the person asking the question was just really passionate about rounding numbers and/or had just the bare minimum viable measurement tooling available :)))
I'm afraid your maths doesn't add up, so you've missed their point: it can't be done.
To average 30mph over 2 miles, you need to complete those 2 miles in 4 minutes.
But travelling the first mile at 15mph means that took 4 minutes. So from that point the only way to do a second mile and bring your average to 30mph is to teleport it in 0 seconds.
(Doing the second mile at 41mph would give you an average speed of just under 22mph for the two miles.)
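A quick sanity check of that arithmetic in Python (the 30 mph target is unreachable because the uphill mile already consumed the entire 4-minute time budget):

```python
# Average speed over the 2-mile trip as a function of downhill speed.
UPHILL_TIME_H = 1 / 15                 # hours to climb 1 mile at 15 mph (= 4 minutes)

def average_speed(downhill_mph: float) -> float:
    return 2 / (UPHILL_TIME_H + 1 / downhill_mph)

print(average_speed(41))               # ~21.96 mph, matching the figure above
print(average_speed(1_000_000))        # ~29.9996 mph: even absurd speeds only approach 30
```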
Assuming speed gets solved as predicted, for an application like search, the provider would have to sync a new database of “vectors” to all clients every time the index updates. On top of that, these DBs are tens if not hundreds of GB huge.
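> FHE enables computation on encrypted data

This is fascinating. Could someone ELI5 how computation can work using encrypted data?

And does "computation" apply to ordinary internet transactions like when using a REST API, for example?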
A very basic way of how it works: encryption is basically just a function e(m, k)=c. “m” is your plaintext and “c” is the encrypted data. We call it an encryption function if the output looks random to anyone that does not have the key
If we could find some kind of function “e” that preserves the underlying structure even when the data is encrypted you have the outline of a homomorphic system. E.g. if the following happens:
e(2,k)*e(m,k) = e(2m,k)
Here we multiplied our message with 2 even in its encrypted form. The important thing is that every computation must produce something that looks random, but once decrypted it should have preserved the actual computation that happened.
It’s been a while since I did crypto, so google might be your friend here; but there are situations when e.g RSA preserves multiplication, making it partially homomorphic.
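As a concrete illustration of that, textbook (unpadded) RSA is multiplicatively homomorphic: multiplying two ciphertexts gives a ciphertext of the product of the plaintexts. A toy-sized sketch, not something you'd use as-is:

```python
# Textbook RSA with tiny parameters, for illustration only.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

c1, c2 = enc(6), enc(7)
c_prod = (c1 * c2) % n         # server multiplies ciphertexts only
assert dec(c_prod) == 42       # ...which decrypts to the product of the plaintexts
```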
I get how that works for arithmetic operations - what about stuff like sorting, finding an element in a set etc? This would require knowledge of the cleartext data, wouldn't it?
> If we could find some kind of function “e” that preserves the underlying structure even when the data is encrypted
But isn't such a function a weakened form of encryption? Properly encrypted data should be indistinguishable from noise. "Preserving underlying structure" seems to me to be in opposition to the goal of encryption.
a simple example of partially homomorphic encryption (not full) would be a system that supports addition or multiplication. You know the public key and the modulus, so you can respect the "wrap around" value and do multiplication on an encrypted number.
other ones I imagine behave kinda like translating, stretching, or skewing a polynomial or a donut/torus, such that the points/intercepts are still solvable, still unknown to an observer, and actually represent the correct mathematical value of the operation.
just means you treat the []byte value with special rules
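One real additively homomorphic scheme along those lines is Paillier: anyone who knows just the public key can add two encrypted numbers by multiplying their ciphertexts, with the "wrap around" happening mod n. A rough toy-sized sketch (not hardened in any way):

```python
import math, random

# Toy Paillier keypair; real deployments use large primes.
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # decryption helper (valid for g = n + 1)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Multiplying ciphertexts mod n^2 adds the underlying plaintexts mod n.
c = (encrypt(100) * encrypt(25)) % n2
assert decrypt(c) == 125
```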
Thank you. So based on your examples it sounds like the "computation" term is quite literal. How would this apply at larger levels of complexity like interacting anonymously with a database or something like that?
If I understand correctly, companies like OpenAI could run LLMs without having access to the users' new inputs. It seems to me new user data is really useful for further training of the models.
Can they still train the models over encrypted data? If this new data is not usable, why would the companies still want it?
Let's assume they can train the LLMs over encrypted data: what if a large number of users inject some crappy data (as has been seen with the Tay chatbot story)? How can the companies still keep a way to clean the data?
> The only way to protect data is to keep it always encrypted on servers, without the servers having the ability to decrypt.
> If FHE is a possible option, people and institutions will demand it.
I don't think that privacy is a technical problem. To take the article's example, why would Google allow you to search without spying on you? Why would chatgpt discard your training data?
GPG has been around for decades. You can relatively easily add a plug-in to use it on top of Gmail. Surely the protocol is not perfect, but it could have been made better much more easily than it is to improve FHE, since a lot of its clunkiness can be corrected by UX. But people never cared enough that everything they write is read by Google to encrypt it. And since Google loves reading what you write, they'll never introduce something like FHE without overwhelming adoption and requirements by others.
As someone who knows basically nothing about cryptography - wouldn't training an LLM to work on encrypted data also make that LLM extremely good at breaking that encryption?
I assume that doesn't happen? Can someone ELI5 please?
Good encryption schemes are designed so that ciphertexts are effectively indistinguishable from random data -- you should not be able to see any pattern in the encrypted text without knowledge of the key and the algorithm.
If your encryption scheme satisfies this, there are no patterns for the LLM to learn: if you only know the ciphertext but not the key, every continuation of the plaintext should be equally likely, so trying to learn the encryption scheme from examples is effectively trying to predict the next lottery numbers.
This is why FHE for ML schemes [1] don't try to make ML models work directly on encrypted data, but rather try to package ML models so they can run inside an FHE context.
From my understanding of cryptography, most schemes are created with the assumption that _any_ function that does not have access to the secret key will have a probabilistically small chance of decoding the correct message (O(exp(-key_length)) usually). As LLMs are also a function, it is extremely unlikely for cryptographic protocols to be broken _unless_ LLMs can allow for new types of attacks all together.
> The entire business model built on harvesting user data could become obsolete.
This is far too optimistic. Just because you can build a system that doesn't harvest data, doesn't necessarily mean it's a profitable business model. I'm sure many of us here would be willing to pay for a FHE search engine, for example, but we're a minority.
How do you send a password reset email with this? Eventually your mail server will need the plaintext address in order to send the email. And that point can be leaked in a data breach.
It's idealistic to think this could solve data breaches, because businesses knowing who their customers are is such a fundamental concept.
> Privacy awareness of users is increasing. Privacy regulations are increasing.
I beg your unbelievable pardon, but no? This part of the equation is not addressed in the article, but it is by far and away the biggest missing piece for there to be any hope of FHE seeing widespread adoption.
Here's what I don't understand about homomorphic encryption and so struggle to trust in the very concept. If you can process encrypted data and get useful results, then a major part of the purpose of encryption is defeated, right? How am I wrong?
Yes, I understand that part. The part I struggle with is how the very fact that a party without the key can do the computation on it is not an indication that the encryption is leaking information. If the encryption were airtight, then such computation shouldn't be possible.
Given that cryptography experts seem to be asserting otherwise, I assume that there's something important that I'm not understanding here.
all great until you realize no one is allowed to export things to other regions if it works too well (crypto). Then besides that, the companies who now literally live off of your personal data (most of big tech) won't suddenly drop their main source of income on behalf of the privacy of their users, which clearly they care nothing about.
unless replacement services are offered and adopted en masse (they won't be; you can't market against companies who can throw billions at breaking you), those giants won't give away their main source of revenue...
so even if technical challenges are overcome, there are more human and political challenges which will likely be even harder to crack...
Very cool, although I have some reservations about "... closest vector problem is believed to be NP-hard and even quantum-resistant". "Believed to be" is kind of different from "known to be".
If it makes you feel better, no cryptographic assumptions we use today are known to be NP-hard. Or maybe that makes you feel worse, not sure. But it doesn't really matter because NP-hardness is a statement about worst case inputs and cryptography needs guarantees about average case inputs since keys are generated randomly.
all modern encryption is currently held together by asymmetric encryption schemes that are all based on "believed to be" foundations, not "known to be" foundations
I think this should talk about the kinds of applications you can actually do with FHE because you definitely can't implement most applications (not at a realistic scale anyway).
I meant it should say what can't be implemented. What are the constraints? Currently the article makes it sound like you can do anything, just slower, which definitely isn't the case.
It's simple conceptually: you find an encryption method Enc that guarantees `Sum(Enc(x), Enc(y)) = Enc(Sum(x, y))`. That's ultimately all there is to it. Then, you give the server enc_x and enc_y, the server computes the sum, and returns to you enc_sum. You then decrypt the value you got and that's x+y.
Since lots of functions behave in this way in relation to sums and products, you "just" need to find ones that are hard to reverse so they can be used for encryption as well.
Unfortunately this turns out to not work so simply. In reality, they needed to find different functions FHESum and FHEMultiply, that are actually much harder to compute (1000x more CPU than the equivalent "plaintext" function is a low estimate of the overhead) but that guarantee the above.
I interrupted this fascinating read to say that, "actually", quantum computers are great at multi-dimensional calculations if you find the correct algorithms. It's probably the only thing they will ever be great at. You would want to show that finding such an algorithm is not possible with our current knowledge.
anyway, making the computer do the calculation is one thing, getting it to spew out the correct data is another... But still, the article (which seems great at the moment) brushes it off a bit too quickly.
Ok, let's stop being delusional here. I'll tell you how this will actually work:
Imagine your device sending Google an encrypted query and getting back the exact results it wanted — without you having any way of knowing what that query was or what result they returned. The technique to do that is called Fully Homomorphic Encryption (FHE).
I say this as a lover of FHE and the wonderful cryptography around it:
While it’s true that FHE schemes continue to get faster, they don’t really have hope of being comparable to plaintext speeds as long as they rely on bootstrapping. For deep, fundamental reasons, bootstrapping isn’t likely to ever be less than ~1000x overhead.
When folks realized they couldn’t speed up bootstrapping much more, they started talking about hardware acceleration, but it’s a tough sell at time when every last drop of compute is going into LLMs. What $/token cost increase would folks pay for computation under FHE? Unless it’s >1000x, it’s really pretty grim.
For anything like private LLM inference, confidential computing approaches are really the only feasible option. I don’t like trusting hardware, but it’s the best we’ve got!
There is an even more fundamental reason why FHE cannot realistically be used for arbitrary computation: it is that some computations have much larger asymptomatic complexity on encrypted data compared to plaintext.
A critical example is database search: searching through a database on n elements is normally done in O(log n), but it becomes O(n) when the search key is encrypted. This means that fully homomorphic Google search is fundamentally impractical, although the same cannot be said of fully homomorphic DNN inference.
There has been a theoretical breakthrough that makes search a O(log n) problem, actually, (https://eprint.iacr.org/2022/1703) but it is pretty impractical (and not getting much faster).
1 reply →
Even without bootstrapping FHE will never be as fast as plaintext computation: the ciphertext is about three orders of magnitude much larger than the plaintext data it encrypts, which means you have to have more memory bandwidth and more compute. You can’t bridge this gap.
Technically, there are rate-1 homomorphic encryption schemes, where ‘rate’ refers to the size ratio between the plaintext and the ciphertext. They’re not super practical, so your general point stands.
3 replies →
That actually sounds pretty reasonable and feels almost standard at this point?
To pick one out of a dozen possible examples: I regularly read 500 word news articles from 8mb web pages with autoplaying videos, analytics beacons, and JS sludge.
That’s about 3 orders of magnitude for data and 4-5 orders of magnitude for compute.
4 replies →
Don't you think there is a market for people who want services that have provable privacy even if it costs 1,000 times more? It's not as big a segment as Dropbox but I imagine it's there.
FHE solves privacy-from-compute-provider and doesn't affect any other privacy risks of the services. The trivial way to get privacy from the compute provider is to run that compute yourself - we delegate compute to cloud services for various reasonable efficiency and convenience reasons, but a 1000-fold less efficient cloud service usually isn't competitive with just getting a local device that can do that.
???
For the equivalent of $500 in credit you could self host the entire thing!
24 replies →
If we are talking 1000x more latency, that is a pretty hard sell.
Something that normally takes 30 seconds now takes over 8 hours.
8 replies →
For LLM inference, the market that will pay $20,000 for what is now $20 is tiny.
there is, it's called governments. however this technology is so slow that using it in mission critical systems (think communication / coordinates during warfare) that it is not feasible IMO.
the parent post is right, confidential compute is really what we've got.
Honestly, no? Unless you get everyone using said services, then a market that is only viable to people trying to hide bad behavior becomes the place you look for people doing bad things?
This is a large part of why you have to convince people to hide things even if "they have nothing to hide."
For most this would mean only specially treating a subset of all the sensitive data they have.
I get that there is a big LLM hype, but is there really no other application for FHE? Like for example trading algorithms (not the high speed once) that you can host on random servers knowing your stuff will be safe or something similar?
I speak as someone who used to build trading algorithms (not the high speed ones) for a living for several years, so knows that world pretty well. I highly doubt anyone who does that will host their stuff on random servers even if you had something like FHE. Why? Because it's not just the code that is confidential.
1) if you are a registered broker dealer you will just incur a massive amount of additional regulatory burden if you want to host this stuff in any sort of "random server"
2) Whoever you are, you need the pipe from your server to the exchange to be trustworthy, so no-one can MITM your connection and front-run your (client's) orders.
3) This is an industry where when people host servers in something like an exchange data center it's reasonably common to put them in a locked cage to ensure physical security. No-one is going to host on a server that could be physically compromised. Remember that big money is at stake and data center staff typically aren't well paid (compared to someone working for an IB or hedge fund), so social engineering would be very effective if someone wanted to compromise your servers.
4)Even if you are able to overcome #1 and are very confident about #2 and #3, even for slow market participants you need to have predictable latency in your execution or you will be eaten for breakfast by the fast players[1]. You won't want to be on a random server controlled by anyone else in case they suddenly do something that affects your latency.
[1] For example, we used to have quite slow execution ability compared with HFTs and people who were co-located at exchanges, so we used to introduce delays when we routed orders to multiple exchanges so the orders would arrive at their destinations at precisely the same time. Even though our execution latency was high, this meant no-one who was colocated at the exchange could see the order at one exchange and arb us at another exchange.
4 replies →
I encountered the situation where one company had the data, and considered this to be really valuable and did not want to show/share it. Another company had a model, which was very considered very valuable and did not want to show it. So they were stuck in a catch22. Eventually they solved the perceived risk via contracts, but it could have been solved technically if FHE were viable.
I think the only thing that could make FHE truly world-changing is if someone figures out how to implement something like multi-party garbled circuits under FHE where anyone can verify the output of functions over many hidden inputs since that opens up a realm of provably secure HSMs, voting schemes, etc.
I'd also like to comment on how everything used to be a PCIE expansion card.
Your GPU was, and we also used to have dedicated math coprocessor accelerators. Now most of the expansion card tech is all done by general purpose hardware, which while cheaper will never be as good as a custom dedicated silicon chip that's only focused on 1 task.
Its why I advocate for a separate ML/AI card instead of using GPU's. Sure their is hardware architecture overlap but your sacrificing so much because your AI cards are founded on GPU hardware.
I'd argue the only AI accelerators are something like what goes into modern SXM (sockets). This ditches the power issues and opens up more bandwidth. However only servers have the sxm sockets....and those are not cheap.
> most of the expansion card tech is all done by general purpose hardware, which while cheaper will never be as good as a custom dedicated silicon chip that's only focused on 1 task
I think one reason they can be as good as or better than dedicated silicon is that they can be adjusted on the fly. If a hardware bug is found in your network chip, too bad. If one is found in your software emulation of a network chip, you can update it easily. What if a new network protocol comes along?
Don't forget the design, verification, mask production, and other one-time costs of making a new type of chip are immense ($millions at least).
> Its why I advocate for a separate ML/AI card instead of using GPU's. Sure their is hardware architecture overlap but your sacrificing so much because your AI cards are founded on GPU hardware.
I think you may have the wrong impression of what modern GPUs are like. They may be descended from graphics cards (as in graphics ), but today they are designed fully with the AI market in mind. And they are design to strike an optional balance between fixed functionality for super-efficient calculations that we believe AI will always need, and programmability to allow innovation in algorithms. Anything more fixed would be unviable immediately because AI would have moved on by the time it could hit the market (and anything less fixed would be too slow).
Thx! I'm curious about your thoughts...
- FHE for classic key-value stores and simple SQL database tables?
- the author's argument that FHE is experiencing accelerated Moore's law, and therefore will close 1000x gap quickly?
Thx!
From your perspective: which FHE is actually usable? Or is only PHE actually usable?
Interesting! Can you provide some sources for this claim?
FHE is an important tool because right now companies can be coerced by governments to break encryption for specific targets. FHE removes the need for companies to have a back bone, they can simply shrug and say "We literally do not see the plaintext, ever". They can kinda do this with End to End encryption when they're simply the network/carrier, but cannot currently do this anytime they're processing the plaintext data.
I come from a values basis that privacy is a human right, and governments should be extremely limited in retailiatory powers against a just and democratic usage of powers against them. (things like voting, arts, media, free speech etc)
FHE might allow arbitrary computation, but I use most services because they have some data I want to use: their search index, their knowledge, their database of chemicals, my bank account transactions, whatever.
So unless Google lets me encrypt their entire search index, they can still see my query at the time it interacts with the index, or else they cannot fulfill it.
The other point is incentives: outside of some very few, high-trust high-stakes applications, I don't see why companies would go through the trouble and FHE services.
Here's an implementation of a fully private search engine using FHE that allows querying Wikipedia with the server remaining oblivious as to what you're reading: https://spiralwiki.com/
It’s ridiculously easier just to download a Wikipedia database dump and read it locally, with Kiwix: <https://kiwix.org/>
From what I understand, only the sensitive data needs to be encrypted (e.g. your bank transactions). It is still possible to use public unencryped data in the computation, as the function you want to compute doesn't have to be encrypted.
In a world where Target can figure out a women is pregnant before she knows herself due to her shopping habits, the line that separates sensitive data is pretty ambiguous.
1 reply →
Exactly what I thought. In the end it really isn't in most of the big corps interest to not see your data/query. They need/want to see it so why would they degrade their ability to do so if they can just say no and you will have to rely on using their services without FHE. For banking applications cool, everyone else debatable if it will ever be accepted.
Would depend on market pressures, no?
You're right about incentives, but wrong about the first part. Private lookups of a plaintext database are possible and have been for a while now (5+ years?). The problem is it often requires some nontrivial preprocessing of the plaintext database, or in the worst case a linear scan of the entire database.
> Private lookups of a plaintext database are possible and have been for a while now (5+ years?). The problem is it often requires some nontrivial preprocessing of the plaintext database, or in the worst case a linear scan of the entire database.
So that basically means that if a company has data that my program might want to use, the entirety of that data needs to be loaded into my program. Not quite feasible for something like the Google search index, which (afaik) doesn't even fit onto a single machine.
Also, while Google is fine with us doing searches, making the whole search index available to a homomorphic encrypted program is probably a quite different beast.
2 replies →
Which ultimately results in gigabytes of per-client-encrypted data needing to be downloaded, and regenerated and redownloaded every time the index is updated.
I get the "client side" of this equation; some number of users want to keep their actions/data private enough that they are willing to pay for it.
What I don't think they necessarily appreciate is how expensive that would be, and consequently how few people would sign up.
I'm not even assuming that the compute cost would be higher than currently. Let's leave aside the expected multiples in compute cost - although they won't help.
Assume, for example, a privacy-first Google replacement. What does that cost? (Google revenue is a good place to start that Calc.) Even if it was say $100 a year (hint; it's not) how many users would sign up for that? Some sure, but a long long way away from a noticeable percentage.
Once we start adding zeros to that number (to cover the additional compute cost) it gets even lower.
While imperfect, things like Tor provide most of the benefit, and cost nothing. As an alternative it's an option.
I'm not saying that HE is useless. I'm saying it'll need to be paid for, and the numbers that will pay to play will be tiny.
An FHE Google today would be incredible expensive and incredibly slow. No one would pay for it.
The key question I think is how much computing speed will improve in the future. If we assume FHE will take 1000x more time, but hardware also becomes 1000x faster, then the FHE performance will be similar to today's plaintext speed.
Predicting the future is impossible, but as software improves and hardware becoming faster and cheaper every year, and as FHE provides a unique value of privacy, it's plausible that at some point it can become the default (if not 10 years, maybe in 50 years).
Today's hardware is many orders of magnitudes faster compared to 50 years ago.
There are of course other issues too. Like ciphertext size being much larger than plaintext, and requirement of encrypting whole models or indexes per client on the server side.
FHE is not practical for most things yet, but its venn diagram of feasible applications will only grow. And I believe there will be a time in the future that its venn diagram covers search engines and LLMs.
> If we assume FHE will take 1000x more time, but hardware also becomes 1000x faster, then the FHE performance will be similar to today's plaintext speed
Yeah but this also means you can do 1000x more things on plaintext.
But think of the children?
> Internet's "Spy by default" can become "Privacy by default".
I've been building and promoting digital signatures for years. Its bad for people and market-dynamics to have Hacker News or Facebook be the grand arbiter of everyone's identity in a community.
Yet here we are because its just that much simpler to build and use it this way, which gets them more users and money which snowballs until alternatives dont matter.
In the same vein, the idea that FHE is a missing piece many people want is wrong. Everything is still almost all run on trust, and that works well enough that very few use cases want the complexity cost - regardless of operation overhead - to consider FHE.
> I've been building and promoting digital signatures for years.
I agree with this wholeheartedly, and yet I do get the following question a lot "What's all that nonsense at the end of your emails". Any explanation is met with eye-rolls and 1000 yard stares. Have you managed to get laypeople on-board with any kind of client-side cryptography? how?
Everything needs to be built into the application in a way that users don't notice if they don't care. A signature at the end of your emails is more cryptic than the recipient's email client showing an icon with a green checkmark or something similar. I take Chrome's rollout of tls-enforcement by default as a great example of this. All I would had to say to people was "check that the padlock next to your url bar is green."
> that works well enough that very few use cases want the complexity cost
FHE + AI might be the killer combination, the latter sharing the complexity burden.
Is there any reason to think this is a meaningful combination, or do you just like saying the word AI?
4 replies →
I think the opening example involving Google is misleading. When I hear "Google" I think "search the web".
The articles is about getting an input encrypted with key k, processing it without decrypting it, and sending back an output that is encrypted with key k, too. Now it looks to me that the whole input must be encrypted with key k. But in the search example, the inputs include a query (which could be encrypted with key k) and a multi-terabyte database of pre-digested information that's Google's whole selling point, and there's no way this database could be encrypted with key k.
In other words this technique can be used when you have the complete control of all the inputs, and are renting the compute power from a remote host.
Not saying it's not interesting, but the reference to Google can be misunderstood.
> Now it looks to me that the whole input must be encrypted with key k. But in the search example, the inputs include a query […] and a multi-terabyte database […]
That’s not the understanding I got from Apple’s CallerID example[0][1]. They don’t seem to be making an encrypted copy of their entire database for each user.
[0]: https://machinelearning.apple.com/research/homomorphic-encry...
[1]: https://machinelearning.apple.com/research/wally-search
They do not explicitly state this fact, but they link to the homomorphic encryption scheme they're using, which works like this. To perform an operation between a plaintext value and an encrypted value, you first encrypt the plaintext with the public key and then you can do your operation on the encrypted values to get the encrypted output.
Moreover, even if the details were slightly different, a scheme that reveals absolutely no information about the query while interacting with a database always needs to do a full scan. If some parts remain unread depending on the query, this tells you what the query wasn't. If you're okay with revealing some information, you can also hash the query and take a short prefix of the hash with many colliders, then only scan values with the same hash prefix. This is how browsers typically do safe browsing lookups, but by downloading that subset of the database instead of doing the comparison homomorphically on the server.
2 replies →
Homomorphically encrypted services don't need a priori knowledge of the encryption key. That's literally the whole point.
Consider the following (very weak) encryption scheme:
m, k ∈ Z[p], E(m) = m * k mod p, D(c) = c * k⁻¹ mod p
With this, I can implement a service that receives two cyphertexts and computes their encrypted sum, without knowledge of the key k:
E(x) + E(y) = x * k + y * k mod p = (x + y) * k mod p = E(x + y)
Of course, such a service is not too interesting, but if you could devise an algebraic structure that supported sufficiently complex operations on cyphertexts (and with a stronger encryption), then by composing these operations one could implement arbitrarily complex computations.
In the case of searching Google, E(x) is the encrypted query and y is Google's database. Can you compute E(x + y) without doing at least as much work as computing E(y)? I don't think so. Instead, you use public key cryptography so that the server can compute E(y) (yes, encrypting the entire database) without being able to decrypt D(E(x)) = x.
7 replies →
I don't know, when I hear Google I hear Gmail, Google docs, and every other service they have to know about people. My mom would probably think mostly about search, but then she would not read an article about HME
"The implications are big. The entire business model built on harvesting user data could become obsolete. Why send your plaintext when another service can compute on your ciphertext?"
Why do people always do this thing where they think inventing a technology has somehow changed economics? I think the implications are very small. There is value in people's user data and people are very eager to barter that value against cheaper services, we can tell because people continue to vote with their wallets and feet.
You could already encrypt or offer zero retention policies on large amounts of internet businesses and every major company has competitors that do, but they exist on the margins because most people don't take that deal.
Yeah, I can totally see companies rushing to implement homomorphically encrypted services that consume 1000000x more compute than necessary, are impossible to debug, and prevent them from analyzing usage data.
E2EE git was invented. I asked the creator if server can enforce protected branches or force pushes. He has no solution for evil clients. Maybe this could lead to E2EE Github?
https://news.ycombinator.com/item?id=44530927
CipherStash founder here: FHE isn't the only option here. Specialized searchable encryption schemes exist and are much faster than FHE. Different flavours can be combined to create a comprehensive search system which is very close to the performance of plaintext information retrieval. FHE remains an option for generalized computation but can be reserved for small datasets that have been narrowed down using fast searchable encryption.
I don't think FHE is the solution to PIR but it might well form a part of it when combined with more practical approaches.
It's a distraction to try and imagine homomorphic encryption for generic computing or internet needs. At least not for many more generations of moore's law and then even still.
However, where FHE will shine already is in specific high-value, high consequence and high confidentiality applies, but relatively low complexity computational calculations. Smart contracts, banking, potentially medical have lots of these usecases. And the curve of Moore's law + software optimizations are now starting to finally bend into the zone of practicality for some of these.
See what Zama https://www.zama.ai/ is doing, both on the hardware as well as the devtools for FHE.
Full homomorphic encryption is not the future for private internet, confidential VMs are. CVMs are using memory encryption and separation from the host OS. ARM has TEE, AMD has SEV and Intel has been fumbling around with SGX and TDX for more than a decade.
As long as the key and compute are custodied by the vendor, confidential compute is little more than "trust us, we'll keep your data safe."
https://sgx.fail
I think SGX (et al) can still be useful as part of a layered defense. We know how to defeat security mitigations like NX and ASLR, but that doesn't mean they're useless.
The problem is that SGX is marketed as the solution.
The idea that these will keep being improved on in speed reminds me of the math problem about average speed:
> An old car needs to go up and down a hill. In the first mile–the ascent–the car can only average 15 miles per hour (mph). The car then goes 1 mile down the hill. How fast must the car go down the hill in order to average 30 mph for the entire 2 mile trip?
Past improvement is no indicator of future possibility, given that each improvement was not a re-application of the same solution as before. These are algorithms, not simple physical processes shrinking.
Is the downhill section a cliff? Google informs me terminal velocity of a car is 200-300mph, so to fall a mile at 300mph, the car will need 12 seconds, so let's round up to 15 seconds to account for the time it's accelerating.
To cover the full 2 miles at an average of 30mph, we need to complete the entire journey in 4 minutes, leaving 225 seconds for the ascent.
We know that the old car was averaging 15 miles per hour, but the speedo on an old car is likely inaccurate, and we only need to assume about a 6% margin of error for the car to show 15 miles per hour while actually covering the mile in 225 seconds. You probably couldn't even tell the difference between 15 and 16 on the speedo anyway, but let's also say we fitted the car with brand new tyres (so the outer circumference is larger than with old worn ones), and it's entirely possible.
So, let's say 240mph. That's the average speed of our mile freefall in 15 seconds.
41 mph, assuming the person asking the question was just really passionate about rounding numbers and/or had just the bare minimum viable measurement tooling available :)))
I'm afraid your maths doesn't add up, so you've missed their point: it can't be done.
To average 30mph over 2 miles, you need to complete those 2 miles in 4 minutes.
But travelling the first mile at 15mph means that mile took 4 minutes. So from that point, the only way to do a second mile and bring your average up to 30mph is to cover it in 0 seconds, i.e. teleport.
(Doing the second mile at 41mph would give you an average speed of just under 22mph for the two miles.)
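For anyone who wants to check the arithmetic, a quick sketch (times in seconds):

```python
# Arithmetic check for the hill problem: averaging 30 mph over 2 miles needs the whole
# trip done in 240 s, but climbing 1 mile at 15 mph already takes 240 s.
budget = 2 / 30 * 3600          # 240.0 s allowed for the full 2 miles
time_up = 1 / 15 * 3600         # 240.0 s already spent on the ascent
print(budget - time_up)         # 0.0 -> the descent would have to take zero time

# And the "41 mph" answer above really gives:
time_down = 1 / 41 * 3600                   # ~87.8 s for the descent
print(2 / ((time_up + time_down) / 3600))   # ~21.96 mph, not 30
```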
Assuming speed gets solved as predicted, for an application like search the provider would have to sync a new database of "vectors" to all clients every time the index updates. On top of that, these DBs are tens if not hundreds of GB in size.
> FHE enables computation on encrypted data
This is fascinating. Could someone ELI5 how computation can work using encrypted data?
And does "computation" apply to ordinary internet transactions like when using a REST API, for example?
A very basic sketch of how it works: encryption is basically just a function e(m, k) = c. "m" is your plaintext, "k" is the key, and "c" is the encrypted data. We call it an encryption function if the output looks random to anyone who does not have the key.
If we could find some kind of function "e" that preserves the underlying structure even when the data is encrypted, then you have the outline of a homomorphic system. E.g. if the following holds:
e(2,k)*e(m,k) = e(2m,k)
Here we multiplied our message by 2 even in its encrypted form. The important thing is that every computation must produce something that looks random, but once decrypted it should have preserved the actual computation that happened.
It's been a while since I did crypto, so Google might be your friend here; but there are situations where e.g. RSA preserves multiplication, making it partially homomorphic.
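For example, textbook RSA (no padding) really is multiplicatively homomorphic. A tiny sketch with toy parameters, for illustration only; never use unpadded RSA like this in practice:

```python
# Textbook RSA (no padding) preserves multiplication: enc(a) * enc(b) mod n decrypts to a*b.
# Toy parameters; unpadded RSA is insecure and is shown only to illustrate the property.
p, q = 10007, 10009
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 42, 1337
c = (enc(a) * enc(b)) % n           # multiply the ciphertexts without knowing d
assert dec(c) == a * b              # holds as long as a * b < n
```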
I get how that works for arithmetic operations - what about stuff like sorting, finding an element in a set etc? This would require knowledge of the cleartext data, wouldn't it?
> If we could find some kind of function “e” that preserves the underlying structure even when the data is encrypted
But isn't such a function a weakened form of encryption? Properly encrypted data should be indistinguishable from noise. "Preserving underlying structure" seems to me to be in opposition to the goal of encryption.
This made me wonder if there is such a thing as homomorphic compression. A cursory search says yes, but there seems to be limited information.
Thank you, this really clarified things for me!
a simple example of partially homomorphic encryption (not fully) would be a system that supports addition or multiplication. You know the public key and the modulus, so you can respect the "wrap around" value and do multiplication on an encrypted number.
other ones I imagine behave kinda like translating, stretching, or skewing a polynomial or a donut/torus, such that the points/intercepts are still solvable, still unknown to an observer, and actually represent the correct mathematical value of the operation.
just means you treat the []byte value with special rules
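One well-known concrete example of a partially (additively) homomorphic scheme is Paillier, which isn't mentioned above but fits this description: you only need the public key and the modulus to combine ciphertexts. A rough sketch with toy parameters:

```python
# Sketch of the Paillier cryptosystem (additively homomorphic): multiplying two
# ciphertexts mod n^2 yields an encryption of the SUM of the plaintexts.
# Toy parameters, illustration only.
import math
import random

p, q = 10007, 10009
n = p * q
n2 = n * n
g = n + 1                                 # the standard simple choice of generator
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                      # valid because g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = random.randrange(1, n)        # random blinding factor coprime to n
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n        # the scheme's "L" function
    return (L * mu) % n

a, b = 123, 456
c = (encrypt(a) * encrypt(b)) % n2        # combine ciphertexts using only public data
assert decrypt(c) == a + b                # the key holder sees the sum
```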
Thank you. So based on your examples it sounds like the "computation" term is quite literal. How would this apply at larger levels of complexity like interacting anonymously with a database or something like that?
If I understand correctly, companies like OpenAI could run LLMs without having access to users' new inputs. It seems to me new user data is really useful for further training of the models. Can they still train the models over encrypted data? If this new data is not usable, why would the companies still want it?
Let's assume they can train the LLMs over encrypted data: what if a large number of users inject some crappy data (as was seen with the Tay chatbot story)? How can the companies still keep a way to clean the data?
> Can they still train the models over encrypted data?
Yes but then the model becomes encrypted.
IMO ML training is not a realistic application for FHE, but things like federated training would be the way to do that privately enough.
> The only way to protect data is to keep it always encrypted on servers, without the servers having the ability to decrypt.
> If FHE is a possible option, people and institutions will demand it.
I don't think that privacy is a technical problem. To take the article's example, why would Google allow you to search without spying on you? Why would ChatGPT discard your training data?
GPG has been around for decades. You can relatively easily add a plug-in to use it on top of Gmail. Sure, the protocol is not perfect, but it could have been improved far more easily than FHE can, since a lot of its clunkiness can be corrected by UX. But people never cared enough that everything they write is read by Google to encrypt it. And since Google loves reading what you write, they'll never introduce something like FHE without overwhelming adoption and demand from others.
As someone who knows basically nothing about cryptography - wouldn't training an LLM to work on encrypted data also make that LLM extremely good at breaking that encryption?
I assume that doesn't happen? Can someone ELI5 please?
Good encryption schemes are designed so that ciphertexts are effectively indistinguishable from random data -- you should not be able to see any pattern in the encrypted text without knowledge of the key and the algorithm.
If your encryption scheme satisfies this, there are no patterns for the LLM to learn: if you only know the ciphertext but not the key, every continuation of the plaintext should be equally likely, so trying to learn the encryption scheme from examples is effectively trying to predict the next lottery numbers.
This is why FHE for ML schemes [1] don't try to make ML models work directly on encrypted data, but rather try to package ML models so they can run inside an FHE context.
[1] It's not for language models, but I like Microsoft's CryptoNets - https://www.microsoft.com/en-us/research/wp-content/uploads/... - as a more straightforward example of how FHE for ML looks in practice
I am confused: you can implement LLM training under FHE. That's a different problem from learning from encrypted data.
From my understanding of cryptography, most schemes are designed with the guarantee that _any_ function without access to the secret key has only a negligibly small chance of recovering the correct message (on the order of 2^-key_length, usually). As LLMs are also just functions, it is extremely unlikely for cryptographic protocols to be broken _unless_ LLMs allow for new types of attacks altogether.
Because math. The data that would be necessary to train an LLM to break (properly) encrypted information would be indistinguishable from random bytes.
How do you train a model when the input has no apparent correlation to the output?
The tendency to increase complexity will be at odds with privacy and user control.
The problem is that in practice the internet is a centralized system, even though it was designed to be decentralized, and some are fighting to keep it free.
Fight for decentralization instead; it will remove the need for unnecessary security and reduce the compute cost significantly.
> The entire business model built on harvesting user data could become obsolete.
This is far too optimistic. Just because you can build a system that doesn't harvest data doesn't necessarily mean it's a profitable business model. I'm sure many of us here would be willing to pay for an FHE search engine, for example, but we're a minority.
How do you send a password reset email with this? Eventually your mail server will need the plaintext address in order to send the email, and that point can be leaked in a data breach.
It's idealistic to think this could solve data breaches, because businesses knowing who their customers are is such a fundamental concept.
A password reset e-mail is supposed to expire pretty quickly though, so would it really matter in practice?
The email address must remain usable at any time, which means an attacker may be able to "use" it as well.
I don't think this is possible with FHE alone.
> Privacy awareness of users is increasing. Privacy regulations are increasing.
I beg your unbelievable pardon, but no? This part of the equation is asserted rather than examined in the article, and it is far and away the biggest missing piece for there to be any hope of FHE seeing widespread adoption.
Here's what I don't understand about homomorphic encryption, and why I struggle to trust the very concept. If you can process encrypted data and get useful results, then a major part of the purpose of encryption is defeated, right? How am I wrong?
The result is encrypted. It's useful to the key holder, not to the party doing the computation.
Yes, I understand that part. The part I struggle with is how the very fact that a party without the key can do the computation on it is not an indication that the encryption is leaking information. If the encryption were airtight, then such computation shouldn't be possible.
Given that cryptography experts seem to be asserting otherwise, I assume that there's something important that I'm not understanding here.
all great until you realize no one is allowed to export things to other regions if the crypto works too well. Then, besides that, the companies who now literally live off of your personal data (most of big tech) won't suddenly drop their main source of income on behalf of the privacy of their users, which they clearly care nothing about.
unless replacement services are offered and adopted en masse (they won't be; you can't market against companies who can throw billions at breaking you), those giants won't give away their main source of revenue...
so even if the technical challenges are overcome, there are human and political challenges which will likely be even harder to crack...
Very cool, although I have some reservations about "... closest vector problem is believed to be NP-hard and even quantum-resistant". "Believed to be" is kind of different from "known to be".
If it makes you feel better, no cryptographic assumptions we use today are known to be NP-hard. Or maybe that makes you feel worse, not sure. But it doesn't really matter because NP-hardness is a statement about worst case inputs and cryptography needs guarantees about average case inputs since keys are generated randomly.
all modern encryption is currently held together by asymmetric schemes that rest on "believed to be" foundations, not "known to be" foundations
I think this should talk about the kinds of applications you can actually build with FHE, because you definitely can't implement most applications (not at a realistic scale, anyway).
You might enjoy https://jeremykun.com/fhe-in-production
I meant it should say what can't be implemented. What are the constraints? Currently the article makes it sound like you can do anything, just slower, which definitely isn't the case.
What baffles me is how code can perform computations and comparisons on data that is still encrypted in memory.
It's simple conceptually: you find an encryption method Enc that guarantees `Sum(Enc(x), Enc(y)) = Enc(Sum(x, y))`. That's ultimately all there is to it. Then, you give the server enc_x and enc_y, the server computes the sum, and returns to you enc_sum. You then decrypt the value you got and that's x+y.
Since lots of functions behave in this way in relation to sums and products, you "just" need to find ones that are hard to reverse so they can be used for encryption as well.
Unfortunately this turns out not to work so simply. In reality, they needed to find different functions FHESum and FHEMultiply, which are much harder to compute (1000x more CPU than the equivalent "plaintext" function is a low estimate of the overhead) but which do guarantee the above.
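A rough feel for why the real functions are so much heavier: in lattice-based schemes a ciphertext is a whole noisy vector, and every operation grows the noise that decryption has to tolerate. A toy LWE-style sketch (not any production scheme) of a single homomorphic addition:

```python
# Toy LWE-style encryption of single bits. Each ciphertext is a vector plus a noisy
# scalar, so it is far larger than the one bit it hides, and adding ciphertexts adds
# both the messages and the noise. Illustration only, not a real FHE scheme.
import random

q, dim = 2**15, 16                                   # modulus and secret dimension
s = [random.randrange(q) for _ in range(dim)]        # secret key

def encrypt(bit: int):
    a = [random.randrange(q) for _ in range(dim)]
    e = random.randrange(-4, 5)                      # small noise term
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(ct) -> int:
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0       # works while the noise stays below q/4

def add(ct1, ct2):                                   # homomorphic XOR of the two bits
    (a1, b1), (a2, b2) = ct1, ct2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

x, y = 1, 1
assert decrypt(add(encrypt(x), encrypt(y))) == (x ^ y)
```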
code in FHE doesn't need to see the data
> 3. Data while processing is un-encrypted, as code need to 'see' the data
read the article again
Most states will probably either forbid this or demand back doors.
FHE stands for Federally Hacked Encryption
I interrupted this fascinating read to say that, "actually", quantum computers are great at multi-dimensional calculations if you can find the correct algorithms. It's probably the only thing they will ever be great at. What you want to show is that finding such an algorithm is not possible with our current knowledge.
Anyway, making the computer do the calculation is one thing; getting it to spew out the correct data is another... But still, the article (which seems great otherwise) brushes it off a bit too quickly.
Ok, let's stop being delusional here. I'll tell you how this will actually work:
Imagine your device sending Google an encrypted query and getting back the exact results it wanted — without you having any way of knowing what that query was or what result they returned. The technique to do that is called Fully Homomorphic Encryption (FHE).
Queries like that are Oblivious Transfer - a second, more limited case of FHE that actually addresses the filter threat model.