Comment by vishnumohandas

5 years ago

> Do you have a thumbnail of every photo client side

In the happy path the files/thumbnails are indexed before they are uploaded. But we are designing a framework that will pull files/thumbnails for indexing if they are unindexed or indexed by older models.

> how do you do this in a privacy preserving way

Our accuracy will not match that offered by services who index your data on their servers. But there's a trade off between user experience and privacy here, and we are hopeful that ente will be a viable option for an audience who is willing to sacrifice a bit of one for a lot of the other.

As someone who has worked on systems like these let me translate:

“Your stuff will be private, but in return accuracy will be so bad that the UX is gonna suck!”

That’s the key piece people miss when they wanna do anything with ML: it’s a different problem compared to writing code, because it’s not about the code anymore, it’s about having great training data!

  • Apple Photos seems to be using just Core ML[1] for on-device recognition and it does a pretty good job. As for Android, we plan to use tflite, but the accuracy is yet to be measured. And if customers do install our desktop app, we will be able to improve the indexes by re-indexing data with the extra bit of compute available.

    We don't feel that the entire UX of a photo storage app will "suck" because of a reduced accuracy in search results, and we think that for some of us the reduced accuracy might not be a deal breaker.

    [1]: https://developer.apple.com/documentation/coreml

    • Up until recently I’ve used Apple Photos happily, since it provided a good combination of convenience plus the privacy of on-device recognition. You have a compelling product if you can convince customers you are as reliable as, and more trustworthy than, Apple. You do face the disadvantage of not being the default option on iOS/macOS, but that should be balanced by being available cross-platform on Android, Linux, and Windows.

    • Core ML and TFLite are just tools for running ML models. Generating the models is the hard part, and that is what encryption will make more difficult.


  • To be honest, that wasn't a concern with my question. I think most people on HN understand this aspect. My question was more about how you improve your models when you don't have the same feedback mechanisms as non-privacy preserving apps. Google can look at your photos and see what photos fail and collect the biased statistics. In a privacy preserving version you won't be able to do this. Sure, you can on an internal dataset, but then there are lots of questions about that dataset's bias and if it is representative of the real world. I mean how many people think ImageNet is representative of real world images? A surprising number.

  • As someone else who works on systems like these, I agree training data is the whole problem. However, you can use techniques like homomorphic encryption and gradient pooling to collect training signal from client code while remaining end-to-end encrypted. It's hard, but it's not impossible.
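    To make the "gradient pooling" idea concrete: here is a toy NumPy sketch of pairwise additive masking, the core trick behind secure-aggregation protocols. This is an illustration of the general technique, not ente's (or anyone's) actual protocol; in a real deployment the masks would come from key agreement between clients, with dropout handling on top. Each pair of clients shares a random mask that one adds and the other subtracts, so every individual upload looks like noise to the server, yet the masks cancel in the pooled sum.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_clients, dim = 3, 5
    true_grads = [rng.normal(size=dim) for _ in range(n_clients)]

    # Pairwise masks: client i adds masks[(i, j)], client j subtracts it (i < j).
    masks = {(i, j): rng.normal(size=dim)
             for i in range(n_clients) for j in range(i + 1, n_clients)}

    uploads = []
    for k, g in enumerate(true_grads):
        masked = g.copy()
        for (i, j), m in masks.items():
            if k == i:
                masked += m      # this client adds the shared mask
            elif k == j:
                masked -= m      # its partner subtracts the same mask
        uploads.append(masked)

    # The server only ever sees `uploads`; each one is statistically masked,
    # but summing them cancels every mask and yields the true pooled gradient.
    pooled = sum(uploads)
    print(np.allclose(pooled, sum(true_grads)))  # True
    ```

    The catch, as the replies below note, is the gap between this toy and a practical system: handling clients that drop out mid-round, and the cost of doing any of this under homomorphic encryption rather than plain masking.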

    • Really? Have we had a revolution in homomorphic encryption such that it can be used for anything other than 1-million-times-slower proofs-of-concept?

      I know IBM has released something lately, but given the source..

      Does anyone use HE for the type of ML application you are describing?

So I guess there is more to the question I'm asking.

> Our accuracy will not match that offered by services who index your data on their servers. But there's a trade off between user experience and privacy here,

I think most people here understand that[0]. We are on Hacker News after all and not Reddit or a more general public place. The concern isn't that you are worse. The concern is that your product has to advance and get better over time. That mechanism is unclear and potentially concerning. The answer to this is the answer to how you ensure continued privacy.

You talk about the "pull files/thumbnails for indexing" and this is what is most concerning to me and at the heart of my original question. How are you collecting those photos for _your_ training set? Obviously this isn't just ImageNet (dear god I hope not). Are you creating your own JFT-300M? Where are those photos being sourced from? What's the bias in that dataset? Obviously there are questions about the model too (CNNs and Transformers have different types of biases and see images differently). But that's a bigger question of training methods, and that gets complicated and nuanced fast. Obviously we know there is going to be some distillation going on.
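For readers unfamiliar with the distillation mentioned above: a small on-device "student" model is trained to match the softened output distribution of a large server-side "teacher". A minimal NumPy sketch of the standard distillation loss (the temperature value and logits here are purely illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """Cross-entropy of the student against the teacher's softened targets."""
    p_teacher = softmax(teacher_logits, T)
    log_q_student = np.log(softmax(student_logits, T) + 1e-12)
    return -(p_teacher * log_q_student).sum(axis=-1).mean()

teacher      = np.array([[5.0, 1.0, -2.0]])
good_student = np.array([[4.0, 0.5, -1.5]])   # roughly agrees with the teacher
bad_student  = np.array([[-2.0, 1.0, 5.0]])   # disagrees with the teacher

print(distillation_loss(teacher, good_student) <
      distillation_loss(teacher, bad_student))  # True
```

The relevance to the dataset question: distillation still needs *some* transfer set of images to run the teacher on, so it shifts the bias question rather than removing it.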

There's a lot of concerns here and questions that won't really get asked of people that aren't pushing privacy based apps. But the biggest question is how you get feedback into your model and improve it. Non-privacy preserving apps are easier in this respect because you know what (real world) examples you're failing on. But privacy preserving methods don't have this feedback mechanism. We know homomorphic encryption isn't there yet and we know there are concerns with federated learning (images can be recreated from gradients). So the question is: how are you going to improve your model in a privacy preserving method?
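The "images can be recreated from gradients" point doesn't even need the full gradient-inversion machinery in the simplest case. Here's a toy NumPy illustration, assuming a single-sample update to one linear layer with bias (not any production training setup): the gradient w.r.t. the weights is an outer product of the output-error and the input, so the server can read the private input straight off the uploaded gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)               # "private" input (e.g. image features)
W = rng.normal(size=(4, 8))
b = np.zeros(4)
target = rng.normal(size=4)

out = W @ x + b
g_out = 2 * (out - target)           # gradient of squared error w.r.t. output

# What a naive federated client would upload:
dW = np.outer(g_out, x)              # dL/dW = g_out ⊗ x
db = g_out                           # dL/db = g_out

# Server-side reconstruction: each row of dW is a scalar multiple of x,
# and db supplies exactly that scalar.
x_reconstructed = dW[0] / db[0]
print(np.allclose(x, x_reconstructed))  # True
```

Deeper networks need iterative attacks (optimizing a dummy input to match the observed gradients), but the leakage principle is the same, which is why plain federated learning isn't automatically privacy preserving.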

[0] I think people also understand that on device NNs are going to be worse than server side NNs since there's a huge difference in the number of parameters and throughput between these and phone hardware can only do so much.

  • > how are you going to improve your model in a privacy preserving method

    We will not improve our models with the help of user data, and will rely only on pre-trained models that are publicly available.