Comment by Etheryte

18 hours ago

How large do you wager your moat to be? Confidential computing is something all major cloud providers either have or are about to have, and from there it's a very small step to offer LLMs under the same umbrella. First-mover advantage is of course considerable, but I can't help but feel that this market will very quickly be swallowed by the hyperscalers.

I suspect Nvidia has done a lot of the heavy lifting to make this work, but it's still not trivial to wire the CPU and GPU confidential compute together.
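To make the "wiring" point concrete: it isn't enough for the CPU TEE and the GPU to each produce a valid attestation report; the verifier also has to check that the two reports are bound to the same session (same fresh nonce, GPU report digest committed to inside the CPU report), or an attacker could splice in a report from a different machine. The sketch below is a toy illustration of that binding check only: the HMAC keys, report formats, and function names are all hypothetical stand-ins for real vendor certificate chains (e.g. AMD SEV-SNP or Intel TDX on the CPU side, NVIDIA's GPU attestation on the other), not any actual API.

```python
# Toy sketch of binding a CPU TEE report to a GPU attestation report.
# All keys and report formats here are made up for illustration; real
# verification walks vendor certificate chains instead of shared HMAC keys.
import hashlib
import hmac
import os

CPU_KEY = b"toy-cpu-signing-key"  # stand-in for the CPU vendor's cert chain
GPU_KEY = b"toy-gpu-signing-key"  # stand-in for the GPU vendor's cert chain

def sign(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def make_gpu_report(nonce: bytes) -> dict:
    payload = b"gpu-measurement|" + nonce
    return {"payload": payload, "sig": sign(GPU_KEY, payload)}

def make_cpu_report(nonce: bytes, gpu_report: dict) -> dict:
    # The CPU report commits to the GPU report's digest: this is the binding.
    gpu_digest = hashlib.sha256(gpu_report["payload"]).digest()
    payload = b"cpu-measurement|" + nonce + b"|" + gpu_digest
    return {"payload": payload, "sig": sign(CPU_KEY, payload)}

def verify(nonce: bytes, cpu_report: dict, gpu_report: dict) -> bool:
    # 1. Each report must be individually authentic.
    if not hmac.compare_digest(sign(CPU_KEY, cpu_report["payload"]), cpu_report["sig"]):
        return False
    if not hmac.compare_digest(sign(GPU_KEY, gpu_report["payload"]), gpu_report["sig"]):
        return False
    # 2. Both must carry the verifier's fresh nonce (replay protection).
    if nonce not in cpu_report["payload"] or nonce not in gpu_report["payload"]:
        return False
    # 3. The CPU report must commit to exactly this GPU report.
    gpu_digest = hashlib.sha256(gpu_report["payload"]).digest()
    return gpu_digest in cpu_report["payload"]

nonce = os.urandom(16)
gpu = make_gpu_report(nonce)
cpu = make_cpu_report(nonce, gpu)
assert verify(nonce, cpu, gpu)
# A GPU report from a different session must fail the binding check:
assert not verify(nonce, cpu, make_gpu_report(os.urandom(16)))
```

Step 3 is the part that's easy to get wrong in practice: skip it, and both reports can be "valid" while describing two unrelated devices.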

Cloud providers aren't going to care too much about this.

I have worked for many enterprise companies, e.g. banks, that are trialling AI, and none of them have any use for something like this. The entire foundation of the IT industry is based on trusting the privacy and security policies of Azure, AWS and GCP, and in the decades they've been around I haven't heard of a single example of them breaking that trust.

The proposition here is to tell a company that they can trust Azure with their banking websites, identity services and data engineering workloads, but not with their model services. It just doesn't make any sense. Instead I should trust a YC startup that statistically is going to be gone in a year and will likely have its own unique set of security and privacy issues.

There's also the issue that smaller open-source models, e.g. DeepSeek R1, lag far behind the bigger ones, so you're giving me an unnecessary privacy attestation at the expense of a model that would give me far better accuracy and performance.

Being gobbled by the hyperscalers may well be the plan. Reasonable bet.

  • GCP has confidential VMs with H100 GPUs; I'm not sure if Google would be interested. And they get huge discounts buying GPUs in bulk. The trade-off between cost and privacy is obvious for most users, imo.

This. Big tech providers already offer confidential inference today.

Confidential computing as a technology will become (and should be) commoditized, so the value add comes down to security and UX. We don't want to be a confidential computing company; we want to use the right tool for the job of building private and verifiable AI. If that becomes FHE in a few years, then we will use that. We are starting with easy-to-use inference, but our goal is for any AI application to be provably private.