Comment by HelloUsername
2 months ago
Lumo is powered by open-source large language models (LLMs) which have been optimized by Proton to give you the best answer based on the model most capable of dealing with your request. The models we’re using currently are Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3. These run exclusively on servers Proton controls so your data is never stored on a third-party platform. Lumo’s code is open source, meaning anyone can see it’s secure and does what it claims to. We’re constantly improving Lumo with the latest models that give the best user experience.
Running those small models is usually not a problem for SMEs or homelabs. Serving the full Kimi K2, Qwen3, or DeepSeek V3/R1 under the same Proton conditions would be an interesting offer.
> Lumo’s code is open source
Where's the source code? I couldn't find it yet.
I wonder how this is different from Apple's approach (Private Cloud Compute).
I believe Apple provides guarantees that data access is impossible under most circumstances, creates auditable, cryptographically secure hardware logs, and allows third-party inspection of its facilities to ensure compliance with its own stated design and protocols.
Which independent audit has validated such claims and can attest they are factual?
Apple is still a US company and must adhere to US intelligence covert data access regulations.
But you can't give what you don't have access to.
The Apple private cloud is specifically built so that if it's tampered with, it stops working.
Apple just cosplays privacy, while Proton at least thinks they care.
Nowhere close to Apple [0]. In comparison, Proton's mostly going "trust me bro".
[0] https://xeiaso.net/blog/2025/squandered-holy-grail / https://archive.vn/sveXf
Which means the performance will be noticeably worse than any of the mainstream models.
"The responses are worse, but don't worry, at least the queries are private!" says nobody.
It’s funny how when it’s Apple, everyone is happy to defend even the most incomprehensible decisions with “privacy as a feature”. For everyone else apparently privacy doesn’t count. I think “Donald Trump can’t get your photos” is a pretty good selling point.
> everyone is happy to defend even the most incomprehensible decisions with “privacy as a feature”
Not me. I care about privacy and I know they care about privacy, but what I want to see is that they have a product in the first place before all those other things.
In fact, I more or less knew Apple wouldn't ship a good product when all they talked about was privacy instead of providing any meaningful data about performance. Turns out it's all just vaporware.
So is this aimed at small models only? Are there any advantages to these models compared to what I can run locally on a 16GB VRAM GPU?
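A back-of-envelope way to think about that 16 GB budget: weight memory scales roughly linearly with parameter count and quantization bit width. A minimal sketch (the parameter counts below are the commonly published sizes for two of the models Proton names; it ignores KV cache and activation overhead, which need extra headroom):

```python
def quantized_weight_gb(params_b: float, bits: int) -> float:
    """Approximate weight memory in GB for a model with params_b billion
    parameters stored at the given quantization bit width (weights only)."""
    return params_b * 1e9 * bits / 8 / 1e9

# Published parameter counts for two of the listed models.
for name, params in [("Mistral Small 3", 24), ("OLMo 2 32B", 32)]:
    for bits in (4, 8, 16):
        print(f"{name} @ {bits}-bit: ~{quantized_weight_gb(params, bits):.0f} GB")
```

By this estimate, the ~24–32B models in question fit a 16 GB card only at roughly 4-bit quantization, so the hosted offering mainly buys convenience and higher-precision inference rather than capabilities you couldn't approximate locally.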
Would be nice to have something at the level of Claude 3.5.
Yeah, proper V3/R1/K2/Qwen 235B are the point at which open LLMs become worth using.
Does anyone know if there is a hosting provider for Kimi K2, Qwen3, or DeepSeek V3/R1 in the EU?