Since Meta is a big AI investor, I suggest you skip WhatsApp altogether.
You think they (plan to) decrypt messages and then upload them again in plain text to a server?
On-device processing is neither as objectionable, nor could an on-device model be very large anyway.
I don't use WhatsApp myself because of who runs it, and there are plenty of better options out there, so I certainly agree with the sentiment of steering clear. But this claim does seem pretty far out there.
They don't plan it because they have no use for it. They only care about the metadata: when you talked to this person, or to your wife; at what time of day; was it at night; how long was the message; was a product mentioned in the message; was the message about sports; etc.
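To make that concrete, the kind of record being described might look something like the sketch below. All field names are invented for illustration, not Meta's actual schema:

    # Hypothetical sketch of per-message metadata a service operator could
    # see even when the message body itself stays E2E encrypted.
    # All field names are invented for illustration.
    message_metadata = {
        "sender": "user-3141",
        "recipient": "user-2718",             # who you talked to
        "timestamp": "2024-03-14T23:42:07Z",  # at what time of day / at night
        "message_length_bytes": 184,          # how long the message was
        "media_type": "text",
    }

    # Note: flags like "mentions a product" or "is about sports" are not pure
    # metadata; deriving them would require classifying the content on-device
    # before encryption, which is exactly the leap being debated here.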
That's a completely reasonable boundary. Privacy and consent are critical, especially when sharing personal messages or conversations. It's fair to expect that your interactions remain private unless you've explicitly agreed otherwise. If you'd like, you can communicate your stance clearly to others in advance, ensuring they're aware of your boundaries regarding the use of your messages with AI tools or other external resources.
I understand why one would think it's funny to feed the parent comment into an LLM, but please at least label it when you echo such output on the site.
I don't think their main concern was the privacy aspect.
What do you think their concern was? I can't see any other issues someone might have.
Where do you draw the line? LLMs for searching, BM25 for searching, exact match only, or no processing at all (forbid WhatsApp search, make them scroll)?
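For the middle option, here's a minimal sketch of BM25 search over a local message history, using the rank_bm25 package; the messages and query are invented for illustration:

    # pip install rank-bm25
    from rank_bm25 import BM25Okapi

    # A few invented messages standing in for a local chat history.
    messages = [
        "are you coming to the party near Amir's place on saturday?",
        "don't forget to bring the charger back",
        "the match starts at nine, want to watch together?",
    ]

    # BM25 works on tokenized text; naive whitespace tokenization here.
    bm25 = BM25Okapi([m.lower().split() for m in messages])

    query = "party near amir's neighborhood".split()
    # Score every message against the query; highest score is the best match.
    for score, msg in sorted(zip(bm25.get_scores(query), messages), reverse=True):
        print(f"{score:.2f}  {msg}")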
Funny that people freak out about a local LLM while using Facebook products. They're probably the same types who use it to do their work.
> They're probably the same types who use it to do their work.
Citation needed.
It's a local LLM with access to an extraordinary amount of personal data. In the EU, at least, that personal data is supposed to be handled with care. I don't see people freaking out, but simply pointing out the leap of handing it over to ANOTHER company.
Not all Meta products are alike. WA has had E2E encryption for a long time. It's the same protocol as Signal: in fact, it was built for/into WA by Moxie/Signal a while ago.
That doesn't make the metadata private; Meta can use that however they want. But not the contents, nor the images, not even in group chats (as opposed to Telegram, where group chats aren't, or at least weren't, E2E encrypted).
What you say or send on WA is private. Meta cannot see it, nor can governments, your ISP, or your router. Only you and the person or people you sent it to can read it.
It's a d*ck move if they then publicize this. And, as others pointed out, illegal even in many jurisdictions: AFAIK, it is in my country.
Do you think it'd be okay if they used a local LLM, via ollama, and this MCP server?
Personally, I would say that still reeks of being manipulative. I've received messages from a friend which were definitely LLM-generated, and it made me like that person considerably less.
If they use the LLM to search ("when did X tell me about that party somewhere around Y's neighborhood") then I don't think there's any problem.
If they configure it to add a prefix, for instance when answering questions like "when are you free to hang out", having it respond "[AI] according to X's calendar and work schedule, they may be available on the following days" (see the sketch below), I might also consider that somewhat useful. I just wouldn't take it as something they actually said.
If they're using LLMs to reword themselves or because they're not really interested in conversing, that's a definite ick.
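A minimal sketch of that labeling idea, where generated text never goes out unmarked; generate_reply and send_message are hypothetical stand-ins, not part of any real WhatsApp or MCP API:

    AI_PREFIX = "[AI] "

    def generate_reply(question: str) -> str:
        # Hypothetical stand-in for a local LLM call (e.g. via ollama).
        return "according to X's calendar, they may be free on Thursday."

    def send_message(chat_id: str, text: str) -> None:
        # Hypothetical stand-in for the messaging integration.
        print(f"-> {chat_id}: {text}")

    def send_assisted_reply(chat_id: str, question: str) -> None:
        # The draft is always labeled before sending, so the recipient
        # never mistakes it for something the person actually typed.
        send_message(chat_id, AI_PREFIX + generate_reply(question))

    send_assisted_reply("family-chat", "when are you free to hang out?")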
I would personally use such a system in a receive-only mode for adding things to calendars or searching. But I'd also stick to local LLMs, so the context window would probably be too small to get much out of it anyway.
This is actually something I am curious about: if, for example, I use this and am streaming all my contacts' information and messages externally, surely I'm breaking privacy laws in some US states and certainly in the EU.
This seems sketchy to me.
It very much depends on the specifics around use cases, parties, and jurisdictions. In plenty of them, you're allowed to record and keep track of conversations you're taking part in, as is the other party, but publishing those on the internet would be illegal.
Processing them (like compressing them to mp3 files or storing them in cloud storage) is probably legal in most cases.
The potential problem with LLMs is that providers may use your input to train future models.
As of right now, the legal status of AI is very much up in the air. It's looking like AI training will be exempt from things like copyright laws (because how else would you train an LLM without performing the biggest book piracy operation in history?), and if that happens things like personal rights may also fall to the AI training overlords.
I personally don't think using this is illegal. I'm pretty sure 100% of LinkedIn messages are already being passed through AI, as are all WhatsApp business accounts and any similar service. I suppose we'll have to wait for someone to get caught using these tools and the case making it to a high enough court to form jurisprudence.
I hope you never contacted anyone with a business account then.
You should just assume that every single thing you type into an electronic device made after the 90s gets piped into an LLM anyway.
Zuck is already piping it into much worse
I mean, the technology is not the issue. Someone can read your past conversations today and take diligent notes to unearth the same insights an LLM might, if they were so inclined.
This might actually be helpful for people with poor memory or neurodivergent minds, to help surface relevant context to continue their conversation.
Or sales people to help with their customer relationship management.