Comment by xtracto
18 hours ago
That's why we should strive to use and optimize local LLMs.
Or better yet, we should set up something that allows people to share a part of their local GPU processing (like SETI@home) for a distributed LLM that cannot be censored, and somehow be compensated when it's used for inference.
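Very roughly, I'm imagining pipeline-style sharding: each volunteer serves a contiguous slice of the model's layers, a request hops through the volunteers in order, and each hop earns credit toward compensation. A toy Python sketch of the idea (all names like `Volunteer` and `shard_layers` are hypothetical, not any real project's API):

    # Toy sketch of SETI@home-style distributed inference (hypothetical names).
    # Each volunteer hosts a contiguous slice of the model's layers; a request
    # hops through the volunteers in order and each hop earns credit.
    from dataclasses import dataclass


    @dataclass
    class Volunteer:
        node_id: str
        layers: list          # the layers this volunteer serves
        credits: int = 0      # earned per forward pass, redeemable later

        def forward(self, hidden):
            for layer in self.layers:
                hidden = layer(hidden)
            self.credits += 1  # compensation bookkeeping for this hop
            return hidden


    def shard_layers(layers, volunteer_ids):
        """Split the layers into contiguous chunks, one per volunteer
        (pipeline parallelism: each node serves a consecutive slice)."""
        chunk = -(-len(layers) // len(volunteer_ids))  # ceiling division
        return [
            Volunteer(node_id=v, layers=layers[i * chunk:(i + 1) * chunk])
            for i, v in enumerate(volunteer_ids)
        ]


    def run_inference(nodes, hidden):
        """A request hops through every node's shard in order."""
        for node in nodes:
            hidden = node.forward(hidden)
        return hidden


    if __name__ == "__main__":
        # Stand-in "layers": simple functions instead of real transformer blocks.
        toy_layers = [lambda x, k=k: x + k for k in range(8)]
        nodes = shard_layers(toy_layers, ["alice-gpu", "bob-gpu"])
        print(run_inference(nodes, 0))                  # -> 28 (sum of 0..7)
        print({n.node_id: n.credits for n in nodes})    # who gets compensated

The hard parts a real system would need (verifying that volunteers actually ran their layers honestly, and settling the credits) are left out here on purpose.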
Yeah, we really have to strive not to rely on these corporations, because they absolutely will not do customer support or actually review account closures. The company this article is about, which I assume is Google, also has control over a lot more than just AI.
[flagged]
I don't see any such agreement here, and your comment is very rude toward the author.
I'm not being rude to the parent poster (I'm agreeing with them) or to the person who wrote the article.
I might have been rude to all the people/bots who insist the article's author is lying because it contradicts the AI-everything narrative.