Comment by walterbell
20 days ago
With tiny amounts of forum text, obfuscation can be done locally with open models and local inference hardware (NPU on Arm SoC). Zero dollars sent to OpenAI, NVIDIA, AMD or GPU clouds.
What specifically are you suggesting? Is this a project that already exists or a theory of yours?
Markov chains are ancient in AI-years, and don't need a GPU.
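To illustrate that point: a word-level Markov chain is just a lookup table built from the input text, so it runs on any CPU with no special hardware. A minimal sketch (the function names and `order` parameter here are illustrative, not from any particular library):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each state (a tuple of `order` words) to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain[state].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain to produce new text in the source's style."""
    rng = random.Random(seed)
    order = len(next(iter(chain)))
    out = list(rng.choice(list(chain.keys())))
    while len(out) < length:
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this state only appeared at the end of the text
        out.append(rng.choice(followers))
    return " ".join(out)
```

Train it on a forum corpus and it will emit plausible-looking word salad in milliseconds; a toy like this obviously won't fool a modern classifier the way an LLM rewrite might, which is the trade-off being debated here.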
> local inference hardware (NPU on Arm SoC).
Okay the battle is already lost from the beginning.
There are alternatives to NVIDIAmaxing with brute force. See the DeepSeek-V3 paper: a Chinese model comparable to recent GPT and Claude releases, trained with roughly 90% fewer resources. Research on efficient inference continues.
https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSee...