GaggiX 20 days ago

> local inference hardware (NPU on Arm SoC).

Okay, the battle is already lost from the beginning.
walterbell 20 days ago
There are alternatives to NVIDIA-maxing with brute force. See the Chinese paper on DeepSeek V3, comparable to recent GPT and Claude models but trained with 90% fewer resources. Research on efficient inference continues.
https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSee...