Comment by whoevercares
1 day ago
Absolutely. LLM inference is still greenfield territory: things like overlap scheduling and JIT-compiled CUDA kernels are very recent. We’re just getting started optimizing for modern LLM architectures, so cost/performance will keep improving fast.
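(Not part of the original comment: a minimal sketch of what a JIT-compiled CUDA kernel looks like in practice, assuming CuPy's NVRTC-backed `RawKernel`; the kernel name and sizes here are illustrative, not taken from any particular inference engine.)

```python
# Minimal sketch: a CUDA kernel compiled at runtime (JIT via NVRTC) using CuPy.
# Assumes a CUDA-capable GPU and an installed cupy build matching the local CUDA version.
import cupy as cp

# The kernel source is plain CUDA C; NVRTC compiles it on first launch and caches it.
add_kernel = cp.RawKernel(r'''
extern "C" __global__
void add(const float* x, const float* y, float* out, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) {
        out[i] = x[i] + y[i];
    }
}
''', 'add')

n = 1 << 20
x = cp.random.rand(n, dtype=cp.float32)
y = cp.random.rand(n, dtype=cp.float32)
out = cp.empty_like(x)

threads = 256
blocks = (n + threads - 1) // threads
add_kernel((blocks,), (threads,), (x, y, out, n))  # launch: (grid, block, args)

assert cp.allclose(out, x + y)
```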