Comment by ssyhape

6 hours ago

I like the mainframe comparison but isn't there a key difference? Mainframes died because hardware got cheap -- that's predictable. LLM efficiency improving enough to run locally needs algorithmic breakthroughs, which... aren't predictable. My gut says we'll end up with a split. Stuff where latency matters (copilot, local agents) moves to edge once models actually fit on a laptop. But training and big context windows stay in the cloud because that's where the data lives. One thing I keep going back and forth on: is MoE "better math" or just "better engineering"? Feels like that distinction matters a lot for where this all goes.

MoE feels a lot more like engineering to me. You're routing around the problem rather than actually solving it. The real math gains are things like quantization schemes that change how information is actually represented. Whether that distinction matters long term will probably depend on whether we hit a capability wall first or an efficiency ceiling first.
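
To make the distinction concrete, here's a toy sketch (my own illustration, not any real model's implementation): the MoE gate just picks which experts run per token without changing what any expert computes, while int8 quantization literally changes how each number is stored.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- the "engineering" side: top-k MoE routing (toy gate) ---
# The gate doesn't change what any expert computes; it decides
# which experts see which token -- routing around the compute cost.
def top_k_route(gate_logits, k=2):
    # indices of the k highest-scoring experts per token
    top = np.argsort(gate_logits, axis=-1)[:, -k:]
    # softmax over only the selected experts' logits
    sel = np.take_along_axis(gate_logits, top, axis=-1)
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return top, w  # expert ids and mixture weights per token

# --- the "math" side: symmetric int8 quantization ---
# This changes the representation itself: every weight becomes
# round(x / scale), stored in 8 bits instead of 32.
def quantize_int8(x):
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# routing: 4 tokens, 8 experts, each token uses only 2 experts
ids, weights = top_k_route(rng.normal(size=(4, 8)), k=2)

# quantization: reconstruction error is bounded by scale / 2
w = rng.normal(size=(16, 16)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
```

The contrast shows up in the code: delete the router and every expert still works, just on more tokens; delete the scale factor and the quantized weights are meaningless.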