Comment by bwfan123
7 hours ago
> consumer grade local models are getting good enough for local inference
I am waiting for that. Perhaps a Taalas-style high-performance custom-hardware LLM coding engine paired with an open-source coding agent, priced like a high-end graphics card, which would pay off over time. It would be a replay of the IBM-mainframe-to-PC transition of a previous era.
> I am waiting for that
Same, and I think we're close. "The original 1984 128k Mac model was $2,495, and the 1985 512k Mac was $2,795" [1]. That's $8 to 9 thousand today [2]. About the price of an M3 Ultra Mac Studio with a 32-core CPU, 80-core GPU, and 256 GB of RAM.
[1] https://blog.codinghorror.com/a-lesson-in-apple-economics/
[2] https://www.bls.gov/data/inflation_calculator.htm
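The inflation adjustment above can be sketched as a simple CPI ratio. The CPI-U figures below are approximate annual averages I'm plugging in for illustration, not exact BLS calculator output:

```python
# Rough inflation adjustment for the original Mac prices quoted above.
# CPI-U values are approximate annual averages (assumptions, not exact BLS data):
CPI_1984 = 103.9
CPI_1985 = 107.6
CPI_2024 = 313.7

def adjust(price, cpi_then, cpi_now=CPI_2024):
    """Scale a historical price to today's dollars by the CPI ratio."""
    return price * cpi_now / cpi_then

print(round(adjust(2495, CPI_1984)))  # 1984 128k Mac, roughly $7,500 today
print(round(adjust(2795, CPI_1985)))  # 1985 512k Mac, roughly $8,100 today
```

With these assumed CPI values the adjusted prices land in the $7.5k-$8.1k range, in the same ballpark as the "$8 to 9 thousand" figure (the BLS calculator uses monthly CPI, so its numbers differ slightly).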
The maxed-out 512 GB RAM Mac Studio is no longer available from Apple and is now pushing $20 thousand on the secondary market. And we might not even see a new Mac Studio release from Apple before October.