Comment by localfirst
2 years ago
This feels awfully similar to Emad and Stability in the beginning, when there were a lot of expectations and hype. Ultimately they could not make enough money to cover the costs. I'd be curious to see what comes out of this, but we are not seeing leaps and bounds with new LLM iterations, so I wonder if there is something else in store.
Interesting, I wish you had elaborated on Emad etc. I'll see if Google yields anything. I think it's too soon to say "we're not seeing leaps and bounds with new LLMs". We are in fact seeing fairly strong leaps just this year with respect to quality, speed, multi-modality, and robotics. Reportedly OpenAI has started their training run for GPT-5 as well. I think we'd have to wait until this time next year before proclaiming "no progress".