
Comment by simonw

1 day ago

The way I've been handling the deafening hype is to focus exclusively on what the models that we have right now can do.

You'll note I don't mention AGI or future model releases in my annual roundup at all. The closest I get to that is expressing doubt that the METR chart will continue at the same rate.

If you focus exclusively on what actually works, the LLM space is a whole lot more interesting and less frustrating.

> focus exclusively on what the models that we have right now can do

I'm just a casual user, but I've been doing the same and have noticed a sharp improvement in the models we have now vs a year ago. I have an OpenAI Business subscription through work, I signed up for Gemini at home after Gemini 3, and I run local models on my GPU.

I just ask them various questions where I know the answer well, or where I can easily verify it: rewriting some code, factual questions, etc. I compare and contrast by asking the same question to different models.
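A minimal sketch of that compare-and-verify loop. Here `ask_model` is a hypothetical stand-in with canned answers purely for illustration; in practice it would call whatever chat endpoint (cloud or local) each model is behind:

```python
def ask_model(model: str, question: str) -> str:
    # Hypothetical stub: a real version would call the model's API.
    canned = {
        "cloud-model": "The capital of Australia is Canberra.",
        "local-model": "The capital of Australia is Sydney.",
    }
    return canned[model]

def compare(models: list[str], question: str, must_contain: str) -> dict[str, bool]:
    """Ask each model the same question and check whether the
    known-correct answer appears in its response."""
    return {m: must_contain.lower() in ask_model(m, question).lower()
            for m in models}

results = compare(
    ["cloud-model", "local-model"],
    "What is the capital of Australia?",
    must_contain="Canberra",
)
# With these canned answers: {'cloud-model': True, 'local-model': False}
```

The point is only the shape of the loop: same question to each model, checked against an answer you already know.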

AGI? Hell no. Very useful for some things? Hell yes.