Comment by _dain_
6 days ago
remember in 2022 when we "hit a wall"? everyone said that back then. turned out we didn't.
and in 2023 and 2024 and january 2025 and ...
all those "walls" collapsed like paper. they were phantoms; people literally mistook the gaps between releases for permanent flatlines.
money obviously isn't an issue here; VCs are pouring in billions upon billions. they're building whole new data centres and whole fucking power plants for these things, so electricity and compute aren't limits. neither is data, since the models increasingly get better through self-play.
>fundamentally they're about as good as they're ever going to get
one trillion percent cope and denial
The difference in quality between model versions has slowed down imo. I know the benchmarks don't say that, but as a person who uses LLMs every day, the difference between Claude 3.5 and the cutting edge today is not very large at all, and that model came out a year ago. The jumps are getting smaller, I think, unless the stuff in house is just way ahead of what is public at the moment.
Yet we are still at the “treat it like a junior” level