Comment by gmaster1440
8 hours ago
If you're properly bitter-lesson-pilled then why wouldn't better models continue to develop and improve taste and discernment when it comes to design, development, and just better thinking overall?
I think that would imply the creation of AGI (i.e., something as intelligent as or more intelligent than humans), which many consider to be science fiction at this point.
> bitter-lesson-pilled
The "bitter lesson" doesn't imply that AGI is coming, all it says is that letting AIs learn on their own yields better results than directly teaching them things.
They do improve, but the general creativity and sparkle we see with increasing scale comes mostly from scaling up pretraining/parameter-size, so it's quite slow and expensive compared to the speed (and decreasing cost) people have come to take for granted in math/coding in small cheap models. Hence the reaction to GPT-4.5: exactly as much better taste and discernment as it should have had based on scaling laws, yet regarded almost universally as a colossal failure. It was as unpopular as the original GPT-3 was when the paper was released, because people look at the log-esque gains from scaling up 10x or 100x and are disappointed. "Is that all?! What has the Bitter Lesson or scaling done for me lately?"
So, you can expect coding skills to continue to outpace the native LLM taste.
I think we're basically agreeing here. Your point (if I'm reading it right) is that taste and discernment do scale, but the gains come through pretraining/parameter scaling, which is slow and expensive compared to the fast, cheap wins in math/coding from smaller models. So taste is more of a lagging indicator of scale: it improves, but it's the last thing people notice because the benchmarkable stuff races ahead. Which also means taste isn't really a moat, just late to get commoditized.
My point is more that since you can expect taste's commoditization to lag behind for deep fundamental reasons, taste does serve as a moat. Just perhaps a weaker one than you'd naively expect, and one you'll have to keep frantically investing in to stay ahead of the LLMs slowly catching up, as opposed to a permanent lock-in you can lazily, monopolistically coast on indefinitely. (I'm reminded of Neal Stephenson's La Brea tar pit analogy for open source vs proprietary software in _In the Beginning... Was the Command Line_.)
I think the author addresses this in saying that since AI output is statistically plausible by design, it's unlikely to improve in this area. Why do you think AI will get better in this way?
At least in part because some of taste is fashion.
Isn't fashion partly about trends? AI is good at recognizing trends.
Not in the sense you mean. These are trends with concrete origins, not the product of statistical aggregation. They only look that way if you aren't into fashion and don't follow the handful of fashion houses that decide essentially everything.
Most (all?) models are fundamentally a regression toward the mean. Good taste rarely, if ever, resides in the mean.
Regardless of how good the tools get, third-party tooling can never be a product differentiator unless you somehow manage to secure exclusive access. Otherwise, everyone else can and will use the same tools you do. It's more a hedonic treadmill than a moat.