
Comment by bluefirebrand

8 hours ago

> AI keeps getting better and better until it can work around big AI slop code bases

The belief in this is a form of AI psychosis, I think.

Maybe in the future, but there's certainly no evidence of this anytime soon.

> Maybe in the future, but there's certainly no evidence of this anytime soon.

Here's some anecdotal evidence from me: I recently cleaned up multiple GPT-4.x-era vibe-coded projects with the latest Claude model and integrated one of them into a fairly large open source codebase.

This is something AI completely failed at last year.

Maybe you should try something like this, or listen to success stories, before claiming 'certainly no evidence' in the future?

There are untold billions of dollars to be had if you can make this future come to pass. You don't need AGI to make it happen either. You just need to keep making the context windows bigger and keep coming up with updated training data. It's not the outcome I want, but it really does feel within reach. The only limiting factor is going to be token count and cost to process/generate those tokens. But if you don't particularly care about quality, costs are going to have to go up by several orders of magnitude before you start to regret firing your software engineers.
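
To make the cost intuition concrete, here's a rough back-of-envelope sketch; every number in it is an assumption I picked for illustration, not a quoted price:

```python
# All figures below are assumptions for illustration, not real prices.
tokens_per_task = 2_000_000        # assumed context + output for one big agent task
usd_per_million_tokens = 10.0      # assumed blended price per million tokens
tasks_per_engineer_year = 5_000    # assumed tasks an agent runs in place of one engineer

annual_agent_cost = (tokens_per_task / 1_000_000) * usd_per_million_tokens * tasks_per_engineer_year
print(f"assumed agent cost per engineer-year: ${annual_agent_cost:,.0f}")
# -> assumed agent cost per engineer-year: $100,000
# Under these made-up numbers, token prices would indeed have to rise by
# orders of magnitude before the agent costs more than the engineer it replaced.
```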

I don't know what happens in a decade when there are no junior engineers, skilled senior engineers are becoming rare, and the only data left to train LLMs on is 200th-generation slop. But AI slop being qualitatively slop is not enough of an obstacle to prevent that future from coming to pass. And billions of dollars will be "saved" along the way.

No evidence? ChatGPT came out three years ago. You basically just need to hold a ruler up to the curve.

  • I'm no expert, but the skeptic's response I've heard would be to ask:

    What evidence is there that we're not at or close to a plateau of what LLMs are capable of? How do you know the growth rate from 2023 to the present will continue into 2029? E.g., is it more training data? More GPUs? What if we're already reaching the limits of those things?

    • I think we're close to the plateau of what LLMs can do, but they will keep improving. IMHO the results are already showing diminishing returns.

      The (leading) LLMs work by consensus, like Wikipedia, OpenStreetMap, web search engines, or the open-source movement.

      What I mean is, if I ask an LLM to "create a linked list", its understanding of what I want is already close to the expected ideal, just like the Wikipedia article on linked lists, for example (sketched below).

      But the LLMs will continue to improve in breadth and depth of understanding of the world, although technically (in what they CAN do) they have probably already peaked. Similarly, the OSS movement technically peaked in the 90s with the creation of a compiler, an operating system, and a database; that doesn't mean new open source isn't being created.
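
      A minimal sketch of the kind of consensus answer meant above; the Python here is my own illustration, not something from the thread:

      ```python
      # Illustrative "textbook" singly linked list, the consensus answer
      # an LLM converges on for "create a linked list".

      class Node:
          def __init__(self, value, next=None):
              self.value = value   # payload
              self.next = next     # reference to the following node, or None

      class LinkedList:
          def __init__(self):
              self.head = None

          def prepend(self, value):
              # O(1) insert at the front of the list
              self.head = Node(value, self.head)

          def __iter__(self):
              node = self.head
              while node:
                  yield node.value
                  node = node.next

      lst = LinkedList()
      for v in (3, 2, 1):
          lst.prepend(v)
      print(list(lst))  # [1, 2, 3]
      ```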


    • Ultimately, you are describing a fundamental problem with induction, Hume's problem of induction to be specific. How can we know that anything shown empirically in the past will continue to be true? We can't. It's best to investigate mechanistically:

      I don't see why we would assume that we are at a plateau for RL. In many other settings, Go for instance, RL continues to scale until you hit compute limits. Some things are more easily RL'd than others, but ultimately RL largely unlocks new data. We are not yet compute-, energy-, or physical-world-constrained; I think you would start observing clear changes in the world around you before that became a true bottleneck. Regardless, the vast majority of compute is currently used for inference, not training, so the compute overhang is large.

      Assuming that we plateau at {insert current moment} seems wishful, and I've already had this conversation any number of times on this exact forum at every level of capability [3.5, 4, o1, o3, 4.6/5.5, mythos] from Nov 2022 onwards.

    • I'm more curious about how much more capability they can get before the economy collapses.

I have personally had success telling Claude that some AI-written system is too complicated and asking it to rewrite it in a more logical way. This sometimes results in thousands of lines of code being deleted. I give an instruction like that when I see certain red flags, e.g.:

1) the same business logic implemented in two different places, with extra code to sync between them (sketched below)

2) fixing apparently simple bugs results in lots of new code being written

It’s a sign I need to at least temporarily dedicate more effort to overseeing work in that area.
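
As an illustration of red flag 1), here's a minimal sketch; the domain and all names are hypothetical, not from any real codebase:

```python
# Hypothetical red flag: the same business rule implemented twice,
# plus extra code whose only job is to keep the copies in sync.
TAX_RATE = 0.08

def checkout_total(subtotal):
    return subtotal * (1 + TAX_RATE)         # the rule, copy #1

def invoice_total(subtotal):
    return subtotal + subtotal * 0.08        # the rule, copy #2, rate hard-coded

def check_tax_logic_in_sync():
    # the telltale "sync" code that only exists because the rule is duplicated
    assert abs(checkout_total(100.0) - invoice_total(100.0)) < 1e-9

# The rewrite you'd ask Claude for: one implementation, no sync shim.
def total_with_tax(subtotal):
    return subtotal * (1 + TAX_RATE)
```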

I somewhat agree with the AI psychosis framing of the OP. It takes some taste and discipline to avoid letting things dissolve into complete slop.

It's amusing to me that:

* A belief that AI will keep getting better, presented without evidence, does not yield a lot of skepticism around these parts.

* Your comment saying it is wrong to believe AI will keep getting better, also presented without evidence, is downvoted.