Comment by user3939382

6 days ago

When you have 100+ tables and 100k+ LOC, you're incapable of holding the full context needed to write features without bugs, which is why we have tests. LLMs can hold maybe 5% of the context you can:

Full context > human context capacity > LLM context capacity.

We should all be able to agree on this and it should settle the debates around the efficacy of vibe coding.

I agree, and we should be sceptical in ways we really aren't right now. But I also think it's interesting to figure out when "vibe-like" coding works and how it can be made more useful.

It doesn't, as you say, work for large and complex contexts. But it can work really well for automating parts of your workflow that you otherwise wouldn't have bothered to automate. And I wonder whether there are more cases where the context can be narrowed down enough for it to be useful.

We've gone from tiny context windows to 1 million tokens in a couple of years. At this rate, LLMs will exceed human context capacity and eventually approach full context.