Comment by maplethorpe
10 hours ago
I'm waiting for Anthropic to realise they can just set a few thousand agents loose to do just that, and monopolize the entire software market overnight. I'm not sure why they haven't done this yet.
You jest, but it's a good question.
When people talk about the 'plateau of ability' agents are widely expected to reach at some point, I suspect a lot of it will boil down to skyrocketing costs and plummeting accuracy once the number of agents involved passes a certain point. This seems to me like a much harder limit than context windows or model sizes.
Things like Gas Town are exploring this in what you might call a reckless way; I'm sure there are plenty of more careful experiments being conducted.
What I think the ultimate measure of this new tech will be is: how simple a question can a human put to an LLM group, how complex a result can they get back, and how much will they have to pay for it? It seems obvious to me there is a significant plateau somewhere; it's just a question of exactly where. Things will probably be in flux for a few years before we have anything close to a good answer, and it will probably vary widely between different use cases.
Because a lot of valuable software is the implicit / organizational / human domain knowledge... not the trillions of lines of code LLMs all scraped and trained on.
There is a lot of software that is just code, though, especially at the foundational level.
I guess the thing is - we've always had open source, frameworks, libraries, whatever for all that though, haven't we?
So we can glue that together a bit faster, great.
What if we also stop producing new open source, frameworks, libraries, etc.?
What about stories like Tailwind?