Comment by only-one1701
7 days ago
Increasingly I’m realizing that in most cases there is a SIGNIFICANT difference between how useful AI is on greenfield projects vs. how useful it is on brownfield projects. For the former: pretty good! For the latter: often worse than useless.
It’s also interesting to see how quickly the greenfield progress rate slows down as the projects grow.
I skimmed the vibecoding subreddits for a while. It was common to see frustrations about how coding tools (Cursor, Copilot, etc.) were great last month but terrible now. The pattern repeats every month, though: when you look closer, it's usually people who were thrilled when their projects were small but are now frustrated that they've grown.
The real issue is context size. You kinda need to know what you are doing in order to construct the project in pieces, and know what to tell the LLM when you spin up a new instance with fresh context to work on a single subsection. It's unwieldy and inefficient, and the model inevitably gets confused when it can't effectively look at the whole code base.
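For concreteness, here's a minimal sketch of that fresh-instance-per-subsection workflow. Everything in it is hypothetical: the file paths, the hand-maintained interface summary, and the call_llm stub standing in for whatever chat client you actually use.

    # Hypothetical sketch: work on one module at a time with fresh context.
    from pathlib import Path

    def build_prompt(interface_summary: str, target: Path, task: str) -> str:
        """Pair a hand-written summary of the rest of the project with the
        full source of the single file this instance is allowed to touch."""
        return (
            "You are editing one module of a larger project.\n\n"
            f"Summary of the project's other interfaces:\n{interface_summary}\n\n"
            f"Current contents of {target.name}:\n{target.read_text()}\n\n"
            f"Task: {task}\n"
            "Only modify this module; treat the summarized interfaces as fixed."
        )

    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # stand-in for a real chat-completion client

    summary = Path("docs/interfaces.md").read_text()  # maintained by hand
    print(call_llm(build_prompt(summary, Path("src/billing.py"), "add retry logic")))

The point is that the human, not the model, is doing the decomposition and keeping the interface summary honest.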
Gemini 2.5 is much better in this regard: it can produce decent output up to around 100k tokens, whereas Claude 3.7 starts to choke around 32k. Long term, it remains to be seen whether this will still be an issue. If models can get to 5M context and perform the way current models do at 5k, it would be a total game changer.
I think there's a similar analogy here for products in the AI era.
Bolting AI onto existing products probably doesn't make sense. AI is going to produce an entirely new set of products with AI-first creation modalities.
You don't need AI in Photoshop / Gimp / Krita to manipulate images. You need a brand new AI-first creation tool that uses your mouse inputs like magic to create images. Image creation looks nothing like it did in the past.
You don't need Figma to design a webpage. You need an AI-first tool that creates the output - Lovable, V0, etc. are becoming that.
You don't need AI in your IDE. Your IDE needs to be built around AI. And perhaps eventually even programming languages and libraries themselves need AI annotations or ASTs.
You don't need AI in Docs / Gmail / Sheets. You're going to be creating documents by describing them (maybe pasting things in). "My presentation has these ideas, figures, and facts" is much different from building and editing the structure from scratch.
There is so much new stuff to build, and the old tools are all going to die.
I'd be shocked if anyone is using Gimp, Blender, Photoshop, Premiere, PowerPoint, etc. in ten years. These are all going to be reinvented. The only way these products themselves survive is if they undergo tectonic shifts in development and an eventual complete rewrite.
Just for the record, Photoshop's first generative 'AI' feature, Content Aware Fill, is 15 years old.
That's a long time for Adobe not to have figured out what you are saying.
Photoshop is unapproachable to the 99%.
A faster GPT-4o will kill Photoshop for good.
I've been thinking about this a lot and agree. I think the UI will change drastically, maybe making voice central: you just describe what you want done. When language, image and voice models can run locally, things will get crazy.
Oh, I find almost the exact opposite.
On greenfield projects there are simply too many options for it to pursue. It will take one approach in one place, then switch to another.
On a brownfield project, you can give it some reference code and tell it about places to look for patterns and it will understand them.
My experience on brownfield projects is the opposite.
I find that feeding in a bunch of context can help you refactor, add tests to a low coverage application pretty quickly, etc in brownfield apps.
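A rough sketch of what that looks like in practice; the file names and the ask_model stub are hypothetical placeholders, not any particular tool's API:

    # Hypothetical sketch: bundle a low-coverage module plus its neighbors
    # into one prompt and ask a model to draft tests in the house style.
    from pathlib import Path

    def ask_model(prompt: str) -> str:
        raise NotImplementedError  # stand-in for a real LLM client

    context_files = ["src/invoice.py", "src/tax_rules.py", "tests/test_orders.py"]
    context = "\n\n".join(f"### {p}\n{Path(p).read_text()}" for p in context_files)

    prompt = (
        "These modules come from an existing application:\n\n"
        f"{context}\n\n"
        "Write pytest unit tests for invoice.py, following the conventions "
        "already visible in tests/test_orders.py."
    )
    print(ask_model(prompt))

Including an existing test file in the context is what keeps the generated tests consistent with the codebase's patterns instead of generic boilerplate.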
And greenfield turns into brownfield pretty quickly.
Right, but AI could then change the ratio of greenfield vs brownfield ("I'll be faster if I rewrite this part from scratch").
I struggle to wrap my head around how this would work (and how AI can be used to maintain and refine software in general). Brownfield code got brown by being useful, solving a real problem, and doing it well enough to be worth maintaining. So the AI approach is to throw away the code that's proved its usefulness? I just don't get it.