Made me wonder if there's a live-streaming equivalent for blogging... some platform that both ensures the reader knows the blogger is a person, and promotes a parasocial relationship.
There's live-coding, so it's not totally a crazy idea.
Your comment immediately made me think about the extreme opposite: a Davy Force-like, Infochammel-style livestream of a never-ending AI generated Ted Talk, offering delectable morsels of tech startup wisdom, but is ultimately zero calorie.
As a way to get my feet wet vibe coding, I made https://seeitwritten.com with that idea: that by capturing how you write, with all its fits and starts, you can show that a human wrote it. So, a sort of recorded live-streaming. But I'm thinking that a sufficiently cute agent could be prompted to write something and rewrite it in a convincing manner. I'm not so sure about that, though, since their corpus is completed text rather than text in action.
In 2020, I wrote a book and livestreamed the writing of it. It felt more raw and real at the time, but I guess it's probably more provably human-vs-AI these days.
Not sure how to make that a platform, though. When I wrote, I explicitly put everything directly into the book unedited, whereas for many people, editing is probably at least half, if not more, of the time they spend writing.
I think Blank's story is about a guy who missed a $20B market shift in defense VC, not a guy who failed to adopt AI.
He's not saying "add AI to your product" or "use AI or die" but more that AI has shifted institutional assumptions about tech stacks, defensibility and fundability. The bottleneck moved up the stack from engineering to judgment, insight and design.
Chris lost because he was heads down building while $20B in defense VC was flowing into his exact problem space and he didn't build the boat to capture that wave.
Well of course it is, most startups are dead on arrival.
The big pinch of salt I throw in with advice like this though is that startup failure rate hasn't dramatically shifted despite two decades of lean startup methodology, accelerators, and an entire cottage industry of startup advice. It's never the fault of the framework, mind you.
If anything the failure rate has probably increased with more capital and founders chasing after the same opportunities. Being a startup founder gained prestige and became the default thing to do after college for certain types of people who before the GFC would have ended up in finance.
Isn't this a problem of being able to see whatever we want in the data? For example, maybe more startups than ever are created partly due to access to this info: volume increases but the rate doesn't. Or maybe the magnitude of success increases. Surely this is true, but it's largely attributable to the internet overall. But maybe methodologies are indeed a factor.
Edit: Tobi from Shopify has a related insight. His north-star metric is user churn. It sounds crazy on its face, and he's known for that. But increasing churn means you've increased top-line exposure to more would-be entrepreneurs. Not all of them will succeed, but Shopify's mission is to create more entrepreneurs: grow the pie. A focus on increasing conversion tends to have a narrowing effect.
Sure, but if the advice works, you'd see the failure rate drop over time. It would work like medicine: we have more people than ever before, but almost no one dies of polio anymore. That's not what we see, though; the failure rate is basically the same.
Interestingly, this was always true, it's just made more obvious when it takes less time to bring a product to a potential customer.
Launching a product was never the finish line; it was always the start line. But technical founders could trick themselves into thinking that building a product was building a business.
The same "Lean Startup" rules apply. Build something and get it in front of real people who will pay you for the thing. If they won't, back to the drawing board.
The only real "shift" I've seen is that most startups don't actually need VC at all. That's a great thing.
> Founders who started pre-2025 typically have built a technical stack optimized for a world where software development was bespoke and expensive.
Of all the things that AI has changed, tech stacks aren't one of them. The bots will gladly write Typescript, Java, Python, Rust, what have you. They could not give less of a shit.
Build vs. buy is an eternal question in enterprises. I remember many in-house data teams trying to build tools for "digital transformation" and cloud migration about 10 years ago. The challenge was, building those tools was more expensive than those enterprises could budget for (IT as cost center), so a startup like Snowflake would easily outcompete in-house solutions with their custom, cloud-based tech stack that was necessarily complex because it needed to serve the needs of thousands of customers.
If he's right, the build vs. buy equation has shifted toward build, at least as far as enterprise software is concerned. IT is still a cost center, but in theory an internal team can now handle more requests for custom tools without looking to outside vendors. Essentially, the cost of building in-house might be collapsing, and therefore enterprise software startups will be serving fewer customers (each paying more, because if solving their problem were cheap, they'd do it themselves).
If you had to build a stack for dozens of customers paying huge amounts of money, how would that stack differ from the stack you'd build to serve thousands of customers? Certainly it wouldn't need to be as scalable! And that's probably what he's getting at. I think what you'd do instead, to capture those higher price point customers, is solve their problems more specifically, in a higher value manner.
Many companies already do this, investing far more in field engineers than they do in their tech stack, since customization is essential.
"You better be doing something in AI" applies to Startups - businesses that are expected to spend every penny as quickly as possible, to meet the metrics needed to raise the next (bigger) round of funding.
It is also not the same as, "If you want to be a profitable company...". For that you need to somehow make more money than you are spending.
> For that you need to somehow make more money than you are spending.
I've had this idea of 'business as reducing entropy' floating around in my head for a while. It's a neat way to think about the value a business offers to buyers; a washing machine manufacturer is selling reduced time to reduced entropy (clean clothes), spreadsheet software is selling reduced time to understanding (information from tabulated data), and so on.
From that perspective, a lot of AI-driven development is failing.
We're still in the phase of 'how do we get order out of semi-average chaos?' for LLMs. For ML we're largely past that point.
I've been using this framing as a means to guide me toward 'what is actually useful, what might someone actually buy'. I don't have my own business at this point, but it's still fun to think about off and on.
I think this is application-dependent. LLMs are quite good for brainstorming; even if they are arguably not creative, at least they draw from a lot of information that is already out there, which saves me time in researching and learning.
Just stop thinking of software products. There are a million businesses offering solid value propositions that need a lot of software to run, but software isn't the product.
Note the mention of "systems of record" being unsuitable for the present level of AI. The real question is whether the costs of AI mistakes and hallucinations can be dumped on some external party who can't impose costs on you. If not, there's a problem.
> The bottleneck is no longer engineering. It’s moving up the stack to judgment, customer insight for desired outcomes and distribution.
My posit is this: engineering was never the bottleneck, or at least hasn't been for 10 years now. Frameworks and best practices are pretty well known at this point. AI is simply exposing this reality to engineers' faces.
Proof point: at most publicly traded SaaS-first businesses, S&M spend equals R&D spend, if it doesn't dwarf it. You're going to see this get even more lopsided going forward.
Building SaaS businesses has become a whole lot less capital intensive. Solo founders can go much further than they've been able to previously. New startups probably don't need funding anymore.
This article resonated with me, especially since I've noticed the fundraising hoopla in my circle has dramatically dropped. Either investors are tightening their belts, or founder-investor fit has crossed into the realm of disillusionment.
"a pricing model based on seats, a product roadmap built around features rather than outcomes [is outdated]"
I disagree with this.
On pricing, I get that agents and tokens can scale in a way that's unrelated to # of users. But for much SaaS software, AI remains helpful to a human and the human remains the receiver of value. Seat-based pricing is easy to understand and you can always layer in token/agent costs thresholds.
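As a sketch of the hybrid model described above (seat-based base pricing with token thresholds layered in), something like the following — all prices and thresholds here are hypothetical, not from any real vendor:

```python
def monthly_bill(seats, tokens_used,
                 seat_price=49.0,            # hypothetical per-seat price
                 included_tokens=1_000_000,  # tokens bundled with each seat
                 overage_per_million=8.0):   # hypothetical overage rate
    """Seat-based base fee, with metered token overage layered on top."""
    base = seats * seat_price
    included = seats * included_tokens
    overage_tokens = max(0, tokens_used - included)
    overage = overage_tokens / 1_000_000 * overage_per_million
    return base + overage

# 10 seats, usage within the bundled allowance: pure seat pricing
print(monthly_bill(10, 5_000_000))   # 490.0
# 10 seats, 2M tokens over the allowance: seats plus metered overage
print(monthly_bill(10, 12_000_000))  # 506.0
```

The appeal is that the bill stays easy to understand (it's still "per seat") while agent-driven token costs can't run away uncapped.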
On features vs. outcomes, the latter is hard to define and measure in many industries. In marketing SaaS, which I know well, you can't often tell what outcome to expect. You have to try a lot of ideas and some will hit. No way a SaaS vendor can guarantee that.
The argument against seat pricing is that companies will employ fewer people in general, so a 45-person company paying for 10 seats becomes a 7-person company paying for one seat.
Not sure I agree with this train of thought, but a SaaS CEO made this exact argument to me last week.
I’d add that token pricing doesn’t work for anyone but the frontier models. Everything else will be commodified. So Opus can charge us top prices per token until a lower (or local) model hits parity and then price goes to zero.
The author didn't spend more than, maybe, 30 seconds thinking this through? Information I could've gotten in 3 seconds by opening a screen and looking at a line item, I now have to extract by writing a paragraph to an AI agent (and cross my fingers that nothing I said was ambiguous or misunderstood). And that's supposedly an upgrade?
I read it less as “every startup now needs an AI feature” and more as “your assumptions expire faster than they used to.” That part feels true even if the examples are a bit overstated imo...
It’s easier to view it in terms of DCF - the value of a cash flow generating asset = present value of expected cash flows discounted back at a risk discount adjusted rate. In other words what you’ve invested into your existing assets is irrelevant - the cash flows generated by them and the growth assets through future investment, is what matters.
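In symbols, the standard DCF identity the comment appeals to looks like this (a sketch; $r$ is the risk-adjusted discount rate):

```latex
% Present value of an asset = expected future cash flows,
% discounted back at a risk-adjusted rate r:
PV = \sum_{t=1}^{N} \frac{\mathbb{E}[CF_t]}{(1 + r)^t}
% Note that sunk investment in existing assets appears nowhere:
% only the future cash flows CF_t (and their risk) matter.
```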
One assumption that's also changed: small teams no longer need dedicated QA. Tools like Autonoma (open source, getautonoma.com) use AI agents to generate and run E2E tests in real browsers, and tests self-maintain as your app evolves. One of the things making the solo-founder path more viable.
Seems like the headline should have been "is now dead on arrival". As currently written, it fails to convey the temporal aspect that is the focus of the blog post.
It also fails to convey that he's actually only talking about startups that were created 2+ years ago, rather than the many AI startups founded in the last 2 years.
> Now, before you build a physical prototype, you can simulate more design variants, create digital twins, and stress-test assumptions earlier and much cheaper than before.
Hahahahahahahaha no you can't. The rise of LLMs has done little to nothing in this area because it's very much compute-limited. Digital twins and other ML-based strategies predate ChatGPT by a long shot. There are definitely places in hardware design where LLMs and agentic workflows will help, but that's largely because the existing tooling is utter garbage, and now the industry has a fire under its ass to make things automatable so they can build their own agents.
I think this is a lot of fluff supported by a weak anecdote.
Chris' company's assumptions are no longer true, but that doesn't apply to everyone's startup. This is mostly a Chris problem.
No, not every product can just be a chat window like in the silly little screenshot.
If the author actually wrote software they'd realize that, no, AI isn't speeding up development by any more than a modest amount. It's great that we have it and it's removing tedium but it has replaced zero engineers at my company or at any other company of anyone else I know.
And no, your company laying off some people isn't because of AI, your startup idea not getting funding is not because of AI, it's because we've been in a regular old recession which is now a developing oil crisis. Interest rates aren't 0% so nobody wants to lend money to infinite startups.
Depends on assumptions; that's the load-bearing element. 99.99% of what was true then is still true. It's mostly on the fractal churning edges where hyped change happens. Things flip-flop too:
2021-2024: good time in US for EV startup
2025: terrible time in US for EV startup
2026 March/April: AWESOME time in world for EV startup
Focus on fundamentals, not flaky ephemerals:
2020: wise to have smart elite software engineers on your team
2021: ditto
2022: ditto
2023: ditto
2024: ditto (is this when ChatGPT launched? Don't care. Snore.)
It is insane to me that anyone could look at the US economy right now and think this is a good time to start a business. Between the war, AI, and a pending economic recession (look at bond prices), all I see is failure. Maybe a funeral home, but that is all I can think of.
I find the story of a startup founder who entirely missed the developments of the last two years and did absolutely nothing with AI difficult to believe. If that actually happened it's the exception, not the rule. Most startup founders are way more in-tune with AI developments. This makes it sound like Chris (the mentioned founder) is behind marketing people who use LLM bots to post slop on LinkedIn.
In that case, yes their startup is most certainly DOA.
Look around, you're most definitely in a bubble. LLMs are bleeding edge by themselves. Using agentic anything is mega bleeding edge. Having something actually working reliably is a tiny sliver of the bleeding edge audience. We have barely entered early adoption phase. Most AI users out there are Q&A'ing it and they have no idea what agents, tool calling or context compaction are.
I don't follow. Tech startups are bleeding edge. You may be over-generalizing here. I talk to a lot of my dev friends, and they are all using AI for work. So if Joe Blow at some consulting company is using it, then a SV startup CEO should be too.
>Most AI users out there are Q&A'ing it and they have no idea what agents, tool calling or context compaction are.
Again, talking about a tech CEO not a random "AI user."
A laundromat isn't a startup in that sense. There's no potential for exponential growth. A VC would never give you money to open a laundromat, you'd go to a bank and get a business loan for that.
So previously the bottleneck was production. I'd wager now the bottleneck is willingness to test your hypotheses. The willingness to experience failure as soon as possible. To test and iterate.
As technology brings the cost of everything else to 0, psychological costs will predominate.
Reality testing is ultimately unavoidable, of course, but I'd guess most people still lean away from that rather than into it. (Our whole culture is set up that way, and most of us get like two decades of Pavlovian conditioning in that direction.)
Edit: Expanded here: https://nekolucifer.substack.com/p/willingness-to-fail-is-no...
Add on the compounding effect that "QA" or "test" in someone's job description was viewed as a synonym for "less highly compensated" over the past few decades, and you have an entire generation of mid-career devs with poorly adapted instincts about what is valuable in the process of shipping a working product.
The bottleneck was never coding...
I keep seeing people say that, but the bottleneck really was, to a large degree, writing the code.
I agree, and a willingness to experience failure as soon as possible has always been a competitive advantage.
If anything LLM chatbots & synthetic users will make the majority of founders evermore comfortable not testing reality.
The post reads like it was written by someone who read a lot about AI rather than actually tried to build a startup with the AI they advocate so much. I'm still bounded by system design, UX, pricing, and feature decisions; if not by the speed of code output, then certainly by review time. Yes, iterating is faster, but we're nowhere near agentic AI loops spitting out working products. Technically it's possible, but then you've just spent that time planning and writing the spec up front, which you'd otherwise interleave with dev time. If the product is a simple CRUD database skin, then yeah, I think the chances of success are lower, but that is not the type of startup the post seems to be about.
It's gotten to the point where, when I see yet another Thought Leadership article about software development, I search the page for the word "will". If I see unqualified predictions of the future (AI will change this and Agents will do that and developers will need to do thus), I think I can safely ignore the article. Who has the hubris to make such strong and unwavering statements about a future nobody can see?
A local "thought leader" wrote a ridiculous piece about software leadership in a post-LLM world, when clearly they'd never actually used an LLM to build anything, or actually led a team using LLMs. Lots of hand-waving but obviously no real experience.
As gp says, there's a big difference between theory and practice here, and a lot of the things we needed when we weren't using LLMs are still needed when we are, but it takes a bit of actual practice to work this out. It's still not at the stage where an Ideas Guy can make a real working product without someone on the team actually knowing how to develop software.
At least in my experience, so far. But the world is changing fast.
The last "Thought Leader" worth listening to was put to death by Athens.
Some that came after might be worthy of the title, but those who claim it for themselves aren't.
How come all the talking heads telling us AI has made whatever we've learned or built obsolete never point the lens back at themselves? I personally think knowledge from entrepreneurs who learned their lessons in previous decades is valuable, but if what they're spouting is true, it doesn't make any sense to listen to them; i.e., these thought leaders are desperate to tell you of their own irrelevance. Same thing for every CTO who tells me AI replaces developers. That's a stretch, but if we could do that, don't you think it would be trivial to replace your job too?
Have you ever sweat bullets at 3 a.m.? While Claude spins in circles unable to fix production without breaking five other things?
You will!
> Who has the hubris to make such strong and unwavering statements about a future nobody can see?
LinkedIn influencers
Yeah… also, it’s just weird. Interfaces are important, they contain information and affordances, everything should not become a chatbot.
Interfaces will definitely evolve. Visual graphs and representations are useful, but speaking to an agent will become mainstream, as it's faster than typing. The ability for agents to code on the fly will also open up different interfaces. For instance, you could say "show me the impact of our marketing campaign X over this time period" and out come graphs that were coded on the spot. Drawing might even make a comeback: when designing a website, you just cross out things you don't want, draw boxes where you want things, and talk at the same time, saying what you want in each box. Some people are even using virtual reality. Not everything will become a chatbot, but interfaces are definitely going to evolve, with chatting as an integral part of the user interface.
I usually find chatbot interfaces completely infuriating. I know the UX paradigm of click-reduction went the way of the dodo (and rightfully so, because it was based on bullshit research), but I think it's funny that completely removing the user's agency and visibility into any process, and turning 3-click processes into 200-keystroke processes, is the hot shit right now.
I think the AI backlash is strong enough that "AI-Free" might be a powerful marketing tool, whether that is fair or not.
Another option I like: organic.
I'm glad this is the top comment. I'm ambivalent about a bunch of writing I've seen from Steve Blank - some of his stuff I've loved and some I thought was awful.
But this I just thought was vacuous. I agree with what you wrote, but more to the point, I didn't find any real advice about how a startup should actually change that passed my sniff test. I left the tech startup world about 2 years ago myself, and I'm glad I did, because I just think there are way fewer differentiable opportunities now. That is, even if I accept what Blank says is true, what are all these 2+ year old startups supposed to do - just create some model wrapper/RAG chatbot product like the million other startups out there?
Even in defense, like the article says, there are now a bajillion drone companies, and it looks like a race to the bottom. The most successful plan at this point just looks like the grifter plan, e.g. getting the current president to tweet out your stock ticker.
I'm honestly curious what folks think are good startup business plans these days. Even startups that looked like they were "knock it out of the park" successes, like Cursor and Lovable, just seem to have no moat to me. I see very few startups (particularly the "We're AI for X!" ones that got a ton of funding in the past two years) with defensible positions.
Are you familiar with Steve Blank? What you’re describing really isn’t his MO at all.
I have a lot of respect for Steve Blank, but my heuristic by now is to ignore any breathless posts that state “teams are doing X with AI, if you are not doing the same you’re behind”.
The much more useful posts are “my team and I are doing X with AI”. Of course, the challenge there is that the ones who are truly getting a competitive edge through AI are usually going to be too busy building to blog about it.
I'm not, but this is not a great introduction. It's handwavy and assumes that AI dev tools are much farther along than they are. I have seen this a lot lately: the farther up the management chain and the farther away from putting hands on code, the more confident people seem to be in the power of AI tools.
For big, complex real-world problems and big, complex real-world codebases, the AIs are helpful but not yet earth-shattering. And that helpfulness seems to have plateaued of late.
I am extremely skeptical of posts like this.
You are assuming a linear future while we are in an exponential.
One year ago models could barely write a working function.
GPT-4o is 23 months old.
One year ago, the models were only slightly less competent than today. There were models writing entire apps 3 years ago. Competent function writing has been basically a given in every model since GPT-3.
Much of the progress in the past year has been around harnesses, MCPs, and skills. The models themselves are not getting better exponentially; if anything, progress has slowed significantly since the 2023-2024 releases.
One year before 1969 we had never been to the moon. In the 70s credible scientists and physicists predicted that large martian colonies would exist before the year 2000.
If a metric goes from 0 to 2 it doesn't mean it's on a long-lived exponential trajectory.
> One year ago models could barely write a working function.
This is a false claim.
Claude Code was released over a year ago.
Models have improved a lot recently, but if you think 12 months ago they could barely write a working function you are mistaken.
This comment is getting punished for the incorrect timeline (I would know, I've been harping on about AI getting good at coding for ~2 years now!) but I do think it is directionally correct. Just over 3 years ago, (publicly available) AI could not write code at all. Today it can write whole modules and project scaffoldings and even entire apps, not to mention all the other stuff agents can do today. Considering I didn't think I'd see this kind of stuff in my lifetime, this is a blink of an eye.
Even if a lot of the improvements we see today are due to things outside the models themselves -- tools, harnesses, agents, skills, availability of compute, better understanding of how to use AI, etc. -- things are changing very quickly overall. It would be a mistake to just focus on one or two things, like models or benchmarks, and ignore everything else that is changing in the ecosystem.
Sigmoids look a lot like exponentials early on.
We can’t say for sure yet which trajectory we are on.
Seems extremely disingenuous to say that one year ago models could barely write a working function. In fact, there were plenty capable of writing a working function with the right context fed in, exactly as today.
> the bottleneck is no longer engineering, it’s ____
90% of blog articles created in the last two years are probably dead on arrival
Made me wonder if there's a live-streaming equivalent for blogging... some platform that both ensures the reader knows the blogger is a person, and promotes a parasocial relationship.
There's live-coding, so it's not totally a crazy idea.
Your comment immediately made me think about the extreme opposite: a Davy Force-like, Infochammel-style livestream of a never-ending AI generated Ted Talk, offering delectable morsels of tech startup wisdom, but is ultimately zero calorie.
As a way to get my feet wet vibe coding, I made https://seeitwritten.com with that idea: by capturing how you write, with all its fits and starts, you can show that a human wrote it. So, a sort of recorded live-streaming. But I'm thinking that a sufficiently cute agent could be prompted to write and re-write something in a convincing manner. I'm not so sure about that, though, since their corpus is completed text rather than text in action.
In 2020, I wrote a book and livestreamed the writing of it. It felt more raw and real at the time, but I guess it serves more as proof of human-vs-AI authorship these days.
Not sure how to make that a platform. When I wrote, I explicitly put everything directly into the book unedited, whereas for many people the editing is probably at least half, if not more, of the time they spend writing.
Or we could just bring back Google Wave :-)
jimkleiber.com/project-35 if you’re curious.
Isn't it just podcast but text
Radio host
Is that the original phrasing? Now it's:
> The bottleneck is no longer engineering. It’s moving up the stack to judgment, customer insight for desired outcomes and distribution.
That’s the same phrasing?
I think Blank's story is about a guy who missed a $20B market shift in defense VC, not a guy who failed to adopt AI.
He's not saying "add AI to your product" or "use AI or die" but more that AI has shifted institutional assumptions about tech stacks, defensibility and fundability. The bottleneck moved up the stack from engineering to judgment, insight and design.
Chris lost because he was heads down building while $20B in defense VC was flowing into his exact problem space and he didn't build the boat to capture that wave.
Well of course it is, most startups are dead on arrival.
The big pinch of salt I throw in with advice like this though is that startup failure rate hasn't dramatically shifted despite two decades of lean startup methodology, accelerators, and an entire cottage industry of startup advice. It's never the fault of the framework, mind you.
If anything the failure rate has probably increased with more capital and founders chasing after the same opportunities. Being a startup founder gained prestige and became the default thing to do after college for certain types of people who before the GFC would have ended up in finance.
Isn't this a problem of being able to see whatever we want in the data? For example, maybe more startups than ever are created partly because of access to this info: volume increases but the rate doesn't. Or maybe the magnitude of success increases. Surely that's true, but largely attributable to the internet overall. Still, maybe methodologies are indeed a factor.
edit: Tobi from Shopify has a related insight. His north star metric is user churn. Sounds crazy on its face; he's known for that. But accepting churn means you've increased top-line exposure to more would-be entrepreneurs. Not all of them will succeed, but Shopify's mission is to create more entrepreneurs. Grow the pie. A focus on increasing conversion tends to have a narrowing effect.
Sure, but if the advice worked, you'd see the failure rate drop over time. It would work like medicine: we have more people than ever before, but almost no one dies of polio anymore. That's not what we see, though; the failure rate is basically the same.
Arguably that mostly says something about the average VC's skill at picking winning ideas.
If you have a good product idea, the methodology you use to get there mostly affects profit margins, not whether it will be a success or a total failure.
>Arguably that mostly says stuff about average VC skill to pick winning idea.
What happens is that the original idea rarely matters at all. It is the people who implement the idea that matter.
The original idea is almost always terrible, but great people pivot or change the idea gradually as they come into contact with reality.
Execution is very important. Startups almost never start with the "right" idea.
Interestingly, this was always true, it's just made more obvious when it takes less time to bring a product to a potential customer.
Launching a product was never the finish line; it was always the start line. But technical founders could trick themselves into thinking that building a product was building a business.
The same "Lean Startup" rules apply. Build something and get it in front of real people who will pay you for the thing. If they won't, back to the drawing board.
The only real "shift" I've seen is that most startups don't actually need VC at all. That's a great thing.
I almost feel like this is always true. Every startup is dead on arrival. Most of them fail.
You’ve always needed to constantly learn and innovate to launch a successful business.
Paul graham wrote about this 11 years ago: https://www.paulgraham.com/aord.html
Startups are mostly all default dead. That's why they need VC money.
> Founders who started pre-2025 typically have built a technical stack optimized for a world where software development was bespoke and expensive.
Of all the things that AI has changed, tech stacks aren't one of them. The bots will gladly write Typescript, Java, Python, Rust, what have you. They could not give less of a shit.
I caught that too.
What is he getting at? How does the code and infra stack differ at all between a company that is using AI, vs one that is not?
Here's my take on what he was getting at:
Build vs. buy is an eternal question in enterprises. I remember many in-house data teams trying to build tools for "digital transformation" and cloud migration about 10 years ago. The challenge was, building those tools was more expensive than those enterprises could budget for (IT as cost center), so a startup like Snowflake would easily outcompete in-house solutions with their custom, cloud-based tech stack that was necessarily complex because it needed to serve the needs of thousands of customers.
If he's right, the build vs. buy equation has shifted more towards build, at least as far as enterprise software is concerned. IT is still a cost center, but in theory an internal team can now handle more requests for custom tools without looking to outside vendors. Essentially the cost of building in-house might be collapsing and therefore enterprise software startups will be serving fewer customers (who would all pay you more because if solving the problem was cheap they'd do it).
If you had to build a stack for dozens of customers paying huge amounts of money, how would that stack differ from the stack you'd build to serve thousands of customers? Certainly it wouldn't need to be as scalable! And that's probably what he's getting at. I think what you'd do instead, to capture those higher price point customers, is solve their problems more specifically, in a higher value manner.
Many companies already do this, investing far more in field engineers than they do in their tech stack, since customization is essential.
"You better be doing something in AI" applies to Startups - businesses that are expected to spend every penny as quickly as possible, to meet the metrics needed to raise the next (bigger) round of funding.
It is also not the same as, "If you want to be a profitable company...". For that you need to somehow make more money than you are spending.
> For that you need to somehow make more money than you are spending.
I've had this idea of 'business as reducing entropy' floating around in my head for a while. It's a neat way to think about the value a business offers to buyers; a washing machine manufacturer is selling reduced time to reduced entropy (clean clothes), spreadsheet software is selling reduced time to understanding (information from tabulated data), and so on.
From that perspective, a lot of AI-driven development is failing.
We're still in the phase of 'how do we get order out of semi-average chaos?' for LLMs. For ML we're largely past that point.
I've been using this framing as a means to guide me towards 'what is actually useful, what might someone actually buy'. I don't have my own business at this point, but its still fun to think about off and on.
I think this is application-dependent. LLMs are quite good for brainstorming; even if they are arguably not creative, at least they draw from a lot of information that is already out there, which saves me time in researching and learning.
> I've had this idea of 'business as reducing entropy' floating around in my head for awhile
You could generalize this to the purpose of life itself, probably.
Just stop thinking of software products. There are a million businesses offering solid value propositions that need a lot of software to run, but software isn't the product.
Note the mention of "systems of record" being unsuitable for the present level of AI. The real question is whether the costs of AI mistakes and hallucinations can be dumped on some external party who can't impose costs on you. If not, there's a problem.
> The bottleneck is no longer engineering. It’s moving up the stack to judgment, customer insight for desired outcomes and distribution.
My position is this: engineering never was the bottleneck, or at least hasn't been for 10 years now. Frameworks and best practices are pretty well known at this point. AI is simply exposing this reality to engineers' faces.
Proof point: for most publicly traded SaaS-first businesses, S&M spend equals their R&D spend, if it doesn't dwarf it. You're going to see this get even more lopsided going forward.
Building SaaS businesses has become a whole lot less capital intensive. Solo founders can go much further than they've been able to previously. New startups probably don't need funding anymore.
VC for conventional SaaS is dead.
This article resonated with me, especially since I've noticed the fundraising hoopla in my circle has dramatically dropped. Either investors are tightening their belts, or founder-investor fit has crossed into the realm of disillusionment.
It is the economy. Interest rates are up and that will hit startups first since investors have other options that are more likely to be better.
"a pricing model based on seats, a product roadmap built around features rather than outcomes [is outdated]"
I disagree with this.
On pricing, I get that agents and tokens can scale in a way that's unrelated to the number of users. But for much SaaS software, AI remains helpful to a human, and the human remains the receiver of value. Seat-based pricing is easy to understand, and you can always layer token/agent cost thresholds on top.
On features vs. outcomes, the latter is hard to define and measure in many industries. In marketing SaaS, which I know well, you can't often tell what outcome to expect. You have to try a lot of ideas and some will hit. No way a SaaS vendor can guarantee that.
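That layered approach is easy to sketch. Everything below is hypothetical for illustration — the fee amounts, the bundled-token allowance, and the function name are all made up, not from any real pricing sheet:

```python
def monthly_price(seats, tokens_used,
                  seat_price=49.0,            # hypothetical per-seat fee
                  included_tokens=1_000_000,  # hypothetical tokens bundled per seat
                  token_price=2.0):           # hypothetical fee per extra million tokens
    """Seat-based base fee, with metered token overage layered on top."""
    base = seats * seat_price
    included = seats * included_tokens
    overage_millions = max(0, tokens_used - included) / 1_000_000
    return base + overage_millions * token_price
```

Under these made-up numbers, a 10-seat team that stays within its bundle pays a flat 490/month, and only heavy agent usage above the threshold moves the bill — which keeps the easy-to-understand seat model intact.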
The argument against seat pricing is that companies will employ fewer people in general. So a 45 person company paying for 10 seats will become a 7 person company paying for one seat.
Not sure I agree with this train of thought, but a SaaS CEO made this exact argument to me last week.
I’d add that token pricing doesn’t work for anyone but the frontier models. Everything else will be commoditized. So Opus can charge top prices per token until a lower (or local) model hits parity, and then the price goes to zero.
> https://i0.wp.com/steveblank.com/wp-content/uploads/2026/03/...
The author didn't spend more than, maybe, 30 seconds thinking this through? Information I could've gotten in 3 seconds by opening a screen and looking at a line item, I now have to extract by writing a paragraph to an AI agent (and cross my fingers that nothing I said was ambiguous or misunderstood). And that's supposedly an upgrade?
If you started something at some point, many assumptions are no longer true, and some may never have been true at any moment.
That said, if you believe the universe exists, the chances are non-zero that you are correct. But solipsism might actually be right.
In case of doubt, remember that your memories might be mere illusions.
the best/worst part of this is that it is obviously not written by steve blank. it does not look, read, or track like a steve blank blog piece
he obviously generated large parts of this with claude or chatgpt or whatever.
i don't know what that means for the rest of us, but boy it's a big ole spike of signal
I read it less as “every startup now needs an AI feature” and more as “your assumptions expire faster than they used to.” That part feels true even if the examples are a bit overstated imo...
True, it increases the number of trial-and-error cycles you can run on the way to product-market fit.
> How can we throw away years of work?
This trap has killed many startups, well before AI.
Now that code is cheaper to write, hopefully it becomes less of a problem?
In either case, founders should never fall in love with their solutions.
It’s easier to view it in terms of DCF - the value of a cash flow generating asset = present value of expected cash flows discounted back at a risk discount adjusted rate. In other words what you’ve invested into your existing assets is irrelevant - the cash flows generated by them and the growth assets through future investment, is what matters.
One assumption that's also changed: small teams no longer need dedicated QA. Tools like Autonoma (open source, getautonoma.com) use AI agents to generate and run E2E tests in real browsers, and tests self-maintain as your app evolves. One of the things making the solo-founder path more viable.
Why not both? When the chat interface fails, you still need a UI to fix things.
Seems like the headline should have been "is now dead on arrival". As currently written, it fails to convey the temporal aspect that is the focus of the blog post.
It also fails to convey that he's actually only talking about startups that were created 2+ years ago, rather than the many AI startups founded in the last 2 years.
If it was stretched before it's definitely snapped now. The only place I see convincing opportunity is resources and commodities.
Not being AI focused might mean fewer competitors.
> Now, before you build a physical prototype, you can simulate more design variants, create digital twins, and stress-test assumptions earlier and much cheaper than before.
Hahahahahahahaha no you can't. The rise of LLMs has done little to nothing in this area because it's very much compute-limited. Digital-twins and other ML-based strategies predate ChatGPT by a long shot. There are definitely places in hardware design where LLMs and agentic workflows will help, but that's largely because the existing tooling is utter garbage, and now the industry has a fire under its ass to make things automatable so they can build their own agents.
I dig MPO as a term, and that's exactly what I think of when implementing agentic systems.
I think this is a lot of fluff supported by a weak anecdote.
Chris' company's assumptions are no longer true, but that doesn't apply to everyone's startup. This is mostly a Chris problem.
No, not every product can just be a chat window like in the silly little screenshot.
If the author actually wrote software they'd realize that, no, AI isn't speeding up development by any more than a modest amount. It's great that we have it and it's removing tedium but it has replaced zero engineers at my company or at any other company of anyone else I know.
And no, your company laying off some people isn't because of AI, your startup idea not getting funding is not because of AI, it's because we've been in a regular old recession which is now a developing oil crisis. Interest rates aren't 0% so nobody wants to lend money to infinite startups.
Chris has been so focused for 5 years that he had no clue about anything else going on in the industry?
It reads to me like the author thinks Chris is an idiot for not selling his automation tech to the defense industry
It used to be that 90% of startups would unfortunately fail.
Now, with AI, it is likely going to be 98%.
Before AI: 900 of 1000 fail (90%), 100 succeed
After AI: 4900 of 5000 fail (98%), 100 succeed
Like this?
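The reply's arithmetic is worth making explicit: the failure *rate* can rise while the absolute number of successes stays flat, if AI lowers the barrier to starting. The counts below are the reply's own hypothetical numbers, not real data:

```python
def failure_rate(started, succeeded):
    """Fraction of startups that fail out of those started."""
    return (started - succeeded) / started

before = failure_rate(1000, 100)  # 900 of 1000 fail -> 0.90
after = failure_rate(5000, 100)   # 4900 of 5000 fail -> 0.98
```

Same 100 successes either way; the rate only looks worse because the denominator grew.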
Still better odds than winning the lottery right ? Right ?
Depends on the assumptions — that's the load-bearing element. 99.99% of what was true then is still true. It's mostly on the fractal churning edges where hyped change happens. Things flip-flop too:
2021-2024: good time in US for EV startup
2025: terrible time in US for EV startup
2026 March/April: AWESOME time in world for EV startup
focus on fundamentals, not flakey ephemerals
2020: wise to have smart elite software engineers on your team
2021: ditto
2022: ditto
2023: ditto
2024: ditto (is this when ChatGPT launched? don't care. snore)
2025: ditto (what are YC/HN/VC hyping now? snore)
2026: ditto
2027+: ditto, likely
It is insane to me that anyone could look at the US Economy right now and think this was a good time to start a business. Between the war, AI, a pending economic recession (look at bond prices), all I see is failure. Maybe a funeral home, but that is all I can think of.
This smells like LLM.
I find the story of a startup founder who entirely missed the developments of the last two years and did absolutely nothing with AI difficult to believe. If that actually happened it's the exception, not the rule. Most startup founders are way more in-tune with AI developments. This makes it sound like Chris (the mentioned founder) is behind marketing people who use LLM bots to post slop on LinkedIn.
In that case, yes their startup is most certainly DOA.
Look around, you're most definitely in a bubble. LLMs are bleeding edge by themselves. Using agentic anything is mega bleeding edge. Having something actually working reliably is a tiny sliver of the bleeding edge audience. We have barely entered early adoption phase. Most AI users out there are Q&A'ing it and they have no idea what agents, tool calling or context compaction are.
I don't follow. Tech startups are the bleeding edge. You may be over-generalizing here. I talk to a lot of my dev friends; they are all using AI for work. So if Joe Blow at some consulting company is using it, then an SV startup CEO should be too.
>Most AI users out there are Q&A'ing it and they have no idea what agents, tool calling or context compaction are.
Again, talking about a tech CEO not a random "AI user."
Not if you’re launching a startup based in the real world. Tell me how AI will make a laundromat business DoA?
If your business is selling services at 40% margin that are entirely digitally based, then maybe you’ll need to cut some margin, sure.
A laundromat isn't a startup in that sense. There's no potential for exponential growth. A VC would never give you money to open a laundromat, you'd go to a bank and get a business loan for that.
Doordash for laundry.
Was uber a startup? Airbnb?