I have a lot of respect for Steve Blank, but my heuristic by now is to ignore any breathless posts that state “teams are doing X with AI, if you are not doing the same you’re behind”.
The much more useful posts are “my team and I are doing X with AI”. Of course, the challenge there is that the ones who are truly getting a competitive edge through AI are usually going to be too busy building to blog about it.
I really enjoyed reading his articles a while back. He wrote one about Silicon Valley's roots in microchip development. I can't remember the details, but I reached out to point out how important Autonetics — based in Los Angeles — was to chip development. From my perspective, what made Silicon Valley significant was its connection to Wall Street, and I wanted to engage on the idea that venture capital might be the real product of Silicon Valley.
He could have ignored the email or engaged on the topic I introduced. Instead he sent me a wikilink to Autonetics. I was left with the feeling that he had no real interest in the topic he wrote about. It was really no big deal — he is a busy guy and doesn't need to engage with strangers. But I never read anything by him again, because I came away with the impression that he just phones these posts in.
I'm not, but this is not a great introduction. It's handwavy and assumes AI dev tools are much farther along than they are. I have seen this a lot lately: the farther up the management chain, and the farther away from putting hands on code, the more confident people seem to be in the power of AI tools.
For big, complex real-world problems, and big, complex real-world codebases, the AIs are helpful but not yet earth-shattering. And that helpfulness seems to have plateaued as of late.
I will take a lot more hand-waving from the 70-something-year-old Stanford professor who co-created the high-level management paradigms that run a good chunk of the economy. That context kinda changes things, but what do I know.
That thing he created says you should take your assumptions out into the real world and validate them, ya?
So hand-waving about how easy it is to have an MVP in days, without actual experience doing that, seems ironic.
Now, maybe he's saying this based on companies he's funded who've had great success with what he's saying. But it's curious that the only concrete example of a company mentioned is one that's six years old and not operating like that.
And in fact, many of the ways he thinks that company went wrong seem completely unrelated to AI?
> Chris is now starting to raise his first large fundraising round. In looking at his investor deck I realized that while he’s been heads down, the world has changed around him – by a lot. The software moat he built with his 5-year investment in autonomy development is looking less unique every day. Autonomous drones and ground vehicles in Ukraine have spawned 10s, if not 100s, of companies with larger, better funded development teams working on the same problem.
> While Chris has been fighting for adoption for this niche market (one that is ripe for disruption, but the incumbents still control), the market for autonomy in an adjacent market – defense – has boomed. In the last five years VC Investment in defense startups has gone from zero to $20 billion/year. His product would be perfect for contested logistics and medical evacuation. But he had literally no clue these opportunities in the defense market had occurred.
> While there’s still a business to be had (Chris’s team has done amazing system integration with an existing airborne platform that makes his solution different from most), – it’s not the business he started.
"Being heads down without paying enough attention to the market for 6 (!!) years" doesn't seem like an AI-caused issue.
Meanwhile, the core suggestion doesn't seem to fix that, it seems almost completely perpendicular.
> You can now test multiple versions of the same business at once (or simultaneously be testing different businesses). While you can be simultaneously testing five pricing models, ten messages or twenty UX flows, the “user interface” may no longer be a screen at all. Testing might be to find prompt(s) to AI Agent(s) deliver needed outcomes.
Ok, but this person didn't even seem to be paying enough attention to the market with one version already?
And while this claim about parallel development being a huge unlock is the most interesting thing, it also sounds a bit glib. Getting your foot in the door is the hardest thing early on, and now you're trying to run six versions of your company at once? Each time you get a foot in the door sales-wise, are you trying to make them use all 6 versions, or are you only gonna get feedback on 1? Would you want to pay money to be a beta tester of 6 different products simultaneously, with reason to believe that 5 of them will probably evaporate overnight soon?
I am extremely skeptical of posts like this.
Based on his own arguments a 70-something Stanford prof has no more knowledge, experience or credibility than someone who started 18 months ago.
These guys don't get to have it both ways.
So you’re saying he’s majorly complicit in the ultracapitalist dystopia the US has turned into?