Comment by karolist

13 hours ago

The post reads like it was written by someone who has read a great deal about AI rather than actually tried to build a startup with the AI they advocate so strongly. I'm still bound by system design, UX, pricing, and feature decisions; if not by the speed of code output, then certainly by review time. Yes, iterating is faster, but we're nowhere near agentic AI loops spitting out working products. Technically it's possible, but then you've just spent that time planning and writing the spec up front, which you'd otherwise interleave with dev time. If the product is a simple CRUD database skin, then yeah, I think the chances of success are lower, but that's not the type of startup the post seems to be about.

It's gotten to the point where, when I see yet another Thought Leadership article about software development, I search the page for the word "will". If I see unqualified predictions about the future (AI will change this, agents will do that, developers will need to do thus), I figure I can safely ignore the article. Who has the hubris to make such strong and unwavering statements about a future nobody can see?
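
A minimal sketch of that heuristic, assuming the article text is already in hand; the hedge list, regex, and threshold are arbitrary illustrative choices, not a known-good filter:

```python
import re

# Words that qualify a prediction; their absence near "will" is the red flag.
HEDGES = re.compile(r"\b(might|may|could|probably|likely|perhaps)\b", re.I)

def smells_like_thought_leadership(article: str, threshold: int = 3) -> bool:
    """Flag articles that lean on unqualified 'X will Y' predictions."""
    unqualified = 0
    for sentence in re.split(r"[.!?]+", article):
        if re.search(r"\bwill\b", sentence, re.I) and not HEDGES.search(sentence):
            unqualified += 1
    return unqualified >= threshold
```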

  • A local "thought leader" wrote a ridiculous piece about software leadership in a post-LLM world, when clearly they'd never actually used an LLM to build anything, or led a team using LLMs. Lots of hand-waving but obviously no real experience.

    As gp says, there's a big difference between theory and practice here, and a lot of the things we needed when we weren't using LLMs are still needed when we are, but it takes a bit of actual practice to work this out. It's still not at the stage where an Ideas Guy can make a real working product without someone on the team actually knowing how to develop software.

    At least in my experience, so far. But the world is changing fast.

  • The last "Thought Leader" worth listening to was put to death by Athens.

    Some that came after might be worthy of the title, but those who claim it for themselves aren't.

  • How come all the talking heads telling us AI has made whatever we've learned or built obsolete never point the lens back at themselves? I personally think knowledge from entrepreneurs who learned their lessons in previous decades is valuable, but if what they're spouting is true, it doesn't make any sense to listen to them; that is, these thought leaders are desperate to tell you of their own irrelevance. Same thing for every CTO who tells me AI replaces developers. That's a stretch, but if we could do that, don't you think it would be trivial to replace your job too?

    • Someone is still needed to decide what the AI should do, and to harness and manage it. A CTO can do that with existing skills, and the resulting organization should converge quickly on near-zero need or want for human SWEs.

      So goes the thinking, anyway. It's why my couple decades of experience and I still occasionally hear from rando cold recruiters desperate to sell someone a "pivot to AI," probably figuring they can lowball me by holding my mortgage over my head and screw three times the work they'd pay for out of me.

      I was in this business too long.

  • Have you ever sweat bullets at 3 a.m. while Claude spins in circles, unable to fix production without breaking five other things?

    You will!

  • > Who has the hubris to make such strong and unwavering statements about a future nobody can see?

    LinkedIn influencers

Yeah… also, it’s just weird. Interfaces are important; they contain information and affordances. Not everything should become a chatbot.

  • Yeah, it's crazy to think an opaque chatbot will be preferable to a well-designed UI for most users. People don't like badly designed UIs, but I'm pretty sure most people under 40 prefer a well-designed UI to a customer service agent. We call customer service because the website doesn't do what we want, not because we don't want to use the website.

  • Interfaces will definitely evolve. Visual graphs and representations are useful, but speaking to an agent will become mainstream, as it's faster than typing. The ability for agents to code on the fly will also open up different interfaces. For instance, you could say "show me the impact of our marketing campaign X over this time period" and out come graphs that were coded on the spot. Drawing might even make a comeback: when designing a website, you just cross out things you don't want, draw boxes where you want things, and talk at the same time, saying what you want in that box. Some people are even using virtual reality. Not everything will become a chatbot, but interfaces are definitely going to evolve, with chatting being an integral part of the user interface.
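
    A minimal sketch of the "graphs coded on the fly" idea; generate_code() is a stand-in for a hypothetical call to whatever model backend you use, not a real API:

    ```python
    import subprocess
    import tempfile

    def generate_code(request: str) -> str:
        """Hypothetical model call that returns a matplotlib script as text."""
        raise NotImplementedError("wire up your model provider here")

    def chart_on_the_fly(request: str) -> None:
        # e.g. request = "show the impact of marketing campaign X over this period"
        script = generate_code(request)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(script)
            path = f.name
        # Run the generated script to render the chart (sandboxing omitted).
        subprocess.run(["python", path], check=True)
    ```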

    • You're describing some Hollywood version of SF, not the real world. Speaking is not faster than pressing a key or turning a knob (try operating a CAD tool or a DAW without keyboard and mouse). And for most reports/infographics, you mostly need to design a few dashboards and almost never change them, because those are your core metrics that you need to monitor. And the ability to sketch even a simple wireframe relies on a lot of knowledge that most people don't want to burden themselves with.

  • I usually find chatbot interfaces completely infuriating. I know the UX paradigm of click-reduction went the way of the dodo (and rightfully so, because it was based on bullshit research), but I think it’s funny that completely removing the user’s agency and visibility into any process, and turning 3-click processes into 200-keystroke processes, is the hot shit right now.

Are you familiar with Steve Blank? What you’re describing really isn’t his MO at all.

  • I have a lot of respect for Steve Blank, but my heuristic by now is to ignore any breathless posts that state “teams are doing X with AI, if you are not doing the same you’re behind”.

    The much more useful posts are “my team and I are doing X with AI”. Of course, the challenge there is that the ones who are truly getting a competitive edge through AI are usually going to be too busy building to blog about it.

    • I really enjoyed reading his articles a while back. He wrote an article about Silicon Valley's roots in microchip development. I can't remember the details, but I reached out to point out how important Autonetics was to chip development, and that it was based in Los Angeles. From my perspective, what made Silicon Valley significant was its connection to Wall Street. I wanted to engage on the idea that venture capital might be the real product of Silicon Valley.

      He could have ignored the email or engaged on the topic I introduced. Instead he sent me a wikilink to Autonetics. I was left with the feeling that he had no real interest in the topic he wrote about. It was really no big deal. He is a busy guy and doesn't need to engage with strangers. I never read anything by him again because I was left with the feeling he is just phoning these posts in.

  • I'm not, but this is not a great introduction. It's handwavy and makes the assumption that AI dev tools are much farther along than they are. I have seen this a lot lately: the farther up the management chain and the farther away from putting hands on code, the more confident people seem to be in the power of AI tools.

    For big complex real-world problems, and big complex real-world codebases, the AIs are helpful but not yet earth-shattering. And that helpfulness seems to have plateaued of late.

    I am extremely skeptical of posts like this.

    • I will take a lot more hand-waving from the 70-something-year-old Stanford professor who co-created the far-up-the-chain management paradigms that run a good chunk of the economy. That context kinda changes things, but what do I know.

I'm glad this is the top comment. I'm ambivalent about a bunch of writing I've seen from Steve Blank - some of his stuff I've loved and some I thought was awful.

But this I just thought was vacuous. I agree with what you wrote, but more to the point, I didn't find any real advice about how a startup should actually change that passed my sniff test. I left the tech startup world about 2 years ago myself, and I'm glad I did, because I just think there are way fewer differentiable opportunities now. That is, even if I accept that what Blank says is true, what are all these 2+ year-old startups supposed to do - just create some model wrapper/RAG chatbot product like the million other startups out there?

Even in defense, like the article says, there are now a bajillion drone companies, and it looks like a race to the bottom. The most successful plan at this point just looks like the grifter plan, e.g. getting the current president to tweet out your stock ticker.

I'm honestly curious what folks think are good startup business plans these days. Even startups that looked like they were "knock it out of the park" successes, like Cursor and Lovable, just seem to have no moat to me. I see very few startups (particularly in the "We're AI for X!" crowd that got a ton of funding in the past two years) with defensible positions.

You are assuming a linear future while we are in an exponential.

One year ago models could barely write a working function.

  • GPT-4o is 23 months old.

    One year ago, the models were only slightly less competent than today. There were models writing entire apps 3 years ago. Competent function writing has been basically a given for every model since GPT-3.

    Much of the progress in the past year has been around the harnesses, MCPs, and skills. The models themselves are not getting better exponentially; if anything, progress has slowed significantly since the 2023-2024 releases.

    • >One year ago, the models were only slightly less competent than today.

      That has not been my experience. This weekend I pointed Claude Code + Opus 4.6 + effort=max at a PRD describing Docusign-like software, the exact same document I gave to Claude Code + Opus 4.5 + Ultrathink around 6 months ago.

      The touch-up work I needed after it completed the implementation was around a tenth of what it took with 4.5. It is a pretty startling difference.

    • Yeah, I've been able to get great Python functions out of everything since the GPT-4 API in early-to-mid 2023.

      It takes far less manual prompting now to get consistent output, work well with other languages, etc. But if you watch the "thinking" logs, it looks an awful lot like the "prompt engineering" you'd do by hand back then. And the output for tricky cases still sometimes goes sideways in obviously naive ways. The most telling thing in my experience is all the grepping, looping, and refining: it's not "I loaded all twenty of these files into context and have such a perfect understanding of every line's place in the big picture that I can suggest a perfect-the-first-time, maximally elegant modification." It's targeted and tactical. It is getting really good at its tactics for that stuff, though!

      I can get more done now than a year ago because taking me out of the annoying part of that loop is very helpful.

      But there's still a very curious gap: a tool that can quickly and easily recognize certain types of bugs when you ask it directly will also happily spit out those same bugs while writing the code. "Making up fake functions" doesn't make it to the user much anymore, but "not going to be robust in production but technically satisfies the prompt" still does, despite the model "knowing better" when you ask it about the code five seconds later.
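
      One workaround for that gap, sketched minimally; ask() is a hypothetical single-turn model call, not a real API:

      ```python
      def ask(prompt: str) -> str:
          """Hypothetical single-turn model call; substitute your provider's API."""
          raise NotImplementedError

      def generate_with_self_review(task: str) -> str:
          # First pass: let the model write the code.
          code = ask(f"Write Python code for this task:\n{task}")
          # Second pass: the same model often spots bugs it just wrote,
          # but only when asked directly, so ask directly.
          review = ask(f"List production-robustness bugs in this code:\n{code}")
          # Third pass: fold its own review back into the code.
          return ask(f"Rewrite the code to fix these issues:\n{review}\n\nCode:\n{code}")
      ```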

  • One year before 1969, we had never been to the moon. In the 70s, credible scientists and physicists predicted that large Martian colonies would exist before the year 2000.

    If a metric goes from 0 to 2 it doesn't mean it's on a long-lived exponential trajectory.

  • > One year ago models could barely write a working function.

    This is a false claim.

    Claude Code was released over a year ago.

    Models have improved a lot recently, but if you think 12 months ago they could barely write a working function you are mistaken.

  • This comment is getting punished for the incorrect timeline (I would know, I've been harping on about AI getting good at coding for ~2 years now!), but I do think it is directionally correct. Just over 3 years ago, publicly available AI could not write code at all. Today it can write whole modules and project scaffoldings and even entire apps, not to mention all the other stuff agents can do. Considering I didn't think I'd see this kind of thing in my lifetime, that is a blink of an eye.

    Even if a lot of the improvements we see today are due to things outside the models themselves -- tools, harnesses, agents, skills, availability of compute, better understanding of how to use AI, etc. -- things are changing very quickly overall. It would be a mistake to just focus on one or two things, like models or benchmarks, and ignore everything else that is changing in the ecosystem.

    • I agree it's directionally correct, but only in the ways that don't matter to this discussion. If 2026->2029 AI is as much of an improvement as 2023->2026 AI, is anything we learn about how to leverage it in 2026 going to stay relevant?

  • Sigmoids look a lot like exponentials early on.

    We can’t say for sure yet which trajectory we are on.
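
    A quick numerical illustration of that point (the ceiling, rate, and midpoint below are arbitrary made-up parameters):

    ```python
    import math

    def logistic(t: float, ceiling: float = 100.0, k: float = 1.0, mid: float = 10.0) -> float:
        return ceiling / (1 + math.exp(-k * (t - mid)))

    def early_exponential(t: float, ceiling: float = 100.0, k: float = 1.0, mid: float = 10.0) -> float:
        # The logistic's early-time approximation: ceiling * e^(k*(t - mid)).
        return ceiling * math.exp(k * (t - mid))

    # Well below the midpoint the two curves are nearly indistinguishable;
    # the sigmoid only bends away as t approaches the midpoint.
    for t in range(8):
        print(t, round(logistic(t), 4), round(early_exponential(t), 4))
    ```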

  • It seems extremely disingenuous to say that one year ago models could barely write a working function. In fact, plenty of models were capable of writing a working function with the right context fed in, exactly as today.