Comment by bob1029

6 days ago

> Why can't we figure out the right thing faster by building the wrong thing faster?

Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.

This is easily the biggest bottleneck in B2B/SaaS stuff for banking. You can iterate maybe once a week if you have a really, really good client.

> Why can't we figure out the right thing faster by building the wrong thing faster?

> Because usually the customer can only tolerate so many failed attempts per unit of time. Running your fitness function is often very expensive in terms of other people's time.

Heh, depends on what you do. Many times the stakeholders can't explain what they want but can clearly articulate what they don't want when they see it.

Generating a few alternatives and having them pick is a tried-and-true method in design. It was way too expensive when coding was manual, so you often needed multiple rounds of meetings and emails to align.

If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.

It's not just what you can do faster (well, it is, up to a point), but also what you can now do that would have been positively insane and out of the question before.

  • > Generating a few alternatives and having them pick is a tried-and-true method in design. It was way too expensive when coding was manual, so you often needed multiple rounds of meetings and emails to align.

    Why do you need coding for those discussions? You can doodle on a whiteboard for a lot of them. I use Balsamiq[0] and can produce a wireframe for a whole screen in minutes. Even faster than prompting.

    > If you don't think coding was the bottleneck, you're not thinking creatively about what's only now possible.

    If you think coding was a bottleneck, that means you spent too much time doing when you should have been thinking.

    [0]: https://balsamiq.com/product/desktop/

    • I've been using Balsamiq since back in 2012 (perhaps earlier, can't find an earlier reference in email), when it was all in Flash/Flex (IIRC).

      For UX, an approach like that can often work. For non-UI stuff, with more complex processes and exploration (because nobody is actually sure what's going to work best), the handwaving just deludes everyone.

      I've been on enough demos where customers pretended they understood the concepts only to be surprised later, and gone back and forth for months on what they actually need, to know that:

      > If you think coding was a bottleneck, that means you spent too much time doing when you should have been thinking

      is cute, but naive.

  • That's done by arranging a demo (the very old way) or, better, by deploying to a staging server. The customer meets with you for a demo infrequently, maybe once per month, or checks what's on the staging server maybe a couple of times per week. They have other things to do, so you cannot make them check your proposal multiple times per day. I concede, however, that if you are fast you can work for multiple customers at the same time and juggle their demos on the staging servers.

You have it completely backwards.

Most enterprise IT projects fail, including at banks. They are extremely saleable, though. Customers don't see the failures as failures. The metrics are not real, and contract renewals do not focus on objective metrics.

This is why you make "$1" with all your banking relationships and actually valuable tacit knowledge, until Accenture notices and makes bajillions, and now Anthropic makes bajillions. Look, I agree that you know a lot. That's not what I'm saying. I'm saying the thing you are describing as a bottleneck is actually the foundation of the business of the IT industry.

Another POV is, yeah, listen, the code speed matters a fucking lot. Everyone says it does, and it does. Jesus Christ.

The customer doesn't need to be shown every "wrong thing".

  • Then how will you know if it's the wrong thing? If you're not user testing, you're just guessing.

    • I build things iteratively all the time, without anyone seeing most of it. I move things up for review and QA after I'm satisfied that I've done the things that need doing, and done them well enough for the purpose. Customers aren't the only opinion on what's good or bad; they're more like the final and most important one.

      Now that process is much, much faster.

    • Presumably you have at least a partial spec and some domain knowledge. If your invoice doesn't show amounts or a recipient address, for example, you don't need a user to tell you that's an error.

  • In my experience, this just makes them lose confidence in you and the company. So when it eventually is right, they're resistant. Worst case, you lose the contract.

    • The opposite is true. If you spoonfeed them every error or gap during development, they'll quickly ditch you for someone who will not bother them with all the things they expect you to know how to fix without their involvement.
attempt != release to customer

When you're building a feature and have different ideas about how to go about it, it's incredibly valuable to build them all, compare, and then build another, clean implementation based on all the insights.

I used to do this before, but pretty rarely, and only for the most important stuff. Now I do it for basically everything. And while 2-4 agents are working on building these options, I have time to work on something else.

That's fair. I'm usually my own customer.

  • I think a lot of the discourse around LLMs fails because of organizational differences.

    I work in science, and I’ve recently worked with a couple projects where they generated >20,000 LOC before even understanding what the project was supposed to be doing. All the scientists hated it and it didn’t do anything that it was supposed to. But I still felt like I was being “anti-ai” when criticizing it.

    I understand that it’s way better when you deeply understand the problem and field though.

    • I'm starting to see this. It's starting to seem like a lot of the people making the most specious yet wild AI SDLC claims are:

      * Hobbyists or people engaged in hobby and personal projects

      * Startup bros, often pre-funding and pre-team

      * Consultancies selling an AI SDLC that wasn't even possible 6 months ago as "the way; proven, facts!"

      It's getting to the point where I'd like people to disclose the size of the team and org they are applying these processes at, LOL.