Build Software. Build Users

4 days ago (dima.day)

I like the idea. As a solo dev I've experimented with creating Claude subagents that act as "team leads" with multiple perspectives, and I run ideas through them (in parallel). The subagents are just simple markdown files explaining the various perspectives that are usually in contention when designing stuff, plus a 'decider' that gives me an executive summary.

  agents/
    |-- customer-expert.md - validates problem assumptions, customer reality
    |-- design-lead.md - shapes solution concepts, ensures UX quality
    |-- growth-expert.md - competitive landscape, positioning, distribution
    |-- technical-expert.md - assesses feasibility, identifies technical risks
    |-- decider-advisor.md - synthesizes perspectives, executive analysis
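
Each file is just a short persona prompt. A rough sketch of what customer-expert.md might look like (the frontmatter fields are illustrative, not my literal file; adapt them to however your setup registers subagents):

  customer-expert.md
    ---
    name: customer-expert
    description: Validates problem assumptions against customer reality
    ---
    You are the customer research lead on a small product team.
    For any proposal you are shown, answer:
    - Which assumptions about the customer does it rely on?
    - What evidence, if any, supports each assumption?
    - What is the cheapest way to test the riskiest one with real customers?
    Be blunt and flag anything that sounds like wishful thinking.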

  • I've experimented with something similar - my flow is to have the subagents "initialize" a persona for the task at hand, and then have the main thread simulate a debate between the personas (rough sketch below). Not sure if it's the best approach, but it's helpful to get a diversity of perspectives on an issue.
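
    A minimal sketch of that loop, assuming the Anthropic Python SDK and the persona markdown files already loaded as plain strings (the model id, round count, and prompt wording are arbitrary choices, not anything prescribed):

      import anthropic

      client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

      def ask(system_prompt: str, transcript: str) -> str:
          resp = client.messages.create(
              model="claude-sonnet-4-20250514",  # any current model id
              max_tokens=800,
              system=system_prompt,
              messages=[{"role": "user", "content": transcript}],
          )
          return resp.content[0].text

      def debate(idea: str, personas: dict[str, str], decider_prompt: str, rounds: int = 2) -> str:
          transcript = f"Proposal under discussion:\n{idea}\n"
          for _ in range(rounds):
              for name, persona_prompt in personas.items():
                  reply = ask(persona_prompt, transcript + f"\nRespond as {name}.")
                  transcript += f"\n[{name}]\n{reply}\n"
          # The 'decider' sees the whole transcript and writes the executive summary.
          return ask(decider_prompt, transcript + "\nSummarize the debate and recommend a decision.")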

This is interesting, and I think worth trying. However,

    The process is iterative:

    Vibe code users <--> Vibe code software

    Step by step, you get closer to truly understanding your users

Do not fool yourself. This is not "truly" "understanding" your "users". This is a model which may be very useful, but should not be mistaken for your users themselves.

Nothing beats feedback from humans, and there's no way around the painstaking effort of customer development to understand how to satisfy their needs using software.

  • I agree. I do like the general idea as an exploration.

    Perhaps the idea is to use an LLM to emulate users so that some user-facing problems can be detected early.

    It is very frustrating to ship a product and hit a showstopper right out of the gate that was missed by everyone on the team. It is also sometimes difficult to get accurate feedback from an early user group.

A bit too vague to be useful advice, don't you think?

Why not show some actual examples of these agents doing what you describe? How exactly would you set up an agent to simulate a user?

  • To me it sounds like one way to do this would be to have LLMs write Cucumber test cases. Those are high-level, natural-language tests that could be run in a browser. For example:
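
    Something like this, as a rough illustration (the scenario itself is invented; the idea is that an LLM playing the "user" drafts Gherkin scenarios like this and a step-definition layer drives them in a browser):

      Feature: First-run signup
        Scenario: New visitor creates an account
          Given I am on the landing page
          When I click "Sign up"
          And I enter an email address and a password
          Then I should see my empty dashboard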

Another fart in the wind. How to write lots of “programming philosophy” and say nothing.

Good point. I think it is time to remove the line between engineering and product management completely. Because we can.

> LLMs likely have a much better understanding of what our users need and want.

They don't.

Basically this sounds like Agentic Fuzz Testing. Could it be useful? Sure. Does it have anything to do with what real users need or want? Nope.

This is ridiculous. I doubt this would work with a general AI, but it surely cannot work with LLMs, which understand exactly nothing about human behaviour.

  • They may not understand it, but they may very well be able to reproduce aspects of feedback and comments on similar pieces of software.

    I agree that the approach shouldn’t be done unsupervised, but I can imagine it being useful to gain valuable insights for improving the product before real users even interact with it.

    • > reproduce aspects of feedback and comments on similar pieces of software

      But this is completely worthless or even misleading. There is zero value in this kind of "feedback". It will produce nonsense which sounds believable. You need to talk to real users of your software.

Yeah, I have built a Product Hunt alternative for solo founders, Solo Launches, to give them visibility. I've gotten 290+ users from it so far, it's free, and it gives a good DR dofollow backlink. https://sololaunches.com