
Comment by behnamoh

5 hours ago

    5 years ago: ML-auto-complete → You had to learn coding in depth
    Last Year: AI-generated suggestions → You had to be an expert to ask the right questions
    Now: AI-generated code → You should learn how to be a PM
    Future: AI-generated companies → You must learn how to be a CEO
    Meta-future: AI-generated conglomerates → ?

Recently I realized that instead of just learning technical skills, I need to learn management skills. Specifically, project management, time management, writing specifications, setting expectations, writing tests, and in general, handling and orchestrating an entire workflow.

And I think this will only shift to higher levels of the management hierarchy. For example, in the future we will have AI models that can one-shot an entire platform like Twitter. Then the question is less about how to handle a database and more about how to handle several AI-generated companies!

While we're at the project manager level now, in the future we'll be at the CEO level. It's an interesting thing to think about.

>While we're at the project manager level now, in the future we'll be at the CEO level.

This is the kind of half-baked thought that seems profound to a certain kind of tech-brained poster on HN, but upon further consideration makes absolutely zero sense.

I've never understood this train of thought. When working in teams and for clients, people always have questions about what we have created. "Why did you choose to implement it like this?" "How does this work?" "Is X possible to do within our timeframe/budget?"

If you become just a manager, you don't have answers to these questions. You can just ask the AI agent for the answer, but at that point, what value are you actually providing to the whole process?

And what happens when, inevitably, the agent responds to your question with "You're absolutely right, I didn't consider that possibility! Let's redo the entire project to account for this"? How do you communicate that to your peers or clients?

If AI gets to be this sophisticated, what value would you bring to the table in these scenarios?

  • > what value would you bring to the table in these scenarios?

    I bring the table, AI brings the value.

    • So... nothing. Glad we're in agreement here. If AI can do all the things people hope/dream it can, there won't be any value in doing it on behalf of folks. I would argue that even some "AI provider" (if that could even be a thing given a sophisticated enough agent) would see diminishing returns as the tech inevitably distills into everyone having bespoke agents running locally and handling/organizing/managing everything (of whatever needs managing, who knows).

      Basically I don't see how you can be an AI maximalist and a capitalist at the same time. They're contradictory, IMO.


The moment we have true AGD (artificial general developer), we’ll also have AGI that can equally well serve as a CEO. Where humans sit then won’t be a question of intellectual skill differentiation among humans anymore.

I'd advise caution with this approach. One of the things I see a lot of people get wrong about AI is the expectation that it means they no longer need to understand the tools they're working with: "I can just focus on the business end." This seems true, but it's not. It's actually more important to have a deep understanding of how the machine works, because if the AI is doing things you don't understand, you run a severe risk of putting yourself in a very bad situation: insecure applications or servers, code whose failure modes are catastrophic edge cases you won't catch until they're a problem, data loss or leakage.

If anything, managing the project, writing the spec, setting expectations, and writing tests are things LLMs are incredibly well suited for. Getting their work "correct", rather than "functional enough that you don't know the difference", is where they struggle.

One-shot doesn't mean what you think it means.

One-shot means you provide one full question/answer example (from the same distribution) in the context you give the LLM.
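For reference, the distinction can be sketched with plain prompt strings (the translation task here is purely illustrative, not from the thread):

```python
# Zero-shot: the task alone, with no worked example in the context.
zero_shot = "Translate to French: 'Good morning' ->"

# One-shot: exactly one full question/answer pair from the same
# distribution precedes the actual query.
one_shot = (
    "Translate to French: 'Thank you' -> 'Merci'\n"  # the single example
    "Translate to French: 'Good morning' ->"         # the real query
)

# "One-shotting an entire platform", in the colloquial sense used
# upthread, would actually be zero-shot: a single instruction with no
# example of a finished platform in the context.
```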

> more about how to handle several AI generated companies!

The cost of a model capable of running an entire company will be multiples of the market cap of the company it is capable of running.

  • "AI-generated company" as in the AI writes the A-Z of the code required to have a working platform like Twitter. Currently it can build some of the frontend or some of the backend, but not all. It's conceivable that in the future AI can handle the entire chain.

    Also, you're forgetting the decreasing cost of AI, as well as the fact that you can buy a $10k Mac Studio NOW and run it 24/7 with some of the best models out there. The only costs would be the initial fixed cost and electricity (250 W at peak GPU usage).

    • >Also you're forgetting the decreasing cost of AI

      AI is still being heavily subsidized. None of the major players have turned a profit, and they are all having to do 4D Chess levels of financing to afford the capex.

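The running-cost claim about the Mac Studio is easy to sanity-check. A quick sketch, assuming the quoted 250 W draw runs continuously and an illustrative $0.15/kWh electricity rate (the rate is an assumption, not a figure from the thread):

```python
# Annual electricity cost of a machine drawing 250 W around the clock.
# 250 W comes from the comment above; $0.15/kWh is an assumed,
# illustrative retail rate.
watts = 250
hours_per_year = 24 * 365                       # 8760 hours
kwh_per_year = watts * hours_per_year / 1000    # 2190 kWh
rate_usd_per_kwh = 0.15
annual_cost = kwh_per_year * rate_usd_per_kwh   # 328.50
print(f"{kwh_per_year:.0f} kWh/yr -> ${annual_cost:.2f}/yr")
```

At roughly $330/year under these assumptions, electricity is indeed small next to the $10k fixed cost, which is the comment's point.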

No, no companies and no CEOs. Just a user. It's like the Star Trek replicator: food replication. No, you are not a chef, not a restaurant manager, not an agrifarm CEO, but just a user who orders a meal. So yes, you will need "skills" to specify the type of meal, but nothing beyond that.