
Comment by akmarinov

20 days ago

> I noticed that LLMs need a very heavy hand in guiding the architecture, otherwise they'll add architectural tech debt. One easy example is that I noticed them breaking abstractions

That doesn’t matter anymore when you’re vibe coding it. No human is going to look at it anyway.

It can all be if/else on one line in one file. If it works, and the LLMs can keep working with it, iterating, and implementing new business requirements while maintaining performance and security, then code structure, quality and readability don’t matter one bit.

Customers don’t care about code quality, and the only reason businesses used to care was to make it cheaper to build and ship new things, so they could make more money.

Wild take. Let’s just hand over the keys to LLMs, I suppose; the fancy next-token predictor is the captain now.

  • Not that wild TBH.

    This is a common view, and I think it will be the norm in the near-to-mid term, especially for basic CRUD apps and websites. Context windows are still too small for anything even slightly complex (I think we need to be at about 20M tokens before we start to match human levels), but we'll be there before you know it.

    Engineers will essentially become people who just guide the AIs and verify tests.

    • Have you ever tried to get those little bits of styrofoam completely off of a cardboard box? Have you ever seen something off in the distance and misjudged either what it was or how long it would take to get there?

LLMs need a very heavy hand in guiding the architecture because otherwise they'll code it in a way that even they can't maintain or expand.

  • Hook up something like Taskmaster or Shrimp, so that they can document as they go along and retrieve the relevant context when they overflow their window, which avoids this issue (a rough sketch of the idea is below).

    Then as the context window increases, it’s less and less of an issue.
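
    A rough sketch of what that "document as you go, retrieve on overflow" pattern looks like, with made-up names (this is not Taskmaster's or Shrimp's actual API, just the general idea of persisting notes to disk and reloading only the relevant ones when a fresh context is needed):

    ```python
    # Hypothetical sketch: the agent appends short progress notes to a file on
    # disk, and when its context window overflows it reloads only the notes
    # relevant to the current task instead of the full history.
    import json
    from pathlib import Path

    NOTES_FILE = Path("project_notes.jsonl")

    def record_note(task_id: str, summary: str) -> None:
        """Append a one-line summary of what was just done and why."""
        with NOTES_FILE.open("a", encoding="utf-8") as f:
            f.write(json.dumps({"task": task_id, "summary": summary}) + "\n")

    def recall_notes(task_id: str, limit: int = 20) -> list[str]:
        """Return the most recent notes for a task, to re-seed a fresh context."""
        if not NOTES_FILE.exists():
            return []
        lines = NOTES_FILE.read_text(encoding="utf-8").splitlines()
        records = [json.loads(line) for line in lines if line.strip()]
        relevant = [r["summary"] for r in records if r["task"] == task_id]
        return relevant[-limit:]

    # e.g. record_note("auth-refactor", "Moved token checks into middleware"),
    # then later recall_notes("auth-refactor") to rebuild just enough context
    # to keep the architecture consistent across sessions.
    ```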