
Comment by ap99

8 hours ago

> That's your job.

Exactly.

AI-assisted development isn't all or nothing.

We as a group and as individuals need to figure out the right blend of AI and human.

  > AI-assisted development isn't all or nothing.
  > We as a group and as individuals need to figure out the right blend of AI and human.

This is what makes the current LLM debate feel very much like the static typing debate of 15-20 years ago.

"We as a group need to figure out the right blend of strong static and weak dynamic typing."

One can look around and see where that old discussion brought us. In my opinion, nowhere: things are the same as they were.

So, where will LLM-assisted coding bring us? Rhyming it with the static typing debate, I see no outcome other than "nowhere."

  • As a former “types are overrated” person, Typescript was my conversion moment.

    For small projects, I don’t think it makes a huge difference.

    But for large projects, I’d guess that most die-hard dynamic people who have tried typescript have now seen the light and find lots of benefits to static typing.
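    The benefit that bullet describes can be sketched in a few lines. This is a hypothetical example (the `User` type and `displayName` function are invented for illustration, not from the thread): in a large codebase, changing one type definition makes the compiler flag every stale call site, which is exactly the kind of project-wide check a dynamic language defers to runtime.

    ```typescript
    // Hypothetical refactor: User.id changes from number to string.
    interface User {
      id: string; // was `number` before the refactor
      name: string;
    }

    function displayName(user: User): string {
      return `${user.name} (#${user.id})`;
    }

    // Every call site still passing a numeric id now fails to compile:
    //   displayName({ id: 42, name: "Ada" });
    //   error: Type 'number' is not assignable to type 'string'.

    console.log(displayName({ id: "42", name: "Ada" })); // prints "Ada (#42)"
    ```

    In a small script, finding the one broken call site is trivial either way; the compile-time check pays off when the call sites number in the hundreds.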

Seriously. I've known for a very long time that our community has a serious problem with binary thinking, but AI has done more to reinforce that than anything I can think of in recent memory. Nearly every discussion I get into about AI is dead out of the gate because at least one person in the conversation holds a binary view that code is either handwritten or vibe coded. They have an insanely difficult time imagining anything in the middle.

Vibe coding is the extreme end of using AI, while handwriting everything is the extreme end of not using it. The optimal spot is somewhere in the middle. Where exactly that spot is, I think, is still up for debate. But the debate isn't advanced in any way by latching on to the extremes and assuming they are the only options.

  • The "vibe coding" term is causing a lot of brain rot.

    When I see people downplaying LLMs, or describing their poor experiences, it feels like they're trying to "vibe code" but expecting the LLM to do EVERYTHING automatically. They count it as a failure if they have to tell the LLM explicitly to do something a couple of times, or take it as a problem that the LLM didn't "one-shot" something.

    • I'd like correcting the LLM's output to take less time than typing out the code I want myself, and so far I haven't had that experience. Granted, I don't do Python or JS, which I understand the LLMs are better at, but there's a whole lot of programming that isn't in Python or JS...


  • I think you'll find this is not specific to this community, nor to AI, but to any topic involving nuance and trade-offs without a single right answer.

    For example, most political flamefests

I'm only writing 5-10% of my own code at this point. The AI tools are good; people who don't like them just seem to expect them to be 100% automatic with no hand-holding.

Like the people in here complaining about how poor the tests are... but did they start another agent to review the tests? Did they take that review and iterate on the tests with multiple agents?

I can attest that the first pass of testing can often be shit. That's why you iterate.

  • > I can attest that the first pass of testing can often be shit. That's why you iterate.

    So far, by the time I’m done iterating, I could have just written it myself. Typing takes like no time at all in aggregate. Especially with AI assisted autocomplete. I spend far more time reading and thinking (which I have to do to write a good spec for the AI anyways).