Comment by zmmmmm

15 hours ago

It's really weird, I'm seeing across the board that people who never believed in them before are suddenly all into good software eng practices (starting with writing a spec) because of AI.

It's kind of fascinating that we were never willing to do these things for humans, but now that AI needs it ... we are all in. A bit depressing in the sense that I think the main reason we're happy to do it for AI is that we perceive it will benefit us personally rather than some abstract future human.

> It's really weird, I'm seeing across the board that people who never believed in them before are suddenly all into good software eng practices (starting with writing a spec) because of AI.

> It's kind of fascinating that we were never willing to do these things for humans, but now that AI needs it ... we are all in. A bit depressing in the sense that I think the main reason we're happy to do it for AI is that we perceive it will benefit us personally rather than some abstract future human.

I don't think that's the reason.

I think it's because they take time, and few people were willing to put in that time for "maybe it'll make writing the actual code faster" gains when the code itself was already going to take several times longer to write.

You can also get faster feedback to iterate on your spec now, which improves the probability of it helping future-you.

So combine that with the fact that LLMs are more likely to get lost if you don't spec things out in advance, and the value of up-front work is higher (whereas a human is more likely to land on the right track eventually, just more slowly, which makes the value harder to quantify).

  • Yeah, I think a lot of the pushback against best practices is basic cost/benefit; I like writing documentation, but I also often feel a bit depressed that nobody will actually read it in as much detail as I wrote it. But LLMs do / can.

    Actually there's a lot of projection there too; I don't read documentation in detail. And nowadays, I point an LLM at documentation so that it can find the details I would otherwise skip over.

    The destruction of the millennial attention span is real, and it's worse in the younger generations, lmao.

    • Well, it's also just that you have a list of 20 features to add, and if it works, you want to ship it; someone might even get mad if you spend a day dawdling on best practices and documentation and so on. Corporate cultures generally don't have the same long-term thinking about reusability, legibility, and fault tolerance that an individual coder may have about code they want to write once and forget. (Neither do LLMs, for that matter.)

    • Our reduced attention spans are just adaptation to a world filled with meaningless distractions.

      Imagine how crippled you would be if you felt compelled to follow every comment thread to its end.

      We're just monkeys looking for the good bits among a pile of rotten fruit.

My friend at a FAANG was talking about the "massive overhauls to make everything ready for AI". I asked for an example. He said, "basically just documenting the shit out of everything".

I guess that just never occurred to anybody before.

  • The CEO of Uber made the same comment on Diary of a CEO recently. I think it was for their customer service team: they threw their existing docs at an LLM and it was all over the place because policies were poorly documented and defined. The team is now documenting everything from scratch, focusing on outcomes rather than process. TBD if it works out.

    • Yeah, someone made the point in a popular post here recently that all the firings are reducing institutional knowledge. IMHO, replacing that knowledge with LLM-written documentation is potentially even more catastrophic. In the organizations I've worked in, a lot of the useful human knowledge lies in knowing how to handle undocumented edge cases, or situations where the documents are outdated or wrong. Working with LLMs and reminding them to update those docs every time? Good luck. And if the docs touch actual real-world operations, only human operators with hands-on experience are going to recognize the potential conflicts or cognitive dissonance.

    • Companies really want to use AI because it lets them cut the workforce. But today's AI is generally not able to fill in the gaps in processes and documentation the way a human could. Hence the renewed focus on formalizing everything properly: it's the only way this will work.


  • Having the humans document the code seems backward (maybe that's not what they're doing, but "make everything ready for AI" sounds manual). And hopefully there aren't that many scary surprises that humans need to document by hand.

    One of the best parts of LLMs is that you can use them to bootstrap your documentation, or scan for outdated things, etc, far more quickly than ever before.

    Don't just throw a mountain of docs at it and ask it to get everything right; use a targeted process to identify inconsistencies, duplicates, etc., and then resolve those.

    And then you have better onboarding material for the next human OR llm...
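The "targeted process" that comment gestures at could be sketched like this: a first mechanical pass that flags near-duplicate paragraphs across a docs tree, so an LLM only has to adjudicate the flagged pairs instead of ingesting everything at once. This is purely illustrative (the function names, the `*.md` glob, and the `0.85` threshold are my own assumptions, not anything from the thread), using only the standard library:

```python
import difflib
from pathlib import Path


def load_paragraphs(doc_root):
    """Collect blank-line-separated paragraphs from every Markdown file under doc_root."""
    paras = []
    for path in sorted(Path(doc_root).rglob("*.md")):
        for block in path.read_text(encoding="utf-8").split("\n\n"):
            block = block.strip()
            if block:
                paras.append(block)
    return paras


def flag_near_duplicates(paragraphs, threshold=0.85):
    """Return (i, j, ratio) tuples for paragraph pairs whose text is suspiciously similar.

    A crude O(n^2) character-level comparison; a real pipeline would likely
    add heading-aware chunking or embeddings, but this is enough to produce
    a short worklist of "merge, keep both, or delete one?" decisions.
    """
    hits = []
    for i in range(len(paragraphs)):
        for j in range(i + 1, len(paragraphs)):
            ratio = difflib.SequenceMatcher(None, paragraphs[i], paragraphs[j]).ratio()
            if ratio >= threshold:
                hits.append((i, j, round(ratio, 2)))
    return hits
```

The point of the mechanical pre-filter is exactly the commenter's: you hand the model a few targeted questions about flagged pairs, not the whole mountain.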

    • > Having the humans document the code seems backward (maybe that's not what they're doing, but "make everything ready for AI" sounds manual).

      No, that's forward. Any documentation an AI can make, another AI can regenerate. If an LLM didn't write the code, it shouldn't document it either. You don't want to bake in slop to throw off the next LLM (or person).

  • If he's using AI to write that documentation (like everyone else) he'll soon find out why that doesn't work out in the end.

  • There was a recent effort at work to make it possible for agents to provide up-to-date help on how to do various admin/setup tasks. A very sensible goal: We already have lots of documentation, the problem is that it's scattered everywhere and mostly out of date. Turns out the new solution amounted to someone manually going through it all and painstakingly preparing some Markdown files for consumption by said agent.

    Somebody pointed out that those Markdown files might be helpful for people to read directly. Bit of an Emperor's new clothes moment. (I wanted to slap a :rolling_on_the_floor_laughing: reaction on it, but sadly it turns out I'm actually too chickenshit to do that in today's job market.)

My manager just told me that after 12 years of trying to get one of the founders to understand the difference between dev docs and user docs, they tried getting Claude to do it, and he finally got that they're different. He'd been saying this whole time that customers could just read the dev docs. If they could, they wouldn't need our software.

  • How firm is the boundary between a dev doc and a user doc in your opinion? I have found that the overlap can be quite large if the users are also technically proficient. Right now I'm trying to balance "how X works so you can use the app better" with "how X works so you can contribute or build your own plugin". DeepWiki really helps as a backstop for anything not already covered though it's not without its own caveats of course.

Had a similar discussion around prompting. We spent years clearly outlining required data inputs and creating forms to dummy-proof communication from users. Now that AI is in the picture, users are willing to learn to write elaborate Shakespearean-scale prompts. They're more willing to learn how to communicate with a computer than how to communicate clearly with a human.

I always knew the dev world leaned more toward interesting technical challenges and interoperability than toward maximizing the benefit to humanity; it's why I switched to design. However, I didn't realize the intensity of that preference until the entire industry got ridiculously AI-pilled.

It’s an interesting psychological phenomenon. It’s like the way I keep my house way tidier since I got a robot vacuum. Pick things up off the floor for aesthetics’ sake? Nah. Pick them up because the vacuum will attempt to eat them and might get sick? Of course!