Comment by bloppe
10 hours ago
My friend at a faang was talking about the "massive overhauls to make everything ready for ai". I asked for an example. He said "basically just documenting the shit out of everything"
I guess that just never occurred to anybody before.
The CEO of Uber made the same comment on Diary of a CEO recently. I believe it was about their customer service team: they threw their existing docs at an LLM and the results were all over the place because policies were poorly documented and defined. The team is now documenting everything from scratch, focusing on outcomes rather than process - TBD if it works out.
Yeah, someone made the point in a popular post here recently that all the firings are reducing institutional knowledge. IMHO, replacing that knowledge with LLM-written documentation is potentially even more catastrophic. Just from organizations I've worked in, a lot of the useful human knowledge is in knowing how to handle undocumented edge cases, or situations where the documents are outdated or wrong. Working with LLMs and reminding them to update those docs every time? Good luck. And if it's something where the docs touch actual real-world operations, that's an area where only human operators with hands-on experience are going to recognize the potential conflicts or cognitive dissonance.
Companies really want to use AI because they can cut the workforce. But today's AI is generally not able to fill in the gaps in processes and documentation a human could. Hence the renewed focus on formalizing everything properly because it's the only way it will work.
If he's using AI to write that documentation (like everyone else) he'll soon find out why that doesn't work out in the end.
Having the humans document the code seems backward (maybe that's not what they're doing, but "make everything ready for ai" sounds manual). And hopefully there aren't that many scary surprises that humans need to manually document.
One of the best parts of LLMs is that you can use them to bootstrap your documentation, or scan for outdated things, etc, far more quickly than ever before.
Don't just throw a mountain at it and ask it to get it right, but use a targeted process to identify inconsistencies, duplicates, etc, and then resolve those.
And then you have better onboarding material for the next human OR llm...
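That "targeted process" can be surprisingly low-tech before any LLM gets involved. A minimal sketch, assuming the docs are plain-text/Markdown paragraphs (the `near_duplicates` function, the 0.9 threshold, and the 40-character minimum are all my own illustrative choices, not anything from the thread):

```python
# Sketch: flag near-duplicate paragraphs across docs so a human (or LLM)
# can resolve them one at a time, instead of throwing the whole mountain
# at a model and hoping it gets it right.
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(docs, threshold=0.9):
    """docs: iterable of (doc_id, text) pairs.

    Yields (location_a, location_b) for paragraph pairs whose similarity
    ratio meets the threshold -- likely copy-pasted or drifted policy text.
    """
    paras = []
    for doc_id, text in docs:
        for i, para in enumerate(text.split("\n\n")):
            para = para.strip()
            if len(para) > 40:  # skip headings and stubs
                paras.append((f"{doc_id}#{i}", para))
    for (loc_a, a), (loc_b, b) in combinations(paras, 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            yield loc_a, loc_b

if __name__ == "__main__":
    docs = [
        ("refunds.md", "Refund policy: customers may request a refund within 30 days of purchase."),
        ("support.md", "Refund policy: customers can request a refund within 30 days of purchase."),
    ]
    for a, b in near_duplicates(docs):
        print(f"possible duplicate: {a} <-> {b}")
```

Output of a pass like this is a short worklist a human can actually review, which is exactly the kind of thing that also makes better onboarding material.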
The humans are the only ones who know the why. The what is cheap.
> Having the humans document the code seems backward (maybe that's not what they're doing, but "make everything ready for ai" sound manual).
No, that's forward. Any documentation an AI can make, another AI can regenerate. If an LLM didn't write the code, it shouldn't document it either. You don't want to bake in slop to throw off the next LLM (or person).
AI might actually RTFM
It would / should / can, but there are big efforts to reduce token consumption now, so AI will likely skim and cherry-pick documentation just like real humans.
There was a recent effort at work to make it possible for agents to provide up-to-date help on how to do various admin/setup tasks. A very sensible goal: We already have lots of documentation, the problem is that it's scattered everywhere and mostly out of date. Turns out the new solution amounted to someone manually going through it all and painstakingly preparing some Markdown files for consumption by said agent.
Somebody pointed out that those Markdown files might be helpful for people to read directly. Bit of an Emperor's new clothes moment. (I wanted to slap a :rolling_on_the_floor_laughing: reaction on it, but sadly it turns out I'm actually too chickenshit to do that in today's job market.)