Comment by wcfrobert

18 hours ago

> "Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive."

Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. It reminded me of when I had to be extra wordy to meet the 1000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never really were, but in the past, if someone drafted up a twelve-page spec, at least you knew they cared enough to spend a lot of time on it).

So now the "productivity-gain bottleneck" is people who still care enough to review manually.

This paragraph hit home with me as well. I work at a large tech company that's a household name, and the practice of using AI to pad out design documents has gotten totally out of control over the last 4 or 5 months. Writing documentation is arduous and a little painful, which, as it turns out, is a good thing: it incentivizes the writer to be as succinct as possible. Why the fuck should I -- along with five other engineers -- bother to read and review your design if you didn't even bother to write it?

  • I'm starting to see pushback on this. I know a Product Manager who was fired for padding his documentation with AI to the point that there were mistakes and wasted work due to AI hallucinations.

  • I'm taking a distance uni class now, hoping to maybe move away from dev work, and the work I submit for peer review by other students all comes back with AI-generated feedback. It's making me go insane. If I needed AI feedback I'd go ask an AI, but with any communication now it's a coin toss whether you're getting a human reply.

    /rant

    • I wonder if you could ask for a video instead of text, like a screen recording with a voice-over.

      Harder to fake.

  • I see it even on my GitHub project: issues and pull request comments get longer, responses get longer, all generated by AI and read by AI. This text is no longer for human consumption; it just provides context to an AI.

  • I've seen some of this as well. It's OK to send me an agentic screed if it's just going to be consumed by my agent, but I want a nicely written summary up top that was made by you... I'm starting to value poor grammar, typos, and other signs of legitimacy.

  • What I find particularly irritating is that you can actually prompt the fcking AI to be short.

    > Writing documentation is arduous and a little painful, which as it turns out is a good thing as it incentivizes the writer to be as succinct as possible.

    It takes more effort to be brief, even for humans. Good documentation writers were always brief.

I work under the assumption that the primary audience of everything I write at work is an AI. Managers will take what I send and have it summarized and evaluated by some chatbot or agent. (Of course, I cannot send them the summary myself.)

So like ATS checkers for resumes, I find myself needing an AI checker for my text.

Ultimately, we will have AI write everything for another AI to parse, which will be a massive waste of energy. If only there were some agreed-upon set of rules, structures, standards, and procedures to facilitate more efficient communication...

  • If that's how your manager operates, sure, do that. But make sure your manager really is such a manager.

    If I were your manager, and you sent me your seventeen-page AI-generated thing because you think I'm just gonna summarize it anyway and expect something long: you misread me.

    I make a point all the time, to everyone who won't listen: do not send me walls of text. I'm not gonna read them. I'm gonna ignore them and close your bug reports until you've spent the time to make them short and legible. If you use AI for that, I don't care. But it had better be short, make actual sense when I read it, and hold up when I verify it. If I wanted to just ask an AI, I'd do it myself. You have to add value on top of the AI if you want to be valuable yourself.

    • I agree. I send two-sentence replies to most things my boss's boss sends me. He's near retirement; the dude doesn't want me to send him a book. He knows the thinking under the work our team is doing is solid.

      The only time I send something longer is if it’s a postmortem for some prod issue, which I write by hand.

      I use AI every day, often multiple agents at once, but I know when it's appropriate and when I need to be the one thinking really hard about something.

  • I'm going through this with my vendor budgets and contract negotiations right now. We are encouraged to put all their proposals into AI and have it refute each point. I know for a fact they are putting my negotiations into their own AI and having it counter-propose my points. It's an arms race of my AI fighting their AI. Where does it end?

  • I’m too lazy to tell the AI what I want to say, then copy and send its output.

    I just type what I want to say and hit send. YOLO

    • > I just type what I want to say and hit send. YOLO

      Made me smile. Perhaps the new term for a human, hand-written reply ("I didn't use AI") is "I YOLOed it".

  • I'll argue there's potentially a standards-based advantage at the end, when this all shakes out.

    It will probably take a couple hundred years but I'm pretty sure I'm right about this :)

    • I'm also sure about things that will happen after me and my whole audience are dead.

  • I have a hard time finding any reasons for the S̶k̶y̶n̶e̶t̶ owners of Skynet not to get rid of that walking bipedal inefficiency called the human.

    API or die /s.

    Seriously, though, fuck that shit!..

> Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafts up a twelve page spec, at least you know they care enough to spend a lot of time on it).

I feel the loss of this signal acutely. It's an adjustment to react to a 10-30 page "spec" chock-a-block with formatting and ASCII figures as if it were a verbal spitball... because these days it likely is.

> Requirements documents that were once a page are now twelve.

Man, I see this on Jira: a PM or BA is like "yeah, I'll write that AC for you," and out comes a giant bullet list filled with a bunch of emojis and checkmarks.

  • Does anyone know where that style came from? Did it become popular in listicles or on github or something? Or is there one person deep inside OpenAI or Anthropic who built the synthetic data pipeline and one day made the decision on a whim to doom us to an eternity of emoji bullet points?

    • I think it likely performed well in A/B preference tests with chat users.

      I've noticed Claude does far fewer listicles than ChatGPT. I suspect they don't blindly follow supervised-learning feedback from chats as much as ChatGPT does. I get an Apple-vs-Google design vibe from those two companies, in that Apple tends not to obsess over interaction data, instead using design principles, while Google just tests everything and has very little "taste."

      In general I feel like the data approach really blinds people to the obvious problem that "a little" of something can be preferable while "a lot" of the same is not. I don't mind some bullet points here and there but when literally everything is in bullet points or pull quotes it's very annoying. I prefer Claude's paragraph style.

      I suppose the downside is that using "taste" like Apple does can potentially lead a product design far, far away from what people want (macOS 26), more so than a data approach, whereas a data approach will not get it so drastically wrong but will never feel great.


    • I first noticed it when Notion became popular.

      All of the PMs I interacted with across companies started using Notion for everything at the same time. Filling Notion documents with emojis was the style of the time.

      This slightly pre-dated AI tools becoming entirely usable for me.


    • It's the "blazing fast library made with :heart: in Rust :crab:" style that was popular in GitHub README.md files. My guess is that because the models are told to use Markdown, they overfit to the style of Markdown documents too.

    • Both predate common use of LLMs, unless my memory is even more shaky than usual on this. I'm sure I saw them appear a fair amount on GitHub and related project pages, but I couldn't tell you more specifically how they started & grew.

      Somehow they must have been over-represented in the training data (or something in the tokenising/training/other processes magnifies the effective presence of punctuation) because I don't remember them being that common and LLMs seem to love spewing them out. Or perhaps it is a sign of the Habsburg problem: people asked LLMs to produce README files like that because they'd seen the style elsewhere, it having spread more organically at first, and the timing was just right for lots of those early examples to get fed back into training data for subsequent models.

    • It was an annoying way of writing in places like LinkedIn and marketing copy for 3 or 4 years before LLMs appeared on the scene. I remember realising, before AI appeared, that I can't read them (my brain jumps between the words and the pictures, making it hard to focus on the content).

  • You're not supposed to read the Jira ticket. You're supposed to paste the link along with instructions for your Claude agent to "do this ticket, no mistakes," then raise an MR for whatever it writes. The text is a wire protocol between agents. If a PM doesn't care enough about the requirements to write, or even read them, then would they even notice if the code works or not? Why would they care about that? What does "works" even mean if no human knows the spec?

    How quickly we become reverse centaurs.

  • God I hate the emoji and checkmark usage so much. It feels so try-hard cutesy.

    Just give me normal bulleted items, I can read.

    • I like them. It tells very clearly how much effort went into someone's work.

      I like them even more on code comments. It tells _precisely_ how much effort went into the pull request, so I don't spend time reviewing lazy work.


    • Checkmarks as bullets on progress/comparison lists I really like, assuming you mean //. The emoji properly put me off looking deeper into whatever it is that I am looking at unless I was really interested to start with.

      Both predate common use of LLMs, unless my memory is even more shaky than usual on this, but must have been over-represented in the training data (or something in the tokenising/training/other processes magnifies the effective presence of punctuation) because LLMs seem to love spewing them out.

I wish cultural norms around documentation would shift to "pull" rather than "push" — generating "views" of organized knowledge on the fly instead of making endless rearrangements of the same information. It's become too cheap in terms of proof of (mental) work to spray endless pages of notes, reports, memos, decks, etc. but the "documentation is good" paradigm hasn't caught up yet.

Ideally AI would minimize excessive documentation. "Core knowledge" (first principles, human intent, tribal knowledge, data illegible to AI systems) would be documented by humans, while AI would be used to derive everything downstream (e.g. weekly progress updates, changelogs). But the temptation to use AI to pad that core knowledge is too pervasive, like all the meaningless LLM-generated fluff all too common in emails these days.

I work for an "AI-native" company now and have found this to be the case.

EVERYONE (engineers, pms, managers, sales) uses Claude Code to read and write Google Docs (google workspace mcp). Ideas, designs, reports. It's too much for one person to read and, with a distributed async team, there's an endless demand for more.

So for every project there's always one super Google Doc with 50 tabs and everyone just points their claude code at it to answer questions. It's not to be read by a human, it's just context for the agent.

  • This is literally losing the whole process to a stochastic parrot.

    • They are so far removed from the process that they can claim to be any percentage more productive and no one is able to contradict them. Call it "productivity theatre."

      The economic reality check is going to be devastating. It won't be a crash of AI as a tech; it will be a crash of every "AI native" company that no longer even knows what its product is.


    • To be fair, a lot of those people were stochastically parroting by themselves for years already. They are just capable of stochastically parroting more now.

      These companies have enough market power that they can afford to be ineffective. So they were. And now they are ineffective in a novel way.

> The "elongation" of workplace artifacts resonated with me on such deep level

Well put. I generally skip AI-generated PR descriptions for this reason as they tend to miss the forest for the trees. Sometimes a large change can be explained by a short yet information-rich description ("migrate to use X instead of Y", "Implement F using pattern P") that only a human could and should write.

  • We need to demand better from our coworkers and from ourselves.

    A young "AI native" coworker opens PRs with three-screen slop descriptions. I flagged that "I know he ain't reading all that, and therefore I ain't reading all that," and told him to just give a half-screen overview at most. I expect that the PR description makes sense, is correct, and has been reviewed by the person opening the PR. You can still use agents for that, but with shorter descriptions there is at least a chance it's not complete BS.

I just don’t read this crap. The problem solves itself since anyone sending me that isn’t going to bother to follow up about it anyway.

  • Unfortunately, there is pressure to treat this stuff in good faith. Maybe the PR author really did write all this. Maybe they really did spend 6 hours writing this document.

    So I approach it in good faith, but I do get upset when people say "I'll ask claude". You need to be the intermediary; I can also prompt claude and read back the result. If you are going to hire an employee to do work on your behalf, you are responsible for their performance at the end of the day. And that's what an AI assistant is. The buck stops with you. But I don't think people understand that, or understand that they aren't adding value. At some point, you have to use your brain to decide whether the AI is making sense; that's not really my job as the code/doc reviewer. I want to have a conversation with you, not your tooling, basically.

    • > If you are going to hire an employee to do work on your behalf, you are responsible for their performance at the end of the day.

      So, what you are saying is that I should fire the bottom N% of underperforming agent instances?

      You know, like employers do as opposed to taking any responsibility?

    • > I do get upset when people say "I'll ask claude"

      The dude is just acting like a manager with a technical employee (agent) who does the hands-on work. If you are upset about this you should be hopping mad about the whole manager-director-VP-SVP hierarchy above this dude.


  • They likely haven’t read it either, so they’ll never know you didn’t, either.

  • I just stopped reading my work emails and the announcement channels. Everything that actually matters either ends up DMed to me or shows up in my calendar.

This cracked me up!

I used to have a colleague (senior engineer) who never cared to write a single line in Pull Request descriptions, as if other people had to magically know what he meant to achieve with such changes.

Now? His PRs have a full page description with "bulleted summaries of bulleted summaries"!

  • My colleague had a problem with commit messages, so now they're all written by AI. I don't know what depth of hell he got the prompt from, but they're all now in the format "Updated /path/to/file: fixed issue in thingamabob", which means they're all at least 200 characters long and half of each is the file path, an absolutely pointless thing to put in a commit message. The best part is that in GitLab or GitHub, instead of seeing the commit message next to the file, you just see the file name again, and then the message is cut off.

> Reminded me of when I had to be extra wordy to meet the 1000 minimum word limit for my high school essays.

Minimum word counts are the greatest disservice high school and college have ever done to future communication skills. It takes years for people to unlearn this in the workplace.

Maximum word counts only, please. Especially now, with AI making it so easy to produce fluff with no signal.

  • I write the words that I hear in my head, as though I am speaking. With the exception of timed, in-class essays, I always turned in papers far in excess of any minimum during high school.

    In college, I took a constructive writing course because I thought "Hey, easy A!" After the second or third week, the professor told me that, while the class had a word minimum, I would also be given a separate word maximum. She said I needed to learn brevity and simplicity, before anything else.

    The point being: I was able to cruise through high school with my longwindedness as a cheat code, never stressing about minimum lengths, despite my writing being crap in other ways.

    Although I have regressed in the two decades since, it helped me a good deal. I am grateful to that professor for doing that.

    • I write a lot and have on several occasions tried dictation as an initial draft authoring step. It was trash every time.

      Good for thinking through a concept but unsalvageable in the edit phase. Easier to throw away and rewrite now that you know what to say.

      Nowadays I like conversation as an ideating step. Talk to a bunch of people, try to explain yourself until they get it, see what questions they ask. Sometimes in HN threads like this :)

      Then write it down.

      You get super high signal writing where every sentence is load bearing. I’ve had people take my documents and share them around the company as “this is how it’s done”

      It can take weeks of work to produce a 500 word product vision document. And then several months to implement, even with AI.


    • I design boardgames and it's easy to write a lot of rules. It's more difficult to write concise rules. Most of my time is spent editing rules to their absolute minimum.

      "I have made this letter longer than usual, only because I have not had time to make it shorter." - Blaise Pascal


    • I had the opposite issue. Writing was agony, and every section would be written, reviewed, and rewritten to get my point across; only to be tortured by a minimum word count that was still 20% away after I'd said all I could think of saying.

      I've gotten better at phrasing myself adequately in one go. Rote mechanical memorization has also made writing itself cheaper. (Read my username.)

      I can now yap quite adequately over text, yet I regularly find AIs at minimum 2x as verbose as my preferred phrasing after manual word-mashing.

  • Same as the heavy focus on rewording things "in your own words": it's basically teaching you to plagiarise by cheating. I find it distasteful.

    Even though near-copying is everywhere (patents, graphic design, business), in other areas it is often applauded and less obviously deceptive.

    We talk about countries copying; Japan, for example, was notorious for it. I think the underlying motivation there is ownership: greedy people feeling they own everything (arts and technology). "We own that and you stole it from us," along with the entitlement of never recognizing when they copy others.

  • Minimum word lengths were really a terrible idea and I wonder what arguments were used to get all the teachers to buy into that system.

    • Considering that many high school kids won’t want to put in any effort at all, how else do you convey the amount of detail and effort you expect for a given writing assignment? It’s an imperfect proxy but I can’t think of a better one.


    • A second of critical thinking on this topic makes it abundantly obvious why this line of questioning is anti-education and anti-intellectual. You write in school to practice. Not just composition, but grammar, spelling, individual sentences. Practice requires volume.

      Subject yourself to a classroom of kids that you must teach to write, and throw out the minimums. Will some students do fine? Sure, of course. But what of the others, who turn in one sentence? Who never grow? Who have to go into math class and hear their idiot parents say "why are you learning that, we have calculators"?


    • It can help to force depth into a topic that requires it, and more expression and emotion into writing where that is of value. It also forces the writer to think more deeply about the topic and organize their thoughts.

      While I hated it in high school, I think I better understand it now. Part of the problem is that they never explained the "why" or the "how", just the requirement. I wasn't able to write anything more than a page or two without extreme difficulty until college, when the requirements went up to 30 pages.

      In theory, someone who can write a 30 page paper could effectively distill it down to a short memo when needed, summarizing their primary point(s). Someone who can only write short memos would have a hard time writing something longer one day if/when required. I was trying to do a knowledge transfer one day, opened up Word, and just typed 20 pages on everything I knew about a tool we used heavily, but wasn't documented anywhere. I don't think I could have done that before I was forced to write those longer papers in college.

    • Where I encounter it at the higher education level is that academic-level research almost universally has maximum word counts or page counts rather than minimums: if you think you can get your point across in fewer words, you should. No reviewer is going to object to the paper being too short, so long as you succeeded in making your case.

      John Nash's Ph.D. Thesis is notorious for being short: it's still 27 pages (typed, with hand-written equations and a whopping total of two citations) but that's an order of magnitude below average. On the other hand, most of us don't invent game theory.

      Students used to minimum-word-count essays sometimes have to do some self-retraining to realize that the expectation is that you have more that you want to say than you have room to say it, and the game is now to figure out how to say more in fewer words.


    • Journalists and writers are often given a deadline and a target length. "Give me 500 words of copy by the end of tomorrow." The editor and publisher of a magazine need to get all words and graphics ready by a strict and regular deadline.

    • The idea was to get people to include more substance. Instead of just saying "Washington crossed the Delaware" to get students to include reasons why, impacts, further narrative, etc. IDK if it was effective or not. Probably at least a little; there's only so many ways to rewrite the same thing over and over. I know in my case though I submitted essays below the word count a few times, but since I actually included the content they were looking for I didn't have any problems

Well, in many layers of overhead in companies, people operate at the level of high schoolers, so it is no surprise, unfortunately, that the output comes across like that too.

It was only after I had to manage others that I realized the logic behind a lot of these simplistic metrics and rules: they are in place to hold the worst performers accountable. A simple example: when I introduced flexible work hours, it was fine with most people, but there are always a few members who abuse the system. They stretch it to the very limit of what can be interpreted as "flexible". As a manager, it posed a dilemma for me. I didn't want to take away this privilege just because of a few abusers, but it was both unfair and set a bad precedent if I let them get away with it. And let's say they couldn't be easily fired. Most of my peers simply ended up going back to a system where people punched in and out.

  • Couldn't you just say to those few, "you can't, because I don't trust you"? You are the manager, after all; your job is not to make them feel good but to make them work.

    • I don't think "some people on the team have privileges and others don't, based on the manager's discretion" would be healthy in the long run either. Can you imagine interviewing for a team, asking about the PTO policy, and finding out that it varied like that? It would look pretty indistinguishable from "the people the manager likes get special treatment." You could hide it from prospective employees, but then learning from a teammate that the manager had revoked their privileges (a teammate who presumably has a chip on their shoulder about it and presents the info with their own biases) would make me worry there had been a bait-and-switch and that I was now stuck on a toxic team.

I remember my first semester university writing class, when on the first day the teacher told us we had learned to pad our writing in high school, and now we were going to learn how to be short and concise because every assignment would be limited to one page.

  • I had a "Violence in the Political System" professor who only assigned executive summary research assignments. No more than one page.

    His explanation: I don't want to read more than that, and you should be able to fit all the most important details in one page.

    Great lesson.

> Reminded me of when I had to be extra wordy to meet the 1000 minimum word limit for my high school essays.

A huge AI signal to me is not em dashes, not emoji, not even the "not X, it's Y" construction which oh god I'm falling into the trap right now aren't I.

It's a combination of these factors plus a tendency to fluff out the piece with punchy but vague language, often recapitulating the same points in slightly reworded ways, that sounds like... an eighth grader trying to write an impressive-sounding essay that clears the minimum word limit.

Did the bright sparks who trained these things just crack open the printer paper boxes in their parents' homes filled with their old schoolwork, and feed that into the machine to get it started?

  • Another commenter above this proposed a pretty compelling theory for the source of this style: SEO-inflated prose online. If the models were trained on the internet, "higher quality" content needed to be indicated to them during RL somehow. Search engine ranking is an easy-to-obtain metric that's kind of like "quality" if you squint, turn around, and lobotomize yourself. So the AIs have a high likelihood of producing the kinds of content that is rewarded by Google SEO.

    • Bingo, but I also think it is just the nature of the technology. It is going to be wordy, but not usefully so.

  • Another hint is when the structure and formality of the response doesn’t match the medium. Like when someone sends you a whole article back in DMs along with headings for the sections.

    Even though real humans write like that when writing documents, they never did that in informal messaging.

It's actually insane that this sort of thing is tolerated. It's a culture thing, and frankly just rude. My org is pretty AI-pilled, and this type of behavior will just not fly. I need to be assured I'm talking to a human who is using their brain.

  • If I paste something from an AI into chat, I always identify it as such by saying something like "my claude instance says this:". I also don't blindly copy paste from it, I always read it first and usually edit it for brevity or tone. Feel like this should be the absolute minimum for sending AI content to a person.

    • Even that is pretty useless because we have no idea what context "your Claude instance" has. All you're doing is dressing up some bullshit to look authoritative.

      When I started my PhD I was already really good at typesetting with LaTeX. I started to bring in fully typeset works in progress for my supervisor to read through. These proofs often had fatal flaws. He asked me to stop typesetting until after the work had been verified because it looked too convincingly correct due to being typeset.

      That was about 15 years ago but I've never forgotten it. Drafts should look like drafts. Scrappy work and proofs of concept should look as such. Stop fucking with people by making your bullshit, scrappy ideas look legit. Progress is a cooperative effort. It's not about trying to make people say yes.


  • I see it as rude as well. The literal interpretation is: "your time is worth absolutely nothing to me."

  • There are people who use AI to solve problems, and then there are people who have completely offloaded all of their thinking to LLMs. I have a manager who, when asked a question, won’t think about it even for a moment and will just paste back paragraphs of AI-generated text.

This is happening at my place as well. I am a senior leader, but I find it hard to push back on this. If something looks plausible and everyone has reacted with a thumbs-up (but probably only skimmed the document), who wants to be the first one saying “what is this shit?”

The length itself is not an indicator per se, but you can sense when it is not honest. If others do not have that sense, it just looks like you're complaining about something new.

Since we're all so trusting of AI, maybe we can use AI to score how "excessively wordy" communications are, and pressure people to stop.

In my experience, though, I'm pasting a lot more into AI to get the high-level summary.

  • And they are generating the longer version with AI, that you are then using AI to summarize.

    This is not adding value for anyone except people whose function is to look busy, and people trying to avoid their busy work.

    • Yes, I don't find AI-generated documents useful; they just add a ton of fluff. But my point was that at least it's removable fluff.

    • Put that way it's basically competitive evolutionary pressure to exhaust the context window of the other LLM.

  • That’s the funny thing: the only way to battle it is with more AI.

    In the future everyone will have a bot, and our bots will just handle all interactions.

Whenever I see a document with horizontal rules between headers and the blues and purples that Claude Cowork adds to .docx files, I sigh.

  • Whenever I see AI-generated content put forward for my attention, I extract myself from the situation with the minimum possible time expenditure from my side.

    It's a sort of leverage: "I spend 5 minutes prompting so that you spend 30 minutes reviewing." Not gonna happen, LLM buddies.

>>The "elongation" of workplace artifacts resonated with me on such deep level.

The bulk of pretty much everything is fluff, not just workplace artifacts.

In many ways this is the root of all complexity.

“Anything more than the truth would be too much.”

- Robert Frost