Comment by viccis

8 days ago

Haven't seen this mentioned yet, but the worst part for me is that a lot of management LOVES to use Claude to generate 50-page design documents, PRDs, etc., and send them to us to "please review as soon as you can". Nobody reads it, not even the people making it. I'm watching some employees just generate endless slide decks of nonsense and then waffle when asked any specific questions. If any of that is read, it is by other people's Claude.

It has also enabled a few people to write code or plan out implementation details who haven't done so in a long time (sometimes a decade or more), and so I'm getting some bizarre suggestions.

Otherwise, it really does depend on what kind of code. I hand write prod code, and the only thing that AI can do is review it and point out bugs to me. But for other things, like a throwaway script to generate a bunch of data for load testing? Sure, why not.

I've been tasked with code reviews of Claude-chatbot-written code (not Claude Code, which has RAG and can browse the file system). It always lacks any understanding of our problem area, 75% of the time it only works for a specific scenario (the prompted case), and almost 100% of the time, when I comment about this, I'm told to take it over and make it work... and to use Claude.

I've kind of decided this is my last job, so when this company folds or fires me, I'm just going to retire to my cabin in the rural Louisiana woods, and my wife will be the breadwinner. I only have a few 10s of thousands left to make that home "free" (pay off the mortgage, add solar and batteries, plant more than just potatoes and tomatoes).

Though, post retirement, I will support my wife's therapy practice, and I have a goal of running silly businesses that are just fun to do (until they aren't), like my potato/tomato hybrid (actually just a graft) so you can make fries and ketchup from the same plant!

  • >so you can make fries and ketchup from the same plant!

    We should be friends. I like your ideas.

    • I'm always looking for people to share my weird ideas with that have absolutely nothing to do with software or computers. Unfortunately my only friends are all software people who have no interests outside of computers, something I've found I have very little interest in anymore.

      1 reply →

  • You got any land down there? I would like to be close to you and, post retirement, eat said french fries daily.

    • This might be a little dark, but the majority of our street is very elderly, and none of their families want to move over here.

      They were the original non-familial homesteaders from 50+ years ago, when all this land was my wife's great-grandfather's and he sold off small plots to people. He, in fact, inherited it from his father, who bought a half-mile square back in the 20s or 30s (I believe). The first house on the road was his (Great-Great-Grandpa's). The road WAS his driveway; then slowly but surely new generations of the family started building houses a few hundred yards away from each other, then they started selling plots to people in the 60s, and sold the last of the original land in 2023, about a year before grandpa passed.

      Now the only land left in "the family" is this 1.25-acre plot that I live on. I don't really have the desire to buy more from the folks who are dying, but my neighbor has already bought up about half of the vacant land.

  • That sounds lovely. I think too many people get attached to the structure of life as they've lived it for the last n years and resist natural phase transitions for far too long. Good luck with retirement and your dream of being the botanical equivalent of the mean kid from Toy Story:p

I noticed that what previously took 30 minutes now takes a week. For example, we had a performance issue with a DB; previously I'd just create a GSI (global secondary index). Now there is a 37-page document with explanation, mitigation, planning, steps, reviews, risks, a deployment plan, obstacles, and a bunch of comments. But sure, it looks cool and very professional.

  • I'm now out of the workforce and can't even imagine the complexity of the systems as management and everyone else communicate plans and executions through Claude. It must already be the case that some codebases are massive behemoths few devs understand. Is Claude good enough to help maintain them and help devs stay on top of the codebase?

    • The code is fine; strong reviews help, and since we're slower due to all the slop, communication helps too.

I quit my last job because of this. I'm pretty sure my manager was using free ChatGPT with no regard for context length either, because not only was it verbose, it was also close to gibberish. Being asked to review urgently and estimate deadlines got old real fast.

If you shove clearly AI generated content at me, I will use an AI to summarize it.

Or I'll walk up to your desk and ask you to explain it.

  • Jump straight to the second option. You have to presume that the content they sent you has no relation whatsoever to their actual understanding of the matter.

  • I actually think there’s almost an acceptable workflow here of using LLMs as part of the medium of communication. I’m pretty much fine with someone sending me 500 lines of slop with the stated expectation that I’ll dump it into an LLM on my end and interact with it.

    It’s the asymmetric expectations—that one person can spew slop but the other must go full-effort—that for me personally feels disrespectful.

    • I also don't mind that. Summarized information exchange feels very efficient. But for sure, it seems like a societal expectation is emerging around these tools right now: expect me to put as much effort into consuming data as you did producing it. If you shat out a bunch of data from an LLM, I'm going to use an LLM to consume that data as well. And it's not reasonable for you to expect me to manually parse that data, just as I wouldn't expect you to do the same.

      However, since people are not going to readily reveal that they used an LLM to produce said output, it seems like the most logical way to do this is to just always use an LLM to consume inputs, because there's no easy, 100%-reliable way to tell anymore whether something was created by an LLM or a human.

      4 replies →

    • I think we'll eventually move away from using these verbose documents, presentations, etc for communication. Just do your work, thinking, solving problems, etc while verbally dumping it all out into LLM sessions as you go. When someone needs to be updated on a particular task or project, there will be a way to give them granular access to those sessions as a sort of partial "brain dump" of yours. They can ask the LLM questions directly, get bullet points, whatever form they prefer the information in.

      That way, thinking is communication! That's kind of why I loved math so much - it felt like I could solve a problem and succinctly communicate with the reader at the same time.

      1 reply →

    • If you write 3 bullet points and produce 500-pages of slop why would my AI summarise it back to the original 3 bullet points and not something else entirely?

      1 reply →

    • is this better than normal communication in any way, or just not much worse?

    • > It’s the asymmetric expectations—that one person can spew slop but the other must go full-effort—that for me personally feels disrespectful.

      This has always been the case. Have some junior shit out a few thousand lines of code, leave, and leave it for the senior cleanup crew to figure out what the fuck just happened...

  • If you shove content at me that I even suspect was AI generated I will summarily hit the delete button and probably ban you from sending me any form of communication ever again.

    It's a breach of trust. I don't care if you're my friend, my boss, a stranger, or my dog - it crosses a line.

    I value my time and my attention. I will willingly spend it on humans, but I most certainly won't spend it on your slop when you didn't even think me worth a human effort.

    • I highly recommend you let your dog use LLMs. They have trouble composing long messages on human-centric keyboards.

Obviously you should also use Claude to consume those 50 pages. It sounds cynical, but it's not. It's practical.

What I've learned in 2 years of heavy LLM use (ChatGPT, Gemini, and Claude) is that what matters is expressing and then refining goals and plans. The details are noise. Clear goals matter, and the plans are derived from them.

I regularly interrupt my tools to say, "Please document what you just said in ...". And I manage the document organization.

At any point I can start fresh with any AI tool and say, "read x, y, and z documents, and then let's discuss our plans". Although I find that with Gemini, despite saying, "let's discuss", it wants to go build stuff. The stop button is there for a reason.

  • I use an agents.md file to guide Claude, and I include a prominent line that reads UPDATE THIS FILE WITH NEW LEARNINGS. This is a bit noisy -- I have to edit what is added -- but works well and it serves as ongoing instruction. And as you have pointed out, the document serves as a great base if/when I have to switch tools.
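    As a sketch, such a file needs no fixed schema; the section names and entries below are invented for illustration, with the "learnings" section being the part the model appends to:

    ```markdown
    # agents.md

    ## Conventions
    - Run `make test` before proposing any commit.
    - Keep diffs small and reviewable.

    ## Learnings
    UPDATE THIS FILE WITH NEW LEARNINGS
    - The staging DB is case-insensitive; prod is not.
    - Integration tests require the local compose stack to be running.
    ```

    Because it's plain markdown, any tool that reads an instructions file can pick it up, which is what makes it a portable base when switching tools.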

I've found in my (admittedly limited) use of LLMs that they're great for writing code I don't foresee a need to review myself, but if I'm going to be editing the code myself later, I need to be the one writing it. Also, LLMs are bad at design.

  • > Also LLMs are bad at design.

    I've found that SoTA LLMs sometimes implement / design differently (in the sense that "why didn't I think of that"), and that's always refreshing to see. I may run the same prompt through Gemini, Sonnet, and Codex just to see if they'd come up with some technique I didn't even know to consider.

    > don't foresee a need to review it myself

    On the flip side, SoTA LLMs are crazy good at code review and bug fixes. I always use "find and fix business logic errors, edge cases, and api / language misuse" prompt after every substantial commit.

  • what code do you write that you don't need to maintain/read again later?

    • For me it's throwaway scripts and tools. Or tools in general. But only simple tools that it can somewhat one-shot. If I ever need to tweak it, I one-shot another tool. If it works, it's fine. No need to know how it works.

      If I'm feeling brave, I let it write functions with very clear and well defined input/output, like a well established algorithm. I know it can one-shot those, or they can be easily tested.

      But when doing something that I know will be further developed, maintained, I mainly end up writing it by hand. I used to have the LLM write that kind of code as well, but I found it to be slower in the long run.

    • Definitely a lot of one-shot scripts for a given environment... I've started using a run/ directory for shell scripts that will do things like spin up a set of containers defined in a compose file.. build and test certain sub-projects, initialize a database, etc.

      For the most part, many of them work the first time and just continue to do so to aid a project. I've done similar in terms of scaffolding a test/demo environment around a component that I'm directly focused on... sometimes similar for documentation site(s) for gh pages, etc.

      Some things have gone surprisingly well.

One group of people pretends to have written something and another group of people pretends to have read something. Much productivity is gained.

Zizek had a great point about this.

The best thing to do is to schedule meetings with those people to go over the docs with them. Now you force them to eat their own shit and waste their own time the more output they create.

  • Love the intent, but isn't that wishful thinking if you don't have any leverage? E.g., the higher-up will trade you for someone who doesn't cause friction, or you'll waste too much of your own time.

Similarly, managers at my workplace occasionally use LLMs to generate jira tickets (with nonsense implementation details), which has led junior engineers astray, leaving senior engineers to deal with the fallout.

Getting similar vibes from freelance clients sending me overly articulated specs for projects, making it sound like they want sophisticated implementations. Then I ask about it and they actually want, like, a 30-row table written in a CSV. Huge whiplash.

I instituted a simple "share the inputs along with the outputs" rule, which prevents people from doing exactly this. Your only value contribution is the input and filtering the output; for people with equal filtering skill, there's no value in the output alone.

The first point is so true. How do people expect me to work with their 20-page "deep research" document that was built from a crappy prompt and that they didn't even bother to proofread?

I've definitely seen this. I have a theory as to how this kind of thing would actually affect AI predictions, since people seem to focus only on the pure productivity-enhancing effects of AI while discounting the fact that a large portion of work was never productive to begin with...

https://news.ycombinator.com/item?id=47347983

If Claude Code can parse these design documents, I would recommend making a skill to do an adversarial review of the document. Then just generate that review, do some minor edits to make it look like a human wrote it and send it back to them.
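As a rough sketch, such a skill could be little more than a SKILL.md holding a review checklist. Claude Code skills declare a name and description in YAML frontmatter; the name, description, and steps below are made up for illustration:

```markdown
---
name: adversarial-doc-review
description: Critically review a design document for gaps, risks, and unsupported claims
---

When asked to review a document:
1. List every concrete, testable claim it makes.
2. For each claim, note missing evidence, contradictions, or hand-waving.
3. Flag sections that are generic boilerplate with no project-specific content.
4. Output a terse, reviewer-voiced summary of the top issues, not restated prose.
```

The point of the adversarial framing is that a generic "summarize this" pass tends to mirror the document's own confidence, while an explicit claim-by-claim checklist forces the model to look for what's missing.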

  • Which is fine if we are fine with pretending to do our jobs.

    But deep down I know that slop is noise and words no longer represent understanding.

I've had this experience too. In the case of vibe code, there is at least some incentive from self-preservation that prevents things from getting too out of hand, because engineers know they will be on the hook if they allow Claude to break things. But the penalties for sloppy prose are much lower, so people put out slop tickets/designs/documentation, etc. more freely.