Comment by darepublic

4 hours ago

I have seen people just generate large docs with Claude Cowork, and they themselves have not scrutinized them and don't know why or how they're useful. The docs are just kind of impressive in their volume and tidy formatting. And then they dump it in your lap as being helpful

> And then they dump it in your lap as being helpful

I've been guilty of this and gotten pushback from my manager: "this feels like homework, cut these options down to 100 words each, max".

Curation and refinement are even more important when you can have genAI generate reams of text.

Seeking outside signals is even more important: talking to customers, looking at real usage data, and more. It's too easy to believe what Claude tells you, even if you say "please argue against this idea" (which you always should).

  • It's all fun and games until some high level executive realizes everyone is using it and still demanding the same paycheck.

I'm beginning to see this in my industry (consulting). I was at a client site last week, in a room with some heavy hitters from both my side and the client side, but in a casual setting (lunch). Everyone was discussing how they sometimes "cheat" by using genAI to put together decks when one of those out-of-the-blue, one-sentence questions that takes four hours to answer comes down from the C-suite. They all said they heavily edit the output, but at least it gives them a place to start. I have my doubts, though; I wonder how many times they just take it as gospel and forward the deck on.

To be fair, I've been guilty of this with code. Ask Claude to generate a Python script that takes X as input and produces Y as output, run it, pipe the output to `more`, see that it looks OK but don't check everything, write it to a file, and send it on.

Yep, I've received a few powerpoints like that.

I'm using Claude to write large files too, but it's a very iterative process and involves a lot of reading and correcting.

We've really reached the point where one person uses AI to create an impressive report from a few prompts with some keywords, and the receiver uses another AI to summarize the report into a short TL;DR that's almost identical to the input prompts.

  • This. Creating order from chaos (reducing entropy) is difficult and requires real intelligence. Inflating a small prompt into a wall of text, creating a bunch of entropy in the process, is not as useful as it appears.

  • This reminds me of the game of telephone. Eventually, the message gets morphed and transformed into something different from what was originally said. Is this really what we want?