Comment by SkyPuncher

6 days ago

It's easy to output LLM junk, but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved. I'm not talking about a 10-turn chat to whip out some junk. I'm talking about deep research and thinking with Opus to develop ideas. Chats where you've pressure-tested every angle, backed it up with data pulled in from a dozen different places, and have intentionally guided it towards an outcome. Opus can take these wildly complex ideas and distill them down into tangible, organized artifacts. It can tune all of that writing to your audience, so they read it in terms they're familiar with.

Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.

Our customers don't care how we communicate internally. They don't care if we waste a bunch of our time rewriting perfectly suitable AI content. They care that we move quickly on solving their problems - AI lets us do that.

> Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.

I find it difficult to skim AI writing. It's persuasive even when there's minimal data. It'll infer or connect things that flow nicely but simply don't make sense.

I hear stories like this a lot (on here anyway) but I haven't seen any output that backs it up. Any day now I guess.

  • I don't really understand this retort. I assume most of us work in a professional environment where it's difficult, if not impossible, to share our work.

    We've been discussing these types of anecdotes with code patterns, management practices, communication styles, pretty much anything professionally for years. Why are the LLM conversations held to this standard?

    • Well, because I've worked in different places, and with different organizations, and can see for myself how different approaches to professional conduct manifest in the finished product, or the flexibility of the team, effectiveness of communication, etc.

      Especially with things like code and writing, I assess the artifacts: software and prose. These stories of the incredible facility of LLMs with code and writing are never accompanied by artifacts that back up the claims. The ones I can assess don't meet the bar that is being claimed. So everyone who has it working well is keeping it to themselves, and only those with bad-to-mediocre output are publishing it, I am meant to believe? I can't rule it out entirely of course, but I am frustrated at the ongoing demands that I maintain credulity.

      FWIW, I have sat out many other professional-organization and software-development trends because I wanted to wait and assess their benefits for myself, and those benefits then failed to materialize. That is why I hold LLMs to this standard; I hold all tools to this standard: be useful or be dismissed.

    • Because I have a proof of the Riemann hypothesis but I'm not showing it to you because I don't want you to steal my idea.

  • Pretty sure people are prompting ChatGPT to write Brandon Sanderson-like stories, and we'll see their successful print runs any time now.

  • It's telling that I've seen only a few actual pieces of large-scale LLM output from people boasting about it, and most of them (e.g. the trash fire of a "web browser" by Anthropic) are bad.

To build what, though? I’m truly curious. You talk about researching and developing ideas — what are you doing with it?

> but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved

...Which part is impossible? "Writing a bunch of ideas down" was definitely possible before LLMs.