Comment by necovek

10 months ago

I've already asked a number of colleagues at work who produce insane amounts of gibberish with LLMs to just pass me the prompt instead: if an LLM can produce verbose text from limited input, I just need that concise input (the rest is simply made-up crap).

Something I’ve found very helpful is when I have a murky idea in my head that would take a long time for me to articulate concisely, and I use an LLM to compress what I’m trying to say. So I type (or even dictate) a stream of consciousness with lots of parentheticals and semi-structured thoughts and ask it to summarize. I find it often does a great job at saying what I want to say, but better.

(See also the famous Pascal quote: "I would have written a shorter letter, but I did not have the time.")

P.s. for reference I’ve asked an LLM to compress what I wrote above. Here is the output:

When I have a murky idea that’s hard to articulate, I find it helpful to ramble—typing or dictating a stream of semi-structured thoughts—and then ask an LLM to summarize. It often captures what I mean, but more clearly and effectively.

  • Like the linked article, I’d rather read your original text, even if it’s less structured and rough

    • Agreed, the messiness of the original text has character and humanity that is stripped from the summarized text. The first text is an original thought, exchanged in a dialogue, imperfectly.

Elsewhere in this comment section, there's discussion of the importance of having original thought — which the summarized text specifically isn't, and has leeched away.

      The parent comment has actually made the case against the summarized text being "better" (if we're measuring anything that isn't word count).

  • Learning to articulate your thoughts is pretty vital in learning to think though.

An LLM can make something sound articulate even if your input is useless rambling containing the keywords you want to think about. Having someone validate a lack of thought as something useful doesn't seem good for you in the long term.

    • Yeah, so the problem I’m solving is not that I don’t think enough about something, or even that I don’t think about it in the right way. “Murky” was maybe the wrong word. It’s more that I often find my audience does not have the longest attention span or forgiveness for sloppy writing; thus, the onus is on me to make my thoughts as easy to digest as possible.


  • Your original here is distinctly better! It shows your voice and thought patterns. All character is stripped away in the "compressed" version, which unsurprisingly is longer, too.

“Someone sent me this AI-generated message. Please give me your best shot at guessing the brief prompt that originated the text.”

Done; now AI is just lossy pretty-printing.

  • Jokes aside, this happens all the time.

I have it write docstrings. Later I ask it to explain a section of code, and it uses those docstrings to understand and explain the code to me.

    A less lossy way to capture this will probably emerge at some point.

Recently I wasted half a day trying to make sense of story requirements given to me by a BA that were contradictory and far more elaborate than we had previously discussed. When I finally got ahold of him he confessed that he had run the actual requirements through ChatGPT and "didn't have time to proofread the results". Absolutely infuriating.

This is how I've felt about using LLMs for things like writing resumes and such. It can't possibly give you more than the prompt since it doesn't know anything more about you than you gave it in the prompt.

It's much more useful for answering questions that are public knowledge since it can pull from external sources to add new info.

The one case where this doesn't work is if the prompt is, say, 3 ideas, which the LLM expands to 20, and the colleague then trims down to 10.

Ideally there's some selection done, and the fact that you're receiving it means it's better than the average answer. But sometimes they haven't even read the LLM output themselves :-(

ChatGPT is very useful for adding softness and politeness to my sentences. Would you like more straightforward text, which probably will be rude for a regular American?

  • Yes. I can't stand waffle from native or non-native speakers. Waste of electrons and oxygen :-) That might just be me, however. Know your audience ;-)

  • If we can detach content and presentation, then the reader can choose tone and length.

    At some point we will stop making decisions about what future readers want. We will just capture the concrete inputs and the reader's LLM will explain it.

    • I don't think form and function can be separated so cleanly in natural language. However you encode what's between your ears into text, you've made (re)presentational choices.

      A piece of text does not have a single inherently correct interpretation. Its meaning is a relation constructed at run- (i.e. read-)time between the reader, the writer, and (possibly) the things the text refers to, that is if both sides are well enough aligned to agree on what those are.

      Words don't speak, they only gesture.


  • >which probably will be rude

    As long as the text isn't at risk of being written up by HR, I don't particularly care about the tone of the message.