Comment by mlhpdx
2 days ago
Where does the line fall? I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought. Is that disrespectful? It doesn't feel so.
> I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought.
Better to post your stream of thought.
Using LLMs to turn a stream of thought into prose mostly just adds fluff, expanding the text to make it look more like thoughtful prose. What you get looks nice to the creator because they agree with what it's saying, but it wastes other readers' time as they have to dissect the extra LLM prose to get back to the author's stream of thought.
Just post what you're thinking, even if it's not elegant prose. Don't have an LLM wrap it in structures and cliches that disguise it as something else.
I strive to be understood, and my streams of thought are often weird and generally intractable. Nobody really wants to read that; nobody wants the deep threads required to explain it.
I value reading novel and interesting thoughts and ideas. I don't feel "tricked" when I read something of substance or thought-provoking, even if it's LLM-generated and decorated with the platitudes and common forms for dull readers.
Something I try very hard to impress on my PhD students is that the process of writing is part of the process of thinking. We often have cool things in our head that don't sound right when we write them down, and that's usually because the thing in our head was more amorphous than we realized. The time you put in getting the written expression of it to work is actually helping you crystallize what you're thinking in the first place.
I guarantee you that I would endlessly rather read your streams of thought about amateur boat building than read another AI-generated Hacker News comment ever again. Don't sell yourself short.
I get that feeling, and I’ll echo my sibling comment: I’d much rather read your stream of thought and get on that brain train with you than see some fluffed up and sterilized version.
I also think that having that authentic voice, while it does open us up to criticism and maybe being misunderstood, also gives us a way to receive actionable feedback to improve.
I think we all want to be understood, and for me part of that understanding is seeing the person. How you write is a part of who you are, and I hope you don’t feel like you need to suppress that.
Feel bad for the people who used to do that for you. Many people have difficulty expressing what they're thinking in words. Those people always feel happy when they see someone else say what they're thinking. If AI can do that now, then you don't need them. There's no point in coming onto Hacker News and using AI to play that role when you can just talk to the AI. If too many people do this, then Hacker News won't even be able to play a vestigial role.
I sucked at writing myself. It's been my experience that practicing to become a better writer helped me structure my thoughts into something cohesive on the page. And I got better over time.
Sorry, but I prefer original human streams of thought. I now have a pretty darn good filter for ignoring AI gen text just like a filter for skipping over page ads.
> Where does the line fall?
For now I would argue the line falls where AI edits for you instead of helping you edit. Take a look at the examples that dang posted if you have not yet: https://news.ycombinator.com/item?id=47342616
The first 5 I looked at were pretty egregious and not subtle.
Yes, I have also done the search and found that the beta on "LLM!" objections is very high; they seem to be wrong about as often as they're right.
As of this comment, which ones are you finding wrong? 5 of the first 7 are confessed AI users, and the other 2 look like AI to me too.
> Is that disrespectful?
It is, by way of being extremely dishonest in at least two ways:
- there's no way you would do this if you were required to disclose that you used an LLM to write your comment.
- it shows your primary goal isn't communication; you must be doing it to look smart and "win" the conversation.
Same reason people desperately post links to scientific papers they don't understand in a frantic attempt to stay on top of some imaginary debate.