Comment by nosklo

5 days ago

I asked Gemini, and Gemini said:

The Case for a New HN Guideline on AI-Generated Content

This is a timely discussion. While AI is an invaluable tool, the issue isn't using AI; it's using it to replace genuine engagement, leading to "low-signal" contributions.

The Problem with Unfiltered AI Replies

    Dilution of Human Insight: HN's core value is the unique, experienced human perspective. Unanalyzed LLM-dumps replace original thought with aggregated, generic consensus.

    Reading Fatigue & Bloat: Long, copy-pasted blocks of LLM text break the flow of discussion and make comments less scannable, forcing users to sift through machine-generated prose to find human analysis.

    Lack of Authority/Verification: A comment that just says "$AI said X" is essentially an anonymous opinion. It lacks the critical filter, context, and experience of the human poster, making it less trustworthy, especially given LLM hallucination risk.

The Value of AI as a Tool

    Quick Context/Summary: LLMs can quickly provide neutral, accurate definitions, historical context, or a list of arguments, saving users a separate search.

    Supporting Evidence: When used properly, AI output can be supporting "data" for a human's core argument or analysis.

A Proposed Middle Ground Guideline

Instead of an outright ban, which would punish legitimate use cases, a new guideline should focus on human value-add and presentation.

The spirit of the guideline should be: If you use an LLM, your contribution must be more than the LLM's output.

    Mandatory Analysis: The commenter must add their own critical analysis, personal experience, or counter-argument that contextualizes, critiques, or supports the AI's summary.

    Clear Attribution and Formatting: All LLM-generated text must be clearly attributed (e.g., "I asked ChatGPT-4...") and visually separated (e.g., using a > blockquote) to maintain scannability.

    Curation over Dumping: Encourage summarizing or excerpting the most relevant parts of the AI output, rather than pasting a lengthy, unedited wall of text.

Ultimately, community downvotes already filter low-effort posts, but a clear guideline would efficiently communicate the shared norm: AI is a tool for the human conversation, not a replacement for it.