
Comment by waterhouse

2 days ago

I guess, in theory, this can eventually be countered by people using LLM browser integrations to tell them whether comments are worth reading (and maybe to summarize long comments). Is anyone currently working on that? It might be interesting to see.

First we would run into the spam-filter problem, no different from email. Then we have to choose: do we concede to viewing the world through the lens of WhatEverAI, or do we train a model locally on our own thoughts and views of the world, and hope that model is never compromised?
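For what it's worth, the "spam-filter problem" here is the classic text-classification one. A minimal sketch of how such a locally trained comment triage might look, using a naive Bayes filter in the style of early email spam filters (all function names, training data, and thresholds below are illustrative assumptions, not anyone's actual implementation):

```python
# Sketch: naive Bayes "worth reading?" filter, trained locally on your own
# judgments. Same idea as a Bayesian email spam filter; purely illustrative.
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(labeled):
    """labeled: list of (text, is_worth_reading) pairs from your own ratings."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in labeled:
        for tok in tokenize(text):
            counts[label][tok] += 1
            totals[label] += 1
    return counts, totals

def score(model, text):
    """Log-odds that a comment is worth reading (add-one smoothing).

    Positive means "probably worth reading", negative means "probably not".
    """
    counts, totals = model
    vocab = set(counts[True]) | set(counts[False])
    log_odds = 0.0
    for tok in tokenize(text):
        p_good = (counts[True][tok] + 1) / (totals[True] + len(vocab))
        p_bad = (counts[False][tok] + 1) / (totals[False] + len(vocab))
        log_odds += math.log(p_good / p_bad)
    return log_odds
```

The point being: even this toy version makes the trade-off concrete. The model only reflects whatever labels you fed it, so it inherits your blind spots, and anything it scores below threshold you simply never see.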

I don't believe that delegating reading comprehension to an LLM is any better than delegating writing ability. In fact, I'd argue it's worse to have an automated system advising us on what is or isn't worth reading.

There are a lot of people who have no time for something like Infinite Jest, for whom even getting through the first few chapters is an effort. But at least they tried. An LLM ruling out the book because it is 1000 pages of postmodern absurdity effectively optimises away the fringes of human creativity and leaves only the average stuff behind.

AI slop detectors already exist and are no better than snake oil, because a person can have an LLM-smelling writing style without actually using AI. After all, LLMs were originally trained on human input.