Comment by atomic128
8 days ago
Once men turned their thinking over to machines
in the hope that this would set them free.
But that only permitted other men with machines
to enslave them.
...
Thou shalt not make a machine in the
likeness of a human mind.
-- Frank Herbert, Dune
You won't read, except the output of your LLM.
You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
You won't think or analyze or understand. The LLM will do that.
This is the end of your humanity. Ultimately, the end of our species.
Currently the Poison Fountain (an anti-AI weapon, see https://news.ycombinator.com/item?id=46926439) feeds 2 gigabytes of high-quality poison (free to generate, expensive to detect) into web crawlers each day. Our goal is a terabyte of poison per day by December 2026.
Join us, or better yet: deploy weapons of your own design.
You shouldn't take a sci-fi writer's words as prophecy, especially when he's using an ingenious gimmick to justify his job. We know it's impossible for anyone to tell what the world will be like after the singularity, by the very definition of a singularity. So Herbert had to devise a ploy to plausibly explain why the singularity hadn't happened in his universe.
I agree that fiction isn't prophetic, but it can definitely be a society-wide warning shot. On a personal level, it's not far-fetched that reading a piece of fiction challenges one's perceptions on many levels and, as a result, changes one's behavior.
Fiction shouldn't be trivialized and shunned because it's fiction; it should be judged on its contents and message. To paraphrase a quote from the video game Metaphor: ReFantazio: "Fantasy is not just fiction".
I like the idea that Frank Herbert's job was at risk and that's why he had to write about the Butlerian Jihad, because on the other side you have Ray Kurzweil, who apparently doesn't have to justify his job for some reason.
Does seem funny to think of sci fi writers as being particularly concerned about justifying their jobs.
If only we could look into the future to see who is right and which future is better so we could stop wasting our time on pointless doomerism debate. Though I guess that would come with its own problems.
Hey, wait...
If you read this through a synth, you too can record the intro vocal sample for the next Fear Factory album
I think I'm ok with HN becoming like Reddit, now that Reddit has become whatever it is.
"The end of humanity" has been proclaimed many times over. Humanity won't end. It will change like it always has.
We get rid of some problems, and we get a bunch of new problems instead. And on, and on, and on.
Russell's chicken (or turkey) would like a word.
https://en.wikipedia.org/wiki/Turkey_illusion
I love that you brought this up.
Chickens are killed ALL the time. It’s a recurring mass event. If you were a smart chicken you could see that pattern and put it into a formula.
In contrast, the end of Humanity would be a singular event. It’s even in the name…
And the end of humanity is fiction / speculation by comparison; it's not backed by any data. Human survival over 300,000 years, by contrast, is.
I mean it’s fine to dream things up, but let’s be fair and call it what it is.
It only has to be right once. Humanity won’t end until it does.
Humanity may end if someone else takes the top of the food chain.
I would bet a lot of money that your poison is already being identified and filtered out of training data.
Like partial courses of antibiotics, this will only give a relative advantage to those leading efforts best able to ignore this 'poison', accelerating what you aim to prevent.
Yes. Whoever has the best (least detectable) model is best poised to poison the well for everyone else.
Looking through the poison you linked, how is it generated? It's interesting in that it seems very similar to real data, unlike previous (and very obvious) markov chain garbage text approaches.
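For context on why the older approach is "very obvious": a Markov chain text generator only preserves local word co-occurrence statistics, so its output looks locally fluent but has no long-range structure and is statistically easy to filter. Here is a minimal sketch of that classic technique (my own illustration; the Poison Fountain's actual algorithm is undisclosed):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each n-gram of words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Random-walk the chain to emit superficially fluent text."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)
```

Each step depends only on the last `order` words, so perplexity measured by any real language model spikes after a few tokens, which is exactly what makes this kind of garbage text trivially detectable compared with poison that mimics real data.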
We do not discuss algorithms. This is war. Loose lips sink ships.
We urge you to build and deploy weapons of your own unique design.
I call your Frank Herbert machine dystopia and raise you the Iain Banks machine utopia...
I'm gonna call that raise: How does one get to the anarchist Culture when all the machines are being built by profit-hungry capitalists?
We build our own with data that we've collected ourselves ethically. Then we execute once the big guys are distracted.
>Why write code or prose when the machine can write it for you?
I like to do it.
>You won't think or analyze or understand. The LLM will do that.
The clear lack of analysis seems to be your issue.
>This is the end of your humanity. Ultimately, the end of our species.
Doubtful.
The “poison fountain” is just a little script that serves data supplied by… somebody from my domain? It seems like it would be super easy for whoever maintains the poison feed to flip a switch and push some shady crypto scam or whatever.
I think we overestimate the amount of reading, writing, and thinking that occurred before LLMs.
Also, this was literally said by vintage philosophers about the very technology OP romanticizes as "good ole pen and paper writing". Nothing new here.
I think you’re missing the point of Dune. They had their Butlerian Jihad and won - the machines were banned. And what did it get them? Feudalism, cartels, stagnation. Does anyone seriously want to live in the Dune universe?
The problem isn't the thinking machines; it's who owns them and collects the rent. We need open source models running on dirt cheap hardware.
The point of Dune is that the worst danger is people who obey authority without questioning it.
Then wouldn't open source models running on commodity hardware be the best way around that? I think one of the greatest wins of the 21st century is that almost every human today has more computing power than the entire US government had in the 1950s. That computing power has democratized access to information and the ability to spread it. There are tons of downsides to that which we're dealing with, but on net I think it's positive.
That's not the point of Dune. Who blindly obeyed whom?
... which overthrowing the machines didn't stop. People just found another authority to mindlessly obey.
>You won't read/write/think/understand etc...
I can't see it. We have LLMs now and none of that applies to me. I find them quite handy as a sort of enhanced Google search though.
Humans have been around for millions of years, only a few thousand of which they've spent reading and writing. For most of that time you were lucky if you could understand what your neighbor was saying.
If we consider humans with the same anatomy, the numbers are roughly ~300,000 years for modern anatomy, ~50,000 for language, ~6,000 for writing, and ~100 for standardized education.
The "end of your humanity" already happened when anybody could make up good and evil irrespective of emotions to advance some nation
Bold of you to assume people will be writing in any form in the future. Writing will be gone like the radio, replaced with speaking. Star Trek did have it right there.
> You won't read, except the output of your LLM.
> You won't write, except prompts for your LLM. Why write code or prose when the machine can write it for you?
> You won't think or analyze or understand. The LLM will do that.
Sounds great! I'll finally have time to relax! Bring it on...
how about that 'ole speaking for yourself thing?
end of humanity announced, perhaps in 2027? buy the ticket, take the ride, do it again in 2028, thank you for your custom
Are you not just making it more expensive to acquire clean data, thus giving an edge to the megacorps with big funding?
do... do the "poison" people actually think that will make a difference? that's hilarious.
It works for Russian propaganda; I can't see why it shouldn't work for shitty code.
How would you do it better?
Let the kiddies have their crusade
Kiddies? Why are you trying to demean people?
Lol. Speak for yourself, AI has not diminished my thinking in any material way and has indeed accelerated my ability to learn.
Anyone predicting the "end of humanity" is playing prophet and echoing the same nonsensical prophecies we heard with the invention of the printing press, radio, TV, internet, or a number of other step-change technologies.
There's a false premise built into the assertion that humanity can even end - it's not some static thing, it's constantly evolving and changing into something else.
A large number of people read a work of fiction and conclude that what happened in the work of fiction is an inevitability. My family has a genetically-selected baby (to avoid congenital illness) and the Hacker News link to the story had these comments all over it.
> I only know seven sci-fi films and shows that have warned about how this will go badly.
and
> Pretty sure this was the prologue to Gattaca.
and
> I posted a youtube link to the Gattaca prologue in a similar post on here. It got flagged. Pretty sure it's virtually identical to the movie's premise.
I think the ironic thing in the LLM case is that these people have outsourced their reasoning to a work of fiction and now are simple deterministic parrots of pop culture. There is some measure of humor in that. One could see this as simply inter-LLM conflict with the smaller LLMs attempting to fight against the more capable reasoning models ineffectively.
Using fiction as an interpretive vehicle to explore, challenge and contrast our assumptions and perceptions about our own world isn't even in the same universe as "outsourcing their reasoning to a work of fiction".
"Haha you're basically a human LLM!" is such a weak, boringly robotic rebuttal in nearly any context given how it can be generically applied to literally anything.
Now that you mention it, it is pretty strange to see HN users parroting other people’s thinking (sci-fi writers) like literal sub-sapient parrots, while simultaneously decrying the danger of machines turning people into sub-sapient parrots…
Following that logic… the closest problem would be literally in between their ears.
Reddit already exists, my dude.
A better approach is to make AI bullshit people on purpose.
This is essentially just that. The idea is that "poisoned" input data will cause AIs that consume it to become more likely to produce bullshit.