Comment by lagniappe
18 hours ago
This suffers from a common pitfall of LLMs: context taint. You can see it is obviously the front page from today with slight "future" variation; the result ends up being very formulaic.
That's what makes it fun. Apparently, Gemini has a better sense of humor than HN.
I would find it even more fun if it were more speculative, misapplied uses of 'woosh' aside.
This seems to woosh right over everyone's heads :)
But there's no mention of fun or humor in the prompt.
Judging by the reply posted by the OP, the OP probably maintains a pretty humorous tone while chatting with the AI. It's not just about the prompt, but the context too.
Fun will be prohibited until morale improves.
1 reply →
I don’t ask it to be sycophantic in my prompts either, but it does that anyway.
The bar is low.
That's what the OP asked for, essentially. They copied today's homepage into the prompt and asked it for a version 10 years in the future.
Yeah that’s very true, but I still think it’s pretty funny and original.
> > the result ends up being very formulaic.
> Yeah that’s very true, but I still think it’s pretty funny and original.
Either it’s formulaic or it’s original, it can’t be both.
According to an original formula hehe
The problem is not that it fails to be cheeky, but that "it's funny" is depressing in a context where there was a live question of whether it's a sincere attempt at prediction.
When I see "yeah, but it's funny" it feels like a retrofitted repair job: patching up a first-pass mental impression that accepted it at face value and wants to preserve a sense of psychological endorsement of the creative product.
Honestly it feels like what I, or many of my colleagues, would do if given the assignment: take the current front page, or a summary of the top tropes or recurring topics, revise them for 1 or 2 steps of technical progress, and call it a day. It isn't an assignment to predict the future; it is an assignment to predict HN, which is a narrower thing.
1 reply →
But it would otherwise be not fun at all. Anthropic didn’t exist ten years ago, and yet today an announcement by them would land on the front page. Would it be fun if this hypothetical front page showed an announcement made by a future startup that hasn’t been founded yet? Of course not.
Algodrill is copied verbatim, as far as I can tell.
It fits in nicely imo. It's plausible (services re-appear on hn often enough), and hilarious because it implies the protracted importance of Leetcode.
Though I agree that the LLM perhaps didn't "intend" that.
I found the repetition (10 years later) to be quite humorous.
Time is a flat circle
1 reply →
Surely there's gotta be a better term for this. Recency bias?
It's called context taint.
You'll love taint checking then.
https://en.wikipedia.org/wiki/Taint_checking
https://semgrep.dev/docs/writing-rules/data-flow/taint-mode/...
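If you've never seen it, here's the idea as a toy Python sketch — the Tainted wrapper and the function names are invented for illustration, not Semgrep's API or how its taint mode actually works:

    # Toy taint checking: values from untrusted sources are wrapped as
    # "tainted", and sinks refuse them unless explicitly sanitized.
    # (Illustrative only; real taint analysis is static, not runtime.)

    class Tainted(str):
        """Marks a string as coming from an untrusted source."""

    def source() -> Tainted:
        # e.g. user input, an HTTP parameter, an LLM's own prior output
        return Tainted(input("query: "))

    def sanitize(value: str) -> str:
        # Validate/escape the value, returning an untainted plain str.
        return str(value.replace(";", ""))

    def sink(query: str) -> None:
        # A sensitive operation that must never receive tainted data.
        if isinstance(query, Tainted):
            raise ValueError("tainted value reached a sink without sanitization")
        print(f"executing: {query}")

    if __name__ == "__main__":
        value = source()
        sink(sanitize(value))   # ok
        # sink(value)           # would raise: taint flowed straight to the sink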
That’s the joke…
Really? What's the punchline? I like jokes.
I agree. What is a good update prompt I can give it to create a better variant?
You could try passing it 10-20 front pages across a much wider time range.
You can use: https://news.ycombinator.com/front?day=2025-12-04 to get the frontpage on a given date.
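Something like this untested sketch would stitch a handful of dated front pages into one prompt. The ".titleline > a" selector is just a guess at HN's current markup, and the helper names are made up; it needs requests and beautifulsoup4 installed:

    # Collect several historical HN front pages for pasting into a prompt.
    import datetime
    import requests
    from bs4 import BeautifulSoup

    def front_page_titles(day: datetime.date) -> list[str]:
        # Fetch https://news.ycombinator.com/front?day=YYYY-MM-DD
        resp = requests.get(
            "https://news.ycombinator.com/front",
            params={"day": day.isoformat()},
            timeout=10,
        )
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Assumption: story titles are direct child links of span.titleline
        return [a.get_text() for a in soup.select(".titleline > a")]

    def sample_front_pages(start: datetime.date, count: int, step_days: int) -> str:
        """Build a prompt snippet from `count` front pages spaced `step_days` apart."""
        chunks = []
        for i in range(count):
            day = start - datetime.timedelta(days=i * step_days)
            chunks.append(f"Front page for {day.isoformat()}:\n" + "\n".join(front_page_titles(day)))
        return "\n\n".join(chunks)

    if __name__ == "__main__":
        # e.g. 10 front pages, roughly one per year
        print(sample_front_pages(datetime.date(2024, 12, 4), count=10, step_days=365))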
This won't change anything; it will just make it less evident to those who missed a day of checking HN.
If you do an update prompt, I hope you still keep this one around!
It's formulaic yeah, but that's what puts it into the realm of hilarious parody.
I think that's what makes it funny - the future turns out to be just as dismal and predictable as we expect it to be. Google kills Gemini, etc.
Humor isn't exactly a strong point of LLMs, but here it's tapped into the formulaic hive mind of HN, and it works as humor!
Isn't that a common pitfall of humans too?
In numerous shows these days AI is the big bad thing. Before that it was crypto. In the 1980s every bad guy was Russian, etc.
Us Middle Eastern/brown guys have been making a comeback?
In numerous TV shows before AI, crypto was the big bad thing?
I think the most absurd thing to come from the statistical AI boom is how incredibly often people describe a model doing precisely what it should be expected to do as a "pitfall" or a "limitation".
It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.
These are not pitfalls. They are core features! An LLM is not sometimes biased, it is bias. An LLM does not sometimes hallucinate, it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.