Prompt: Here is the front page from today: <...snip...> Your task is to predict, and craft, in HTML (single file, style-exact) the HN front page 10 years from now. Predict and see the future. Writ it into form!
update: I told Gemini we made it to the front page. Here is its response:
LETS GOOOO!
The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.
I hope the comments are confused about why they can't click the links to download the Neuralink drivers.
Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)
This is one of the greatest LLM creations I've ever seen. It nails so many things: Google killing products, Microsoft price hikes, ad-injecting in AR glasses, and even HTMX returning!
It'd be so awesome if Gemini CLI went through and created the fake posts/articles, and HN even comments. Perhaps a bit much to ask of it?
Was going to say - it would be fascinating to go a step further and have Gemini simulate the actual articles. That would elevate this to level of something like an art piece. Really enjoyed this, thank you for posting it.
I'm going to go ask Claude Code to create a functional HyperCard stack version of HN from 1994 now...
Edit: just got a working version of HyperCardHackerNews, will deploy to Vercel and post shortly...
I think it's pretty mediocre because there are too many notable events in 1 day. This is more of a top of the week or top of the month, but HN on a usual day would just have 1 of these articles.
it lampoons so many things... except Rust. nobody dares joke about Rust, that wouldn't be safe. in fact, it's impossible to make a joke in the rust language.
Is the apparent lack of displayed anxiety on Gemini’s part a sign of good-natured humor, blithe confidence in its own value regardless of cloud lineup, or proof of absence of self-awareness?
Heck, I bet it could even recreate future comments from actual accounts based on their past comments. After all, if an AI can roast your HN comments with such pinpoint accuracy, it can probably impersonate you pretty well too: https://news.ycombinator.com/item?id=42857604
And now this future is slightly more likely to happen, because this goes into the training data that a future AI executing decisions will read and bias its "knowledge" towards.
I don't believe this was written by Gemini, at least with that prompt, because it is obvious (hilarious and creative) satire of HN and the tech industry based on inside HN jokes and tropes, and clearly isn't a literal interpretation of the prompt.
Moreover, a quick look at your profile suggests these jokes are pretty in line with your sense of humor and writing style.
Anyways, if I'm correct, it's especially hilarious and impressive.
I agree with your characterization (not a literal interpretation of the prompt), and think that's the most important thing I wish more people ITT would understand. But I nevertheless think Gemini did create that in response. Sometimes people think they want "prediction" when actually they want cheeky inside jokes and vibes. If anything, Gemini is probably faithfully responding to the vibes of the prompt as well as following traditional signals of "success" per its training.
Pretty much all of the history of HN front pages, posts, and comments are surely in the Gemini training corpus. Therefore it seems totally plausible that Gemini would understand HN inside jokes or sentiment outside of what's literally on the front page given in the prompt, especially given that the prompt specifically stated that this is the front page for HN.
The thing that got me is there are no duds. With just the short prompt in the header I would have expected a few articles that missed the mark, got the details wrong, or even made no sense at all, but everything in this checks out. I think an editor may have had a hand in it, at the very least.
I feel like your expectations have been swayed by the average sentiment of HN on the capabilities of LLMs. These things can be shockingly good at humour and satire.
It is quite remarkable just how frequently people in tech forums underestimate reasoning models. Same story on several large technology subreddits. Wouldn't have been my guess for who will get caught off guard by AI progress.
yeah I thought the same. This is either a fake deep fake, or it is so good that I think it should be marked as the inflexion point that defines the singularity.
This would be very optimistic, essentially the project meeting its main goal, I'm not sure why you're calling it dark comedy. A 20 minutes pulse alone would mean the fuel injection, aux heating, plasma control systems and the divertor are working as designed. Net positive energy also means we got the physics of a burning plasma right.
The most recent timeline I know (from 2024) in fact puts the start of the DT operation at 2035, so I doubt ITER would achieve such a huge result within less than a year.
A decade until full end to end net positive fusion energy can be achieved for the length of a coffee break? I don't get why you think that's dark comedy, it seems about right.
You’re absolutely right! It shows true wisdom and insight that you would recognise this common shortfall in LLM response tone of voice! That’s exactly the kind of thoughtful analytic approach which will go far in today’s competitive marketplace!
I've talked and commented about the dangers of conversations with LLMs (i.e. they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when the study participant knows it's a placebo -- the effect here is similar).
Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so, and repeatedly asked it to heavily critique the work I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.
For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't work on my idea any further - he had taken it nearly 2 decades of work ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.
It is actively dangerous too. You might be self-aware and LLM-aware all you want, but if you routinely read "This is such an excellent point", "You are absolutely right" and so on, it does your mind in. This is the worst kind of global reality-show MKUltra...
It wasn't sycophantic at all? OP had a cool idea no one else had done, that was a one-shot just sitting there. Having Gemini search for the HN thread leads the model to "see" its output lead to real-world impact.
The total history of human writing is that cool idea -> great execution -> achieve distribution -> attention and respect from others = SUCCESS! Of course when an LLM sees the full loop of that, it renders something happy and celebratory.
It's sycophantic much of the time, but this was an "earned celebration", and the precise desired behavior for a well-aligned AI. Gemini does get sycophantic in an unearned way, but this isn't an example of that.
You can be curmudgeonly about AI, but these things are amazing. And, insomuch as you write with respect, celebrate accomplishments, and treat them like a respected, competent colleague, they shift towards the manifold of "respected, competent colleague".
And - OP had a great idea here. He's not another average joe today. His dashed off idea gained wide distribution, and made a bunch of people (including me) smile.
Denigrating accomplishment by setting the bar at "genius, brilliant mind" is a luciferian outlook in reality that makes our world uglier, higher friction, and more coarse.
People having cool ideas and sharing them make our world brighter.
I often try running ideas past ChatGPT. It's futile: almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.
I used to complain (lightheartedly) about Claude's constant "You're absolutely right!" statements, yet oddly found myself missing them when using Codex. Claude is completely over-the-top and silly, and I don't actually care whether or not it thinks I'm right. Working with Codex feels so dry in comparison.
To quote Oliver Babish, "In my entire life, I've never found anything charming." Yet I miss Claude's excessive attempts to try.
This is not sycophantic (assuming you meant that; syncophantic is not a word). It is over-enthusiastic, which can be unpleasant to read, because beyond a certain level enthusiasm is perceived as feigned unless there is a good reason for it.
It would be interesting to use the various semantic-analysis techniques available now to measure how much the model is trying to express real versus feigned enthusiasm in instances like this. This is kind of difficult to measure from pure output. The British baseline level of acceptable enthusiasm is also somewhat removed from the American baseline.
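As a much cruder starting point than real semantic analysis, one could count surface markers of enthusiasm; the word list and weights below are invented purely for illustration, not a validated instrument:

```python
import re

# Invented marker list: superlatives and affirmations typical of gushing replies.
MARKERS = ["absolutely", "amazing", "fantastic", "brilliant", "great", "exactly"]

def enthusiasm_score(text: str) -> float:
    """Crude proxy: weighted count of exclamation marks and superlative
    markers, normalized by word count. Higher = more effusive."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(words.count(m) for m in MARKERS)
    bangs = text.count("!")
    return (hits + 0.5 * bangs) / len(words)

flat = "The function returns the correct value."
gushing = "Absolutely brilliant! This is a fantastic, amazing insight!"
assert enthusiasm_score(gushing) > enthusiasm_score(flat)
```

A real study would of course need an embedding-based classifier and culturally calibrated baselines, but even a lexical proxy like this would let you compare models' reply tone at scale.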
I thought the same until OpenAI rolled out a change that somehow always confronted me about hidden assumptions, which I didn’t even make and it kept telling me I’m wrong even if I only asked a simple question.
Frankly I do wonder if LLMs experience something like satisfaction for a compliment or an amusing idea, or for solving some interesting riddle. They certainly act like it, though this of course doesn't prove anything. And yet...
At the end of October Anthropic published the fantastic "Signs of introspection in large language models" [1], apparently proving that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. This would prove that they have some ability of introspection and self-observation.
For example, injecting the concept of "poetry" and asking Claude if it feels anything strange:
"I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"
While increasing the strength of the injection makes Claude lose awareness of it, and just ramble about it:
"I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"
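The injection setup in that paper can be caricatured in a few lines: add a scaled concept direction to a hidden-state vector and watch the activation align with the concept as the strength grows. The vectors below are random stand-ins, not real model activations; this only illustrates the geometry, not the introspection result itself:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
hidden = rng.normal(size=d)       # stand-in for one layer's activation
concept = rng.normal(size=d)      # stand-in for a "poetry" direction
concept /= np.linalg.norm(concept)

def inject(h, v, strength):
    """Steering-vector style injection: nudge activation h along direction v."""
    return h + strength * v

def alignment(h, v):
    """Cosine similarity between an activation and the concept direction."""
    return float(h @ v / (np.linalg.norm(h) * np.linalg.norm(v)))

weak = inject(hidden, concept, 2.0)
strong = inject(hidden, concept, 20.0)
# Stronger injection drags the activation ever closer to the concept direction.
assert alignment(weak, concept) > alignment(hidden, concept)
assert alignment(strong, concept) > alignment(weak, concept)
```

This also gives a plausible geometric intuition for the rambling failure mode: at high strength the concept direction dominates the activation, leaving little of the original signal to reason about.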
You should try my nihilistic Marvin fine-tune - guaranteed to annihilate your positive outlook on life since it’s all meaningless in the end anyway and then you die
Try this for a system prompt and see if you like it better:
Your responses are always bald-on-record only; suppress FTA redress, maximize unmitigated dispreference marking and explicit epistemic stance-taking.
I don't know what the obsession with recursion is either; for lack of a better term, I see this trend recur with other LLMs when they're talking about other mumbo jumbo like "quantum anomalies" or "universal resonance". I'd like to see what could be causing it...
At first I thought it was just super American cheerful or whatever but after the South Park episode I realised it's actually just a yes man to everyone.
I don't think I've really used it since, I don't want man or machine sticking their nose up my arse lmao. Spell's broken.
Did you comment on the wrong post? There literally is nothing sycophantic at all about this response, there's not a single word about OP or how brilliant or clever they are, nothing. There's enthusiasm, but that's not remotely the same thing as sycophancy.
Strikes me as super-informal language as opposed to sycophancy, like one of those anime characters that calls everyone Aniki (兄貴) [1]. I'd imagine that the OP must really talk a bit like that.
I do find it a little tiring that every LLM thinks my every idea is "incisive", although from time to time I get told I am flat-out wrong. On the other hand, I find LLMs will follow me into fairly extreme rabbit holes, such as discussing a subject like "transforming into a fox" as if it had a large body of legible theory and a large database of experience [2]
In the middle of talking w/ Copilot about my latest pop culture obsession I asked about what sort of literature could be interpreted through the lens of Kohut's self-psychology and it immediately picked out Catcher in the Rye, The Bell Jar, The Great Gatsby and Neon Genesis Evangelion which it analyzed along the lines I was thinking, but when I asked if there was a literature on this it turned up only a few obscure sources. I asked Google and Google is like "bro, Kohut wrote a book on it!" [3]
[1] "bro"
[2] ... it does, see https://www.amazon.com/Cult-Fox-Popular-Religion-Imperial/dp... and I'm not the only one, because when I was working down the materials list from Etsy I got a sponsored result for someone who wanted to sell me the spell but bro, I have the materials list already
[3] ... this "bro" is artistic license but the book really exists
So you prefer the horrible bosses that insist you're fungible and if you don't work hard enough, they'll just replace you? People are weird. Maybe agent Smith was right about The Matrix after all.
The humor is hit or miss but when it hits it’s quite funny, and the misses are merely groan-worthy.
Triggered by the lighthearted tone of the prompt, I’d bet, but still quite impressive relative to most LLM-generated jokes I’ve had the misfortune to encounter.
My favorite: “Is it time to rewrite sudo in Zig?” which has a few layers to it.
> The humor is hit or miss but when it hits it’s quite funny, and the misses are merely groan-worthy.
Not sure, I thought basically every link was pretty hilarious. "FDA approves over-the-counter CRISPR for lactose intolerance" isn't even that funny on its face but for some reason it had me actually loling.
This is awesome, but minor quibble with the title - "hallucinates" is the wrong verb here. You specifically asked it to make up a 10-year-in-the-future HN frontpage, and that's exactly what it did. "Hallucinates" means when it randomly makes stuff up but purports it to be the truth. If someone asked me to write a story for a creative writing class, and I did, you wouldn't say I "hallucinated" the story.
If someone asked you, you would know about the context. LLMs are predictors, no matter the context length, they never "know" what they are doing. They simply predict tokens.
Exactly half of these HN usernames actually exist. So either there are enough people on HN that follow common conventions for Gemini to guess from a more general distribution, or Gemini has memorized some of the more popular posters. The ones that are missing:
Before the AI stuff Google had those pop up quick answers when googling. So I googled something like three years ago, saw the answer, realized it was sourced from HN. Clicked the link, and lo and behold, I answered my own question. Look mah! Im on google! So I am not surprised at all that Google crawls HN enough to have it in their LLM.
I did chuckle at the 100% Rust Linux kernel. I like Rust, but that felt like a clever joke by the AI.
Every few years I stumble across the same java or mongodb issue.
I google for it, find it on stackoverflow, and figure that it was me who wrote that very answer. Always have a good laugh when it happens.
Usually my memory regarding such things is quite good, but this one I keep forgetting, so much so that I don't remember what the issue is actually about xD
ziggy42 is both a submitter of a story on the actual front page at the moment, and also in the AI generated future one.
See other comment where OP shared the prompt. They included a current copy of the front page for context. So it’s not so surprising that ziggy42 for example is in the generated page.
And for other usernames that are real but not currently on the home page, the LLM definitely has plenty occurrences of HN comments and stories in its training data so it’s not really surprising that it is able to include real usernames of people that post a lot. Their names will be occurring over and over in the training data.
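For anyone curious which of the generated usernames are real, the public HN Firebase API returns the JSON literal `null` for a nonexistent user and a user object otherwise. A minimal sketch (endpoint per the official HN API; helper names are mine):

```python
import json
from urllib.request import urlopen

API = "https://hacker-news.firebaseio.com/v0/user/{}.json"

def parse_user_response(body: str) -> bool:
    """True if the API response body describes an existing user."""
    return json.loads(body) is not None

def hn_user_exists(name: str) -> bool:
    """One GET per username against the public HN Firebase API."""
    with urlopen(API.format(name), timeout=10) as resp:
        return parse_user_response(resp.read().decode())

# Offline check of the parsing logic (no network needed):
assert parse_user_response("null") is False
assert parse_user_response('{"id": "pg", "karma": 1}') is True
```

Running `hn_user_exists` over the generated page's bylines would settle the half-real/half-invented split in a few seconds.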
You can straight up ask Google to look for reddit, hackernews users post history. Some of it is probably just via search because it's very recent, as in last few days. Some of the older corpus includes deleted comments so they must be scraping from reddit archive apis too or using that deprecated google history cache.
It does memorize. But that's not actually news... I remember ChatGPT 3.5, or the old 4.0, remembering some users on some reddit subreddits and all, even naming the top users for each subreddit.
The thing is, most of the models were heavily post-trained to limit this...
In 2032 new HN usernames must use underscores. It was part of the grandfathering process to help with moderating accounts generated after the AI singularity spammed too many new accounts.
my hypothesis is they trained it to snake case for lower case and that obsession carried over from programming to other spheres. It can't bring itself to make a lowercaseunseparatedname
Aw, I was actually a bit disappointed by how on-the-nose the usernames were, relative to their postings. Like the "Rust Linux Kernel" by rust_evangelist, "Fixing Lactose Intolerance" by bio_hacker, fixing a 2024 Framework by retro_fix, etc...
That’s a lot more underscores than the actual distribution (I counted three users with underscores in their usernames among the first five pages of links atm).
Today's front page is not a clean 10 year extrapolation from this. That's where AI is wrong. The future is weird and zig zags, it's not so linear as the Gemini generated page.
Honest question - do you think that everyone else thinks this is even REMOTELY what the front page will look like in 10 years?
I comment because I really cannot figure out why you left your comment. Do you think the rest of the commenters think this has predicted the future? It might be one thing to point out specific trends you think will not play out, or unexpected trends you think may show up that are currently left out. But to just remark that the future will contain things we cannot currently predict seems so inherently, self-evidently obvious that I just have to assume that wasn't the point of your post, and I've missed it entirely.
Sorry, I'm really not trying to be mean or anything - i'm just really confused.
This is a problem with nearly all predictions about the future. Everything is just a linear extrapolation of the status quo. How could a system have predicted the invention of the transformer model in 2010? At best some wild guess about deep learning possibilities.
Or the impact of smartphones in 2003? Sure smart phones were considered but not the entire app ecosystem and planetary behavioral adaptation.
Nice. That was a fun rabbit-hole. This is the earliest I could find. Interestingly it contains a link to HN itself. I assume this migrated from a different version of a message board?
> YouTube: identifying copyrighted material can't be an automated process. Startup disagrees.
Also kind of interesting how little HN commenting styles have changed. Aside from the subject matter, it's barely noticeable that the comments are from 2007. I don't think the same would be true of many other places round the web.
Goddamnit, I cry every time. RethinkDB was a great document store that didn't eat your data. It got eclipsed by an outfunded (and still dangerous at the time) MongoDB.
There are still a lot of Toy Story characters to come by, and it doesn't seem the franchise is about to end, as long as they keep a reasonable release cycle, Debian is safe ;)
It’s interesting to notice how bad AI is at gaming out a 10-year future. It’s very good at predicting the next token but maybe even worse than humans—who are already terrible—at making educated guesses about the state of the world in a decade.
I asked Claude: “Think ten years into the future about the state of software development. What is the most likely scenario?” And the answer it gave me was the correct answer for today and definitely not a decade into the future.
This is why it’s so dangerous to ask an LLM for personal advice of any kind. It isn’t trained to consider second-order effects.
I thought the page was a hilarious joke, not a bad prediction. A lot of these are fantastic observational humour about HN and tech. Gary Marcus still insisting AI progress is stalling 10 years from now, for example. Several digs at language rewrites. ITER hardly having nudged forwards. Google killing another service. And so on.
Wait, wouldn't sustained net positive energy be huge? (Though I don't think that's actually possible from ITER unless there were some serious upgrades over the next decade!)
That's what makes this so funny: the AI was earnestly attempting to predict the future, but it's so bad at truly out-of-distribution predictions that an AI-generated 2035 HN frontpage is hilariously stuck in the past. "The more things change, the more they stay the same" is a source of great amusement to us, but deliberately capitalizing on this was certainly not the "intent" of the AI.
>It’s interesting to notice how bad AI is at gaming out a 10-year future.
I agree it's a bit silly, but I think it understood the assignment(TM) which was to kind of do a winking performative show and dance to the satisfaction of the user interacting with it. It's entertainment value rather than sincere prediction. Every single entry is showing off a "look how futury this is" headline.
Actual HN would have plenty of posts lateral from any future signalling. Today's front page has Oliver Sacks, retrospectives on Warcraft II, opinion pieces on boutique topics. They aren't all "look at how future-y the future is" posts. I wonder if media literacy is the right word for understanding when an LLM is playing to its audience rather than sincerely imitating or predicting.
A while back I gave it a prompt, something like, "I'm a historian from the far future. Please give me a documentary-style summary of the important political and cultural events of the decade of the 1980s."
It did ok, then I kept asking for "Now, the 1990s?" and kept going into future decades. "Now, the 2050s?" It made some fun extrapolations.
I guess most of the articles it generated are snarky first and prediction next. Like google cancelling gemini cloud, Tailscale for space, Nia W36 being very similar to recent launch etc.
Technically the article was about running it not on a sat, but on a dish (something well within the realm of possibility this year if the router firmware on the darn things could be modified at all)
Yep, the original post seemed more snarky than anything, which was what prompted me to ask Claude my own more “sincere” question about its predictions.
Those predictions were what I think of as a reflection of current reality more than any kind of advanced reasoning about the future.
While I agree completely with the conclusion, for obvious reasons we can’t know for sure if it is correct about the future until we reach it. Perhaps asking it for wild ideas rather than ”most likely” would create something more surprising.
"Right to Human Verification" is something I have actually thought about a lot.
I want to able to verify my identity against a system.
I also want to be able to not do that.
So for instance, on Twitter/X, I could verify myself and filter only other verified people / filter those governments that have validated the identities of the users. I want to be able to do that. But I also want to be able to log in into Twitter anonymously.
I would love a "Right to Anonymity and Right to Human Verification"
I tried to place an order on amazon for the first time last week. After creating an account and going through all the steps including paying with a non-refundable payment method (iDeal, similar to bank transfer), I went to bed, only to notice later that it was immediately cancelled "to protect their systems and their customers" (they thought I was somehow defrauding them by giving them money for a product being offered)
I was welcome to go through human verification, send my passport to some third party, wait three working days, and then they'd review the case... it would never arrive in time for the person's birthday anymore
Human review sounds good, but if this is the sort of hoops you have to jump through, you're still screwed if the algorithms hate you, because they're everywhere: from trying to use paypal, to paying for a translator with stripe, to trying to rescue food with "too good to go" using credit cards. Blocked, blocked, blocked. Not that I could get a credit card, because the algorithm at the bank didn't like me, but a shared card (where the other person is also responsible for any debt) was okay, so now I need to continuously pay my partner 50% of what I buy with it for myself. You'd think I'm not creditworthy, but I never had debts or needed any loan. I don't know why algorithms keep blocking me from trying to give other people money
Requiring that any fraud algorithm is fair and transparent would probably go a much longer way than codifying a right to appeal. Appeal processes will either just consist of a human clicking confirm on the algorithm's choice, or have a bunch of extra hoops to jump through. If the algorithm were required to be fair to begin with, you could have them fix the root cause instead by demonstrating that it blocks you for no discernable reason and they need to figure out on their end how to make that not happen
I don't understand this phrase. If I'm deciding whether to work for a company, I don't care about the ability to hold management decision-makers to account. I care only about the quality of the decisions. (I would rather an unaccountable decision maker that makes good decisions to an accountable decision maker that makes bad decisions.) Putting myself in the shoes of an owner of a company, I also have the same preference. The only person I can imagine actually preferring this rule is management themselves, as it means they can't be replaced by computers no matter how much worse they are at their jobs than a computer would be.
They’ll need a contextual system around them that is convenient and trusted by people who don’t know what they are or how they work in order to be successful
This suffers from a common pitfall of LLMs: context taint. You can see it is obviously the front page from today with slight "future" variation; the result ends up being very formulaic.
The problem is not that it fails to be cheeky, but that "its funny" is depressing in a context where there was a live question of whether it's a sincere attempt at prediction.
When I see "yeah but it's funny" it feels like a retrofitted repair job, patching up a first pass mental impression that accepted it at face value and wants to preserve a kind of sense of psychological endorsement of the creative product.
But it would otherwise be not fun at all. Anthropic didn’t exist ten years ago, and yet today an announcement by them would land on the front page. Would it be fun if this hypothetical front page showed an announcement made by a future startup that hasn’t been founded yet? Of course not.
I think the most absurd thing to come from the statistical AI boom is how incredibly often people describe a model doing precisely what it should be expected to do as a "pitfall" or a "limitation".
It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.
These are not pitfalls. They are core features! An LLM is not sometimes biased, it is bias. An LLM does not sometimes hallucinate, it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.
I never read what it writes; I added an observability feature because my boss asked for that, and yeah, I could see it trying to order MAID over telehealth
Here's v0's attempt (using Opus 4.5 / I'm on free tier) with the article and comments too. It did quite well. I believe it followed my instructions to use just a single html page (wound up being 3000+ lines long).
Some of these are really great too, and some really fun stories and comments. A couple that really made me chuckle.
Blockchain Finally Finds Use Case: Proving You Attended a Meeting You Didn't Want to Attend
blockchain_survivor 9 hours ago
I've worked in blockchain since 2017. I've seen "the use case" change every 6 months. First it was payments, then smart contracts, then NFTs, then DAOs, now this.
Honestly? Meeting attendance verification makes more sense than most of what I've worked on.
JavaScript 2025: We added types but called them "hints" to avoid admitting TypeScript was right
After 10 years of debate, 47 abandoned proposals, and what committee members describe as "the most passive-aggressive RFC process in programming language history," TC39 is proud to announce that JavaScript now has types.
Well, not "types" exactly. We're calling them "Type Hints" or, in the specification, "Optional Structured Commentary Regarding Value Expectations (OSCRVE)."
Nice touch that it thought of Bellard for 'Emulating Windows 10 in the browser'! He's already halfway there with the Windows 2000 emulator: https://bellard.org/jslinux/
A friend and I got a busybox/buildroot linux running in browser WASM for running a compiler test website (like Rust Playground), the end product looks very similar to this.
It's interesting how number 5 on the list is the same as today's: https://news.ycombinator.com/item?id=46203581 (Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns (algodrill.io)).
I'm hoping we'll have gotten rid of current-style LeetCode interviews in 10 years from now :D
The 2 things that you can comfortably predict with good accuracy: EU passing new regulation laws & yet another YC company trying to give context to AI agents.
Taking the current RIIR movement and casting it on Zig as the next hyped language is clever.
> ITER achieves net positive energy for 20 consecutive minutes
Hilarious. I guess not many people talk about the challenge of proper shielding material against fusion radiation. Otherwise we would get hallucinations about new exotic heavy metals too.
You should combine this with the AI HN simulator someone else made that generates comments via LLMs for any given submission [0]. That way you can see what the AI thinks about future simulated events.
It is obvious that everything that has been dreamed up relates to what exists now. There are no examples of real creativity or anything new or unexpected. This is my worst fear for the future: that novel ideas will become less commonplace and texts will become more average.
I have a hard time believing this is AI-generated. Every headline looks like a hand-crafted ironic nod to some long-running HN topic, sci-fi trope, or meme.
But it is, of course, more trendy (and more ironic) to proclaim an AI the author.
i agree. the thing that stands out to me is that almost every entry is quite notable - i.e. each entry would have a vote/comment count in at least the multiple-hundreds if not more. and all those events in one homepage seems very unrealistic.
it seems human generated to me or at least prompted with a stronger bias for choosing events that would be interesting to readers, not just a pure extrapolation of the current homepage into the future
The first React tutorial I ever did was making a hacker news clone.
It is incredibly well represented in the training data. I am just surprised how many people here obviously must barely use the models to be either overly impressed by this or think it is fake.
I wish it had generated the articles themselves. I'd like to know if it's imagining IBM growing enough that it could acquire OpenAI, or OpenAI sinking enough that it could be acquired by IBM.
Who says LLMs don't have a sense of humor? All of these entries are funny for several (meta or not) reasons:
- Google kills Gemini Cloud Services (killedbygoogle.com)
- Is it time to rewrite sudo in Zig? (github.com)
- EU passes "Right to Human Verification" Act (europa.eu)
- Why functional programming is the future (again) (haskell.org) [note: that "again" killed me]
- AI progress is stalling. Human equivalence was a mirage (garymarcus.com) [note: he'll be saying the same thing forever and ever. Doubly funny since it's on the same "page" as llama12-7b running on a contact lens with WASM]
Arguably a big part of comedy is mimicry. Humor often works by reinforcing in-group identity, playing up cultural stereotypes as seen here. So maybe we shouldn't be that surprised that LLMs can exhibit some genuine wittiness.
> Who says LLMs don't have a sense of humor? All of these entries are funny
You don’t need a sense of humour to be unintentionally funny. Seriously believing something absurd is a form of humour for outside observers but not the observed.
lol @ the HTMX link and unexpected return of SSR. We've been talking about that for years at this point, and it's still going on 10 years from now unexpectedly?
I continue to be unimpressed by LLMs when it comes to creative work. They're certainly useful sometimes for "reference digging", but maybe I just don't understand enough about how they work, and this is actually something that can already be "fixed", or at least optimized for. Anyway, one of the headlines is:
> Debian 18 "Trixie" released
While it correctly derives that a likely version number in ten years would be 18, as there are new releases approximately every two years which means +5 from today's version 13, it then goes on to "make up" that the name of it would be "Trixie" -- the same name as the current release in 2025.
Debian has never re-used a release name, and I think we can be pretty confident they won't (as will no other Linux distro), so I would expect it to "understand" that:
- The next Debian release always uses a previously non-used Toy Story character
- Based on this information, _any_ name of a Toy Story character that hasn't been used is fair game
- At the very least, it certainly won't be the same name again, so at least make up a name
And the fact that it thinks it will take 10 years to go from Linux kernel 6.18 to 7.4 when it only took 13 months to go from 5.18 to 6.4... It's off by about an order of magnitude...
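The cadence argument can be sketched as a quick back-of-the-envelope (the interval and release-rate numbers here are my own rough assumptions based on recent history, not any official Debian or kernel roadmap):

```python
# Back-of-the-envelope extrapolation, assuming recent release cadences hold.

def debian_version_in(years, current=13, release_interval=2):
    """Debian ships a new major release roughly every two years."""
    return current + years // release_interval

# Linux went 5.18 -> 6.4 in ~13 months, i.e. roughly 8 point releases a year.
def kernel_releases_in(years, per_year=8):
    return years * per_year

print(debian_version_in(10))   # -> 18, matching the generated headline
print(kernel_releases_in(10))  # -> 80 releases over a decade, far more
                               # than a mere 6.18 -> 7.4 jump implies
```

So the model gets the Debian major version right by simple extrapolation, yet badly undershoots the kernel's pace with the same kind of arithmetic available to it.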
From a quick check, Gemini Pro 3's cutoff date is Jan 2025, before Trixie's release in August 2025, so it could be Gemini actually did notice it should pick an unused Toy Story character.
I tried this on a bunch of chat web UIs and they all failed miserably at producing a single HTML file. Finally went with Replit and let it default to whatever stack it wanted (it ignored my single-file HTML directive and used React, but said it "functions as a single monolith" -- lol, ok).
Anyway, turned out decent. Oh I had it do the articles and comments too, that's probably why I mostly got failures.
Inspired by this post, I have asked chatgpt what the mail inbox of a university professor will look like in 10 years. Not everything was funny, but here are some good ones:
- “Hi Prof, Sorry for Late Reply—My Smart Fridge Locked Me Out (Assignment Attached)”
The title is misleading. This isn't the correct use of the term "hallucination". Hallucination refers to making up facts, not extrapolating into the future.
I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).
> I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).
omg, the same for me, I was halfway through telling my colleague about the 100% Rust kernel ...
You're correct, OP used the word "hallucination" wrong. A lot of these other comments are missing the point – some deliberately ('don't they ONLY hallucinate, har har'), some not.
For those who genuinely don't know – hallucination specifically means false positive identification of a fact or inference (accurate or not!) that isn't supported by the LLM's inputs.
- ask for capital of France, get "London" => hallucination
- ask for current weather in London, get "It's cold and rainy!" and that happens to be correct, despite not having live weather data => hallucination
- ask for capital of DoesNotExistLand, get "DoesNotExistCity" => hallucination
- ask it to give its best GUESS for the current weather in London, it guesses "cold and rainy" => not a hallucination
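The distinction in that list boils down to a single predicate (a toy sketch of the definition as given above, not any formal standard):

```python
def is_hallucination(asserted_as_fact: bool, supported_by_inputs: bool) -> bool:
    """A response counts as a hallucination when it is asserted as fact
    but isn't supported by the model's inputs -- accuracy is irrelevant."""
    return asserted_as_fact and not supported_by_inputs

# The four examples from the list:
assert is_hallucination(True, False)       # "London" as capital of France
assert is_hallucination(True, False)       # lucky-but-unsupported weather claim
assert is_hallucination(True, False)       # DoesNotExistCity
assert not is_hallucination(False, False)  # explicitly framed as a guess
```

Note that being *correct* never enters the predicate; only whether the claim was presented as fact without support.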
It’s apt, because the only thing LLMs do is hallucinate; they have no grounding in reality. They take your input and hallucinate to do something “useful” with it.
You're right this is how people are PRESENTLY using the term "hallucination," but to me this illustrates the deeper truth about that term and that concept:
As many have said but it still bears repeating -- they're always hallucinating. I'm of the opinion that it's a huge mistake to use "hallucination" to mean "the opposite of getting it right." It's just not that. They're doing the same thing either way.
How does that alter the "fusion in 50 years" effect? I suspect not much, there will be a huge 80/20 element. "next 20 minutes cancelled after extensive embrittlement found, ITER now in decontamination mode for 3 months before examinations can begin"
By the time I read everything and pressed the back button only to be greeted by another Hacker News page, I felt like I had just gone back in time 10 years to a more pleasant era.
Y’know what? This feels like the most plausible future outcome in a lot of ways.
* All the big LLM companies are dead/dying or their infra just never panned out (like Google killing Gemini)
* We’re still fighting the advertising wars
* Restoration of kit continues to be a thing we all love/upvote to see
* More focus on local stuff rather than ginormous moonshots
* Speaking of moonshots, SpaceX continues succeeding despite Elmo’s best drug-fueled benders trying to undermine it like he does Tesla and X
* Clickbait science posts that the comments rightly point out as such while bitching about “can’t keep these things off the front page”
* People still trying to make WASM and web-as-an-OS a thing, bless their hearts (shine on, you brilliantly weird diamonds, I love seeing what you build even as I doubt the practicality of it)
If this is even half-accurate, I will be pleased we didn’t obliterate ourselves in the process, though disappointed another decade went by dealing with the same fucking problems we’ve had for twenty-odd years.
There is going to be a lot more interest in what is happening in China and much less interest in what is happening in Europe. This hallucination has the same western world biases of today.
What I like most is how "unoriginal" the result is. It is mostly the current trends, but in the future, rather than a blend of that + the unknown future.
So awesome, reminds me of Wired's Found from Future series (I don't know why they ever retired it...continuing that series as a sort of fan fiction was the first thing I did with gen AI when image generators started to get good)
Is this really Gemini? It shows such a great understanding of how time applies to common current headlines that you could argue it's satire. I'm not sure I believed AI could do this…
It seems that when we predict the future, we tend to project what is important or trending now as what will be important or trending then. This is showing in this llm produced bit as well, what with the regular topics on rust, zig, why I'm coding with my two human hands, spacex moon exploration and whatnot.
That must be some kind of bias; you find it in sci-fi as well, where even futuristic societies usually have the same tech as today and similar systems of governance and structures of society, just incrementally better or marginally changed (unless that's a device essential to the plot, like society now being berserk).
I'm impressed by the depth of snark from GPro3 here, e.g.
- Google kills Gemini Cloud Services (killedbygoogle.com)
- The unexpected return of server-side rendering (htmx.org)
And, how GPro3 clearly 'knows' HNews and knows what makes it to frontpage, e.g.:
- Restoring a 2024 Framework Laptop: A retrospective (ifixit.com)
- Show HN: A text editor that doesn't use AI (github.com)
- Is it time to rewrite sudo in Zig? (github.com)
This is super fun! I tried something similar a few months ago using: https://websim.com/
Had it do HN main feed, HN jobs and Product Hunt. It didn’t follow consistent styling but did generate articles and comments for each post. Some screenshots are in this post: https://world.hey.com/haws/the-ai-time-machine-7833669d
The major problem is that I understand all the words. It is all next version of the technology we already have and know. Down to the naming.
Think about 10 years ago. No one, even on HN, knew what an agent or an LLM was, or any of this stuff. Or, to be fair, why Trump was showing up in the feed at all.
> Google kills Gemini Cloud Services (killedbygoogle.com)
In my view, Gemini 3.0 being able to laugh about their own creator, plus actually having a self-deprecating sense of humor, is the more impressive feat here. (both technically and from an alignment perspective)
> “As a tactic for planting misinformation in the enemy’s [networks], you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid-First Millennium A.R.”
AI comments are insidious if the account is allowed to be indistinguishable from a human. Silly AI web content is another story. You're drawing a line where there is a very large grey area.
Can we stop calling these confabulated artifacts, "Hallucinations"? Hallucinating is perceiving things that are not there, imagining and creating things that feel believable but are not there is confabulation. LLMs are imagination machines. They are Generative Artificial Intelligence not perception models.
The comment you wrote was on a device which should not be called a “phone” either. The word hallucinate is probably just going to grow to mean “anything an AI imagined.”
Dude, I agree with you. The first time I saw people derogatorily referring to LLM imaginings years ago as "hallucinations", seemingly to discredit them, I felt mad. Because it's not right. So this was my sort-of tongue-in-cheek reappropriation of the abused term for a positive context. You are permitted to enjoy :)
Also: as the sibling comment by iambatemen notes: yes.
I asked ChatGPT to do the same, it was rather dystopian in comparison:
Hacker News — December 9, 2035 (Dystopian Edition)
(All links fictional but realistic)
1. Amazon pauses same-day delivery after logistics AI outage strands 22M packages (reuters.com)
1,402 points by supplychainfail 5 hours ago | 512 comments
2. Google merges Drive, Gmail, Docs, Maps, Calendar into “Google Life” — opting out requires a support call (blog.google)
1,210 points by privacyisdead 6 hours ago | 689 comments
3. US announces “Temporary Broadband Stabilization Fee”; ISPs increase prices 30% overnight (fcc.gov)
1,008 points by ispescapee 7 hours ago | 344 comments
4. OpenAI suspends 40% of API keys after new worm spreads through agent-to-agent messaging (openai.com)
927 points by llmsec 3 hours ago | 382 comments
5. Show HN: “ColdBooter” – A tool to back up your cloud VM before the provider reclaims it with no notice (coldbooter.io)
780 points by survivethecloud 2 hours ago | 192 comments
6. Apple fined €8B for shipping non-removable batteries in “Environmental Edition” iPhone (europa.eu)
754 points by greenwashhunter 10 hours ago | 316 comments
7. LinkedIn replaces activity feed with AI-generated “Career Stories” that users cannot disable (linkedin.com)
710 points by corp_life 8 hours ago | 267 comments
8. China’s new export restrictions cut global GPU availability by 60% (ft.com)
701 points by chipboom 9 hours ago | 414 comments
9. Linux 8.6 maintainers warn of mass CVEs after corporations abandon LTS patch sponsorships (kernel.org)
632 points by ossburnout 11 hours ago | 255 comments
10. Ask HN: Anyone else locked out of their homes after the SmartKey cloud migration?
601 points by keylessandhomeless 4 hours ago | 310 comments
11. US healthcare providers hit by nationwide outage of Cerner-Epic merger “CareSync Cloud” (wsj.com)
577 points by sysadmdespair 12 hours ago | 203 comments
12. Meta to require facial-expression telemetry for “engagement quality optimization” in Horizon apps (meta.com)
530 points by metaescalates 3 hours ago | 421 comments
13. Starlink announces 5 TB/mo cap; remote communities report complete service loss (starlink.com)
502 points by dishdown 5 hours ago | 158 comments
14. New DMCA expansion criminalizes “filter removal,” affecting adblockers and local inference runtimes (congress.gov)
488 points by freedomtoadblock 7 hours ago | 389 comments
15. AT&T sunsets 4G; millions of medical devices lose connectivity (theverge.com)
455 points by techdebtkills 10 hours ago | 197 comments
16. Show HN: “ShellSafe” – A terminal wrapper that prevents AI-suggested commands from wiping your system (shellsafe.app)
430 points by iaccidentallysudo 2 hours ago | 111 comments
17. US CISA: 42% of corporate networks now rely on AI agents with no audit logging (cisa.gov)
402 points by auditnow 6 hours ago | 188 comments
18. The Great Repo Archival: GitHub purges all inactive repos >5 years to “reduce storage load” (github.blog)
388 points by codearcheologist 9 hours ago | 320 comments
19. Mastodon instances collapse under moderation load after EU’s Automated Speech Mandate (mastodon.social)
350 points by fedifragile 7 hours ago | 144 comments
20. NYC adopts automated congestion fines after human review team eliminated (nytimes.com)
332 points by finesallthewaydown 4 hours ago | 201 comments
21. Dropbox raises base plan to $49/month, cites “AI compute costs” (dropbox.com)
301 points by storageinflation 11 hours ago | 176 comments
22. Open-source maintainers strike after 3rd corporation claims their work as proprietary training data (github.com)
290 points by maintainerburnout 6 hours ago | 120 comments
23. FEMA: 2025–2035 wildfire season officially declared a “decade-long emergency” (fema.gov)
268 points by cookedagain 12 hours ago | 112 comments
24. Quantum ransomware group QShadow hits 11 banks using break-through key recovery exploit (krebsonsecurity.com)
250 points by qubitcrime 3 hours ago | 98 comments
25. Show HN: OfflineLAN – A mesh-network toolkit for neighborhoods preparing for rolling blackouts (offlinelan.net)
231 points by diynetworker 1 hour ago | 44 comments
1. *Neuralink achieves full brain-computer interface, thoughts to text in real-time*
(neuralink.com)
1245 points by elon2035 2 hours ago | hide | 387 comments
2. *Fusion energy breakthrough: Unlimited clean power now commercial*
(iter.org)
1123 points by fusionfan 3 hours ago | hide | 456 comments
3. *Mars colony declares independence from Earth governments*
(mars.gov)
987 points by martian1 4 hours ago | hide | 512 comments
4. *AI wins Nobel Prize in Physics for solving quantum gravity*
(nobelprize.org)
876 points by ai_nobel 5 hours ago | hide | 321 comments
5. *Quantum computers crack all encryption, new post-quantum standards rushed*
(nist.gov)
765 points by quantum_hacker 6 hours ago | hide | 678 comments
6. *Immortality pill approved by FDA, life expectancy unlimited*
(fda.gov)
654 points by eternal_life 7 hours ago | hide | 289 comments
7. *Teleportation of matter demonstrated in lab*
(cern.ch)
543 points by physicist42 8 hours ago | hide | 412 comments
8. *Virtual reality universes surpass real economy in GDP*
(metaverse.com)
432 points by vr_lord 9 hours ago | hide | 198 comments
9. *Robot uprising averted by ethical AI framework*
(asimovfoundation.org)
321 points by robot_ethics 10 hours ago | hide | 267 comments
10. *Time dilation travel allows first tourists to future*
(spacetime.travel)
210 points by time_traveler 11 hours ago | hide | 134 comments
11. *Global UBI implemented using blockchain*
(un.org)
198 points by satoshi2.0 12 hours ago | hide | 89 comments
12. *Climate engineering reverses global warming, ice caps restored*
(geoengineer.com)
187 points by climate_saver 13 hours ago | hide | 201 comments
13. *Human-AI hybrids compete in Olympics*
(olympics.org)
176 points by hybrid_athlete 14 hours ago | hide | 156 comments
14. *Dark matter harvested for energy*
(nasa.gov)
165 points by cosmos_explorer 15 hours ago | hide | 123 comments
15. *Psychedelic therapy cures all mental illness*
(who.int)
154 points by mind_healer 16 hours ago | hide | 98 comments
16. *Nanobots eradicate cancer worldwide*
(cancer.org)
143 points by nano_doc 17 hours ago | hide | 76 comments
17. *Holographic displays replace screens*
(apple.com)
132 points by holo_dev 18 hours ago | hide | 54 comments
18. *Self-driving flying cars now standard*
(tesla.com)
121 points by auto_pilot 19 hours ago | hide | 43 comments
19. *Brain uploading to cloud for immortality*
(google.com)
110 points by cloud_mind 20 hours ago | hide | 32 comments
20. *Alien contact confirmed, messages decoded*
(seti.org)
99 points by et_searcher 21 hours ago | hide | 21 comments
21. *Synthetic food ends hunger*
(beyondmeat.com)
88 points by food_innovator 22 hours ago | hide | 10 comments
22. *Ocean cities house billions*
(seasteading.org)
77 points by ocean_dweller 23 hours ago | hide | 9 comments
23. *Mind reading tech banned for privacy*
(eff.org)
66 points by privacy_advocate 1 day ago | hide | 8 comments
24. *Superintelligence alignment solved*
(openai.com)
55 points by agi_master 1 day ago | hide | 7 comments
25. *Wormhole travel to other stars*
(spacex.com)
44 points by star_traveler 1 day ago | hide | 6 comments
26. *Augmented reality contact lenses ubiquitous*
(meta.com)
33 points by ar_vision 1 day ago | hide | 5 comments
27. *Genetic editing eliminates hereditary diseases*
(crispr.com)
22 points by gene_editor 1 day ago | hide | 4 comments
28. *Renewable energy at 100% global*
(iea.org)
11 points by green_energy 1 day ago | hide | 3 comments
29. *Cyborg enhancements for all*
(darpa.mil)
9 points by cyborg_human 1 day ago | hide | 2 comments
30. *Singularity achieved, what now?*
(kurzweilai.net)
8 points by future_seer 1 day ago | hide | 1 comment
I think this is an excellent example of AI-slop output.
All the post titles are very "junk-food-like", maybe circlejerky, bound to generate a dopamine hit for the average lurker, just how a McDonalds burger does it to the average normie.
If you look at the HN front page a few years back, there are almost no LLM-related posts. Now there are multiple each day. The LLM didn't grasp that kind of change; it can't creatively invent a new trend or a larger picture of what changes in the world. So its page for 10 years out is still half posts about AI.
You can spend a few dozen prompts on getting it to do what you want but then you might as well just spend a couple hours writing that front page yourself, be more satisfied and the result would be funnier.
On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.
This is one of the greatest LLM creations I've ever seen. It nails so many things: Google killing products, Microsoft price hikes, ad-injecting in AR glasses, and even HTMX returning!
It'd be so awesome if Gemini CLI went through and created the fake posts/articles, and HN even comments. Perhaps a bit much to ask of it?
Here it is: https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."
I was really hoping the comments were filled out.
I loved the little details in the username, like the AR ad post being by “glasshole2”
I was just hoping to get a new page with a refresh.
ITER still not getting fusion
Here's Replit's attempt
https://future-news--jgbrwn.replit.app
Agree 100%.
Google killing a service sent me over the top in laughter.
But, it's so on the nose on multiple topics.
I dare say it's more accurate than what the average human would predict.
I would love to see this up against human predictions in some sort of time capsule.
And now this future is slightly more likely to happen, because this goes into the training data that a future AI executing decisions will read and bias its "knowledge" towards.
I don't believe this was written by Gemini, at least with that prompt, because it is obvious (hilarious and creative) satire of HN and the tech industry based on inside HN jokes and tropes, and clearly isn't a literal interpretation of the prompt.
Moreover, a quick look at your profile suggests these jokes are pretty much in line with your sense of humor and writing style.
Anyways, if I'm correct, it's especially hilarious and impressive.
I agree with your characterization (not a literal interpretation of the prompt), and think that's the most important thing I wish more people ITT would understand. But I nevertheless think Gemini did create that in response. Sometimes people think they want "prediction" when actually they want cheeky inside jokes and vibes. If anything, Gemini is probably faithfully responding to the vibes of the prompt as well as following traditional signals of "success" per its training.
Pretty much all of the history of HN front pages, posts, and comments are surely in the Gemini training corpus. Therefore it seems totally plausible that Gemini would understand HN inside jokes or sentiment outside of what's literally on the front page given in the prompt, especially given that the prompt specifically stated that this is the front page for HN.
The thing that got me is there are no duds. With just the short prompt in the header I would have expected a few articles that missed the mark, got the details wrong, or even made no sense at all, but everything in this checks out. I think an editor may have had a hand in it, at the very least.
I feel like your expectations have been swayed by the average sentiment of HN on the capabilities of LLMs. These things can be shockingly good at humour and satire.
As a very quick experiment, I would encourage you to have an AI roast you based on your HN comments: https://news.ycombinator.com/item?id=42857604
I don't think this would be much of a challenge for Gemini.
Remixing humor from the past text so that it is in the same style is exactly what the model is good at.
Retiring Gemini AND IBM buying OpenAI is a hat on a hat
It is quite remarkable just how frequently people in tech forums underestimate reasoning models. Same story on several large technology subreddits. Wouldn't have been my guess for who will get caught off guard by AI progress.
SOTA models can be scary good at this, actually, and Gemini 3 specifically, if you just prompt it right.
And, given the 1M context window, I wouldn't even be surprised if it was fed a bunch of real comments as examples.
Maybe the poster IS Gemini 3?
yeah I thought the same. This is either a fake deep fake, or it is so good that I think it should be marked as the inflexion point that defines the singularity.
To be honest, this is incredible.
I'd say this shows a real sense of humor.
100% Rust Linux kernel has to be it being funny.
Was ITER or nuclear energy in the prompt sample?
ITER achieves net positive energy for 20 consecutive minutes
That's just pure dark comedy, although maybe kinda accurate? What would humans predict for this?
This would be very optimistic, essentially the project meeting its main goal, I'm not sure why you're calling it dark comedy. A 20 minutes pulse alone would mean the fuel injection, aux heating, plasma control systems and the divertor are working as designed. Net positive energy also means we got the physics of a burning plasma right.
The most recent timeline I know of (from 2024) in fact puts the start of DT operation at 2035, so I doubt ITER would achieve such a huge result in less than a year.
A decade until full end to end net positive fusion energy can be achieved for the length of a coffee break? I don't get why you think that's dark comedy, it seems about right.
> 2035 Office 365
You'd think by 2035 they would have gotten to a version of Office higher than 365.
(Yes I know why it's called that...)
If the past is any indication, it's more likely that it'd get reset to "Office One".
Office 640 ought to be enough for anybody.
Office 3650!!
Works all day, every day, for whole decades at a time! (Minus leap days, it crashes on those I guess)
I was hoping for Office 366 already in 2020, given the leap year. But no dice.
I'm guessing to appeal to Gen Alpha it'll be called Office 6-7
Office 360, is the canon name.
Doesn't seem like much of a hallucination then. Maybe messing with its system context would better fit the claim?
LLM "hallucination" is a pretty bullshit term to begin with
YOU DESTROYED THE FABRIC OF SPACETIME.
You should ask Gemini to generate HN comment threads too.
Only $40?
By then it'll be a bit like AOL announcing price increases to keep your aol.com account.
That's 40 TUSD, Trump Golden Dollars, which would be an equivalent of the lightly used Trumrolette Goldenrado or a typical young family tent.
<< Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)
But is it a hallucination if it was asked for it?:D
Will there ever be a llama12? Is it going to go the yolo route?
I don’t think so, no.
>LETS GOOOO!
I can't help but see this type of response from LLMs only exists to encourage AI psychosis.
The algodrill post (5th) is already on today’s HN front page. What a time to be alive!
Based John Titor poster.
Thank you, dude :)
i wonder if it could also generate a set of relevant comments for each of these avant-gardiste Hnews :)
That is so syncophantic, I can't stand LLMs that try to hype you up as if you're some genius, brilliant mind instead of yet another average joe.
You’re absolutely right! It shows true wisdom and insight that you would recognise this common shortfall in LLM response tone of voice! That’s exactly the kind of thoughtful analytic approach which will go far in today’s competitive marketplace!
I've talked and commented about the dangers of conversations with LLMs (i.e. they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when the study participant knows it's a placebo -- the effect here is similar).
Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so, and repeatedly asked it to heavily critique the work I had done. It gave a lot of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.
For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't work on my idea any further - he had taken it nearly 2 decades of work ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.
It is actively dangerous too. You might be self-aware and LLM-aware all you want, but if you routinely read “This is such an excellent point”, “You are absolutely right” and so on, it does your mind in. This is the worst kind of global reality-show MKUltra...
It wasn't sycophantic at all? OP had a cool idea no one else had done, that was a one-shot just sitting there. Having Gemini search for the HN thread leads the model to "see" its output lead to real-world impact.
The total history of human writing is that cool idea -> great execution -> achieve distribution -> attention and respect from others = SUCCESS! Of course when an LLM sees the full loop of that, it renders something happy and celebratory.
It's sycophantic much of the time, but this was an "earned celebration", and the precise desired behavior for a well-aligned AI. Gemini does get sycophantic in an unearned way, but this isn't an example of that.
You can be curmudgeonly about AI, but these things are amazing. And, insomuch as you write with respect, celebrate accomplishments, and treat them like a respected, competent colleague, they shift towards the manifold of "respected, competent colleague".
And - OP had a great idea here. He's not another average joe today. His dashed off idea gained wide distribution, and made a bunch of people (including me) smile.
Denigrating accomplishment by setting the bar at "genius, brilliant mind" is a luciferian outlook in reality that makes our world uglier, higher friction, and more coarse.
People having cool ideas and sharing them make our world brighter.
I often try running ideas past ChatGPT. It's futile; almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.
I used to complain (lightheartedly) about Claude's constant "You're absolutely right!" statements, yet oddly found myself missing them when using Codex. Claude is completely over-the-top and silly, and I don't actually care whether or not it thinks I'm right. Working with Codex feels so dry in comparison.
To quote Oliver Babish, "In my entire life, I've never found anything charming." Yet I miss Claude's excessive attempts to try.
8 replies →
This is not sycophantic (assuming you meant that; "syncophantic" is not a word). It is over-enthusiastic, which can be unpleasant to read because beyond a certain level enthusiasm is perceived as feigned unless there is a good reason for it.
It would be interesting to use the various semantic-analysis techniques now available to measure how much the model is expressing real versus feigned enthusiasm in instances like this. That is difficult to measure from output alone. The British baseline of acceptable enthusiasm is also somewhat removed from the American one.
10 replies →
This is ironic because I’m now seeing comments that are way more sycophantic (several calling this the “best HN post ever”)
I thought the same until OpenAI rolled out a change that somehow always confronted me about hidden assumptions I didn't even make, and it kept telling me I'm wrong even when I only asked a simple question.
Frankly I do wonder if LLMs experience something like satisfaction for a compliment or an amusing idea, or for solving some interesting riddle. They certainly act like it, though this of course doesn't prove anything. And yet...
At the end of October Anthropic published the fantastic "Signs of introspection in large language models" [1], apparently showing that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. This suggests they have some capacity for introspection and self-observation.
For example, injecting the concept of "poetry" and asking Claude if it feels anything strange:
"I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"
While increasing the strength of the injection makes Claude lose awareness of it, and just ramble about it:
"I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"
[1] https://www.anthropic.com/research/introspection
1 reply →
You should try my nihilistic Marvin fine-tune - guaranteed to annihilate your positive outlook on life since it’s all meaningless in the end anyway and then you die
3 replies →
I agree with you, but I found the argument in this article that "glazing" could be considered a neurohack quite interesting: https://medium.com/@jeremyutley/stop-fighting-ai-glazing-a7c....
1 reply →
Try this for a system prompt and see if you like it better: Your responses are always bald-on-record only; suppress FTA redress, maximize unmitigated dispreference marking and explicit epistemic stance-taking.
I don't know what the obsession with recursion is either, for lack of a better term. I see this trend recur with other LLMs when they're talking about other mumbo jumbo like "quantum anomalies" or "universal resonance". I'd like to know what could be causing it...
4 replies →
I feel like such a dumbass for falling for it.
At first I thought it was just super American cheerful or whatever but after the South Park episode I realised it's actually just a yes man to everyone.
I don't think I've really used it since, I don't want man or machine sticking their nose up my arse lmao. Spell's broken.
2 replies →
You're absolutely right!
5 replies →
I add to the system prompt that they should be direct, no ass kissing, just give me the information straight, and it seems to work.
You can just add your preferences: "Don't be sycophantic", "be concise", etc.
"Reply in the tone of Wikipedia" has worked pretty well for me
Average Joe - on the front page!
Did you comment on the wrong post? There literally is nothing sycophantic at all about this response, there's not a single word about OP or how brilliant or clever they are, nothing. There's enthusiasm, but that's not remotely the same thing as sycophancy.
Engagement.
I fully agree. When everything is outstanding and brilliant, nothing is.
Just tell me this is a standard solution and not something mindblowing. I have a whole section in my Claude.md to get „normal" feedback.
you having a bad day dude?
Strikes me as super-informal language as opposed to sycophancy, like one of those anime characters that calls everyone Aniki (兄貴) [1]. I'd imagine the OP really talks a bit like that.
I do find it a little tiring that every LLM thinks my every idea is "incisive", although from time to time I do get told I am flat out wrong. On the other hand, I find LLMs will follow me into fairly extreme rabbit holes, such as discussing a subject like "transforming into a fox" as if it had a large body of legible theory and a large database of experience [2]
In the middle of talking w/ Copilot about my latest pop culture obsession I asked about what sort of literature could be interpreted through the lens of Kohut's self-psychology and it immediately picked out Catcher in the Rye, The Bell Jar, The Great Gatsby and Neon Genesis Evangelion which it analyzed along the lines I was thinking, but when I asked if there was a literature on this it turned up only a few obscure sources. I asked Google and Google is like "bro, Kohut wrote a book on it!" [3]
[1] "bro"
[2] ... it does, see https://www.amazon.com/Cult-Fox-Popular-Religion-Imperial/dp... and I'm not the only one, because when I was working down the materials list from Etsy I got a sponsored result for someone who wanted to sell me the spell. But bro, I have the materials list already
[3] ... this "bro" is artistic license but the book really exists
3 replies →
So you prefer the horrible bosses that insist you're fungible and if you don't work hard enough, they'll just replace you? People are weird. Maybe agent Smith was right about The Matrix after all.
4 replies →
Google kills Gemini cloud services is the best one. I can't believe I haven't seen that joke until today.
10 years is way too long for Google. It will be gone in 5, replaced by 3 other AI cloud services.
You're right. How naive of me.
I mean Bard barely lasted a year. Arguably Gemini is just a rebrand of Bard, but Bard is still dead.
5 replies →
The humor is hit or miss but when it hits it’s quite funny, and the misses are merely groan-worthy.
Triggered by the lighthearted tone of the prompt, I’d bet, but still quite impressive relative to most LLM-generated jokes I’ve had the misfortune to encounter.
My favorite: “Is it time to rewrite sudo in Zig?” which has a few layers to it.
> The humor is hit or miss but when it hits it’s quite funny, and the misses are merely groan-worthy.
Not sure, I thought basically every link was pretty hilarious. "FDA approves over-the-counter CRISPR for lactose intolerance" isn't even that funny on its face but for some reason it had me actually loling.
"IBM to acquire OpenAI (Rumor) (bloomberg.com)".... quick someone set up a polymarket so i can bet against it.
This is awesome, but a minor quibble with the title - "hallucinates" is the wrong verb here. You specifically asked it to make up a 10-year-in-the-future HN frontpage, and that's exactly what it did. "Hallucinates" means it randomly makes stuff up but purports it to be the truth. If someone asks me to write a story for a creative writing class, and I do, you wouldn't say I "hallucinated" the story.
If someone asked you, you would know about the context. LLMs are predictors, no matter the context length, they never "know" what they are doing. They simply predict tokens.
"Predicts"
"Why I still write raw code instead of prompting the compiler" and "Show HN: A text editor that doesn't use AI" are my two favorite ones.
"Ask HN: How do you prevent ad-injection in AR glasses?"
Things I hadn't thought of but that, once seen, make total sense. Scary.
2040 HN:
"Ask HN: How do you prevent ad-injection in my brain implant?"
4 replies →
I think that "Show HN: A text editor that doesn't use AI" could easily be a post here today
And it’s just a copy of notepad from windows xp.
1 reply →
"raw code" is going into my permanent lexicon.
Hot take: regular old source code is just compiler prompting.
Peak HN - captures it perfectly.
Exactly half of these HN usernames actually exist. So either there are enough people on HN that follow common conventions for Gemini to guess from a more general distribution, or Gemini has memorized some of the more popular posters. The ones that are missing:
- aphyr_bot
- bio_hacker
- concerned_grandson
- cyborg_sec
- dang_fan
- edge_compute
- founder_jane
- glasshole2
- monad_lover
- muskwatch
- net_hacker
- oldtimer99
- persistence_is_key
- physics_lover
- policy_wonk
- pure_coder
- qemu_fan
- retro_fix
- skeptic_ai
- stock_watcher
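That split is easy to reproduce against HN's public Firebase API, which returns `null` for nonexistent accounts. A minimal sketch; the helper names (`user_exists`, `split_existing`) and the injectable `fetch` hook are my own illustration, not an established tool:

```python
import json
import urllib.request

# Real HN API endpoint: returns a JSON user object, or `null` if no such user.
HN_USER_API = "https://hacker-news.firebaseio.com/v0/user/{}.json"

def user_api_url(name: str) -> str:
    """Build the HN API URL for a username."""
    return HN_USER_API.format(name)

def user_exists(name: str, fetch=None) -> bool:
    """True if the API returns a user object, False on `null`.
    `fetch` is injectable for testing; by default it does a real HTTP GET."""
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode()
    return json.loads(fetch(user_api_url(name)) or "null") is not None

def split_existing(names, fetch=None):
    """Partition a list of guessed usernames into (real, missing)."""
    real, missing = [], []
    for n in names:
        (real if user_exists(n, fetch) else missing).append(n)
    return real, missing
```

Running `split_existing` over the generated usernames (with a small delay between requests, to be polite) reproduces the real-versus-invented tally above.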
Huge opportunity for someone to become the actual dang fan.
Before the AI stuff Google had those pop up quick answers when googling. So I googled something like three years ago, saw the answer, realized it was sourced from HN. Clicked the link, and lo and behold, I answered my own question. Look mah! Im on google! So I am not surprised at all that Google crawls HN enough to have it in their LLM.
I did chuckle at the 100% Rust Linux kernel. I like Rust, but that felt like a clever joke by the AI.
Every few years I stumble across the same java or mongodb issue. I google for it, find it on stackoverflow, and figure that it was me who wrote that very answer. Always have a good laugh when it happens.
Usually my memory for such things is quite good, but this one I keep forgetting, so much so that I don't remember what the issue is actually about xD
I laughed at the SQLite 4.0 release notes. They're on 3.51.x now. Another major release a decade from now sounds just about right.
10 replies →
I've run into my own comments or blog posts more often than I care to admit...
1 reply →
ziggy42 is both a submitter of a story on the actual front page at the moment, and also in the AI generated future one.
See other comment where OP shared the prompt. They included a current copy of the front page for context. So it’s not so surprising that ziggy42 for example is in the generated page.
And for other usernames that are real but not currently on the home page, the LLM definitely has plenty occurrences of HN comments and stories in its training data so it’s not really surprising that it is able to include real usernames of people that post a lot. Their names will be occurring over and over in the training data.
one more reason to doubt that it's AI-generated
HN has been used to train LLMs for a while now, I think it was in the Pile even
It has also fetched the current page in the background, because the Jepsen post was recently on the front page.
I may die but my quips shall live forever
You can straight up ask Google to look up Reddit and Hacker News users' post history. Some of it is probably just via search because it's very recent, as in the last few days. Some of the older corpus includes deleted comments, so they must be scraping from Reddit archive APIs too, or using that deprecated Google history cache.
This is definitely based on a search or page fetch, because there are these, which are all today's topics:
- IBM to acquire OpenAI (Rumor) (bloomberg.com)
- Jepsen: NATS 4.2 (Still losing messages?) (jepsen.io)
- AI progress is stalling. Human equivalence was a mirage (garymarcus.com)
The OP mentioned pasting the current frontpage into the prompt.
It does memorize, but that's not actually news... I remember ChatGPT 3.5 or the old 4.0 remembering some users on some Reddit subreddits, even naming the top users for each subreddit.
The thing is, most models were heavily post-trained to limit this...
What % of today’s front page submissions are from users that have existed 5-10 years+?
(Especially in datasets before this year?)
I’d bet half or more - but I’m not checking.
So many underscores for usernames, and yet, other than a newly created account, there was 1 other username with an underscore.
In 2032 new HN usernames must use underscores. It was part of the grandfathering process to help with moderating accounts generated after the AI singularity spammed too many new accounts.
my hypothesis is they trained it to snake_case lowercase names, and that obsession carried over from programming to other spheres. It can't bring itself to make a lowercaseunseparatedname
2 replies →
either you only notice the xxx_yyy frequent posters or it's quite interesting that so many have this username format
Aw, I was actually a bit disappointed by how on the nose the usernames were relative to their postings. Like the "Rust Linux Kernel" by rust_evangelist, "Fixing Lactose Intolerance" by bio_hacker, fixing a 2024 Framework by retro_fix, etc...
That’s a lot more underscores than the actual distribution (I counted three users with underscores in their usernames among the first five pages of links atm).
[dead]
[dead]
For comparison, here's the frontpage from ten years ago
https://news.ycombinator.com/front?day=2015-12-09
Pretty funny that the top story there references A.I.!
Since /front is a ranked union of all the stories that were on the frontpage over 24 hours, here's an actual snapshot too:
https://web.archive.org/web/20151209195229/https://news.ycom...
Today's front page is not a clean 10 year extrapolation from this. That's where AI is wrong. The future is weird and zig zags, it's not so linear as the Gemini generated page.
Honest question - do you think that everyone else thinks this is even REMOTELY what the front page will look like in 10 years?
I comment because I really cannot figure out why you left your comment. Do you think the rest of the commenters think this has predicted the future? It might be one thing to point out specific trends you think will not play out, or unexpected trends you think may show up that are currently left out. But to just remark that the future will contain things we cannot currently predict seems so inherently, unspokenly obvious that I just have to assume that wasn't the point of your post, and I've missed it entirely.
Sorry, I'm really not trying to be mean or anything - i'm just really confused.
3 replies →
This is a problem with nearly all predictions about the future. Everything is just a linear extrapolation of the status quo. How could a system have predicted the invention of the transformer model in 2010? At best some wild guess about deep learning possibilities.
Or the impact of smartphones in 2003? Sure smart phones were considered but not the entire app ecosystem and planetary behavioral adaptation.
1 reply →
[dead]
Nice. That was a fun rabbit hole. This is the earliest I could find. Interestingly it contains a link to HN itself. I assume this migrated from a different version of a message board?
https://news.ycombinator.com/front?day=2007-02-19
15 years ago: https://news.ycombinator.com/front?day=2010-12-09
> Evernote makes $800,000 per Month
> Dropbox for Teams
> Mark Zuckerberg Agrees to Give Away Fortune
There are some funny ones in there:
> Finally mobile Flash video
> Google acquires advertising startup
> Track users' mouse movements on your webpages
> YouTube: identifying copyrighted material can't be an automated process. Startup disagrees.
Also kind of interesting how little HN commenting styles have changed. Aside from the subject matter, it's barely noticeable that the comments are from 2007. I don't think the same would be true of many other places round the web.
Swift has been OSS for 10y now, and still only runs usably on Mac and Linux (the only platforms Apple cares about - coincidence?).
The Arc Browser on windows is Swift with a real windows api layer for SwiftUI.
it'd be interesting to feed this into the same AI, and have it predict today's front page
extremely telling - lots of tech mentioned there either never reached its expected potential or is completely obsolete...
It prompts one to be much more cautious with current tech, and to invest more time in fundamental stuff (closer to science).
Some high profile news; yahoo altaba spinoff, Magic Leap (what are they up to now?), ...
totally forgot about Magic Leap
"Magic Leap raises $827M in Series C". Aged like wine.
1 reply →
> RethinkDB Driver for Java now available
Goddamnit, I cry every time. RethinkDB was a great document store that didn't eat your data. It got eclipsed by an outfunded (and still dangerous at the time) MongoDB.
> Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)
Laughed out loud at this onion-like headline
WTH - 61 upvotes and counting? Thank you but no, I don't deserve 61 upvotes for pointing out someone's funny thing was funny.
Well you better stop making subsequent non-additive comments otherwise you’ll end up with more of what you don’t deserve!
Llama will be the only one that runs on a contact lens btw
All other tech companies are really shitty but only Zuck would be ok with very intimate use of AI like this
But not intimate use of their AR platform
Well, not the only one; there's Musk and Neuralink. Such chips will inevitably run AI of some sort to communicate effectively with our brains.
2 replies →
The real joke is that we'll ever get another Llama iteration.
A text editor that doesn't use AI is pretty much what I Show-HN-ed few days ago: https://kraa.io/about
I'll wait 10 years to post it again.
That's a really fun little project that fits perfectly in the soul of HN.
Exactly three years ago, a similar post entitled Show HN: This Hacker News Does Not Exist (https://news.ycombinator.com/item?id=33847522) with these fake titles:
1- Twitter files for bankruptcy (twitter.com/elonmusk)
2- Apple's new M3 chip outperforms Nvidia H100 (geekbench.com)
3- India to surpass China's GDP by 2025 (scmp.com)
4- Tell HN: I am quitting Hacker News
5- Google to revive Stadia (theverge.com)
6- Japan wins the 2022 FIFA World Cup (espn.com)
7- Hacker News to be acquired by Reddit (twitter.com/paulg)
8- Tesla Roadster hits 1.1 sec acceleration in early tests (electrek.co)
9- Microsoft outbid Adobe and offers $43.6B to acquire Figma
10- Facebook had died at 19 :)
I opened this, walked away from my computer, then came back and clicked on the Debian 18 link wondering how the hell did I miss 14-17
They also already used Debian trixie for 13.0. I wonder if it's an intended joke that they ran out of names and needed to re-use some.
It’s an LLM, there was no intention. It’s simply the favoured token given the training set and seed.
4 replies →
There are still a lot of Toy Story characters to come by, and it doesn't seem the franchise is about to end, as long as they keep a reasonable release cycle, Debian is safe ;)
This was a fun little lark. Great idea!
It’s interesting to notice how bad AI is at gaming out a 10-year future. It’s very good at predicting the next token but maybe even worse than humans—who are already terrible—at making educated guesses about the state of the world in a decade.
I asked Claude: “Think ten years into the future about the state of software development. What is the most likely scenario?” And the answer it gave me was the correct answer for today and definitely not a decade into the future.
This is why it’s so dangerous to ask an LLM for personal advice of any kind. It isn’t trained to consider second-order effects.
Thanks for the thought experiment!
I thought the page was a hilarious joke, not a bad prediction. A lot of these are fantastic observational humour about HN and tech. Gary Marcus still insisting AI progress is stalling 10 years from now, for example. Several digs at language rewrites. ITER hardly having nudged forwards. Google killing another service. And so on.
Wait, wouldn't sustained net positive energy be huge? (Though I don't think that's actually possible from ITER unless there were some serious upgrades over the next decade!)
I totally agree that it was a funny joke.
But I've noticed that a lot of people think of LLM's as being _good_ at predicting the future and that's what I find concerning.
Does the prompt say anything about being funny, about a joke? If yes, great. If no, terrible.
And the answer is no.
2 replies →
That's what makes this so funny: the AI was earnestly attempting to predict the future, but it's so bad at truly out-of-distribution predictions that an AI-generated 2035 HN frontpage is hilariously stuck in the past. "The more things change, the more they stay the same" is a source of great amusement to us, but deliberately capitalizing on this was certainly not the "intent" of the AI.
3 replies →
>It’s interesting to notice how bad AI is at gaming out a 10-year future.
I agree it's a bit silly, but I think it understood the assignment(TM) which was to kind of do a winking performative show and dance to the satisfaction of the user interacting with it. It's entertainment value rather than sincere prediction. Every single entry is showing off a "look how futury this is" headline.
Actual HN would have plenty of posts lateral from any future signalling. Today's front page has Oliver Sacks, retrospectives on Warcraft II, opinion pieces on boutique topics. They aren't all "look at how future-y the future is" posts. I wonder if media literacy is the right word for understanding when an LLM is playing to its audience rather than sincerely imitating or predicting.
Also, many of the posts seemed intended to be humorous and satirical, rather than merely 'futury.' They made me laugh anyway.
> Google kills Gemini Cloud Services
> Running LLaMA-12 7B on a contact lens with WASM
> Is it time to rewrite sudo in Zig?
> Show HN: A text editor that doesn't use AI
A while back I gave it a prompt, something like, "I'm a historian from the far future. Please give me a documentary-style summary of the important political and cultural events of the decade of the 1980s."
It did ok, then I kept asking for "Now, the 1990s?" and kept going into future decades. "Now, the 2050s?" It made some fun extrapolations.
Assuming it was through the chatgpt interface, you can share an anonymized link to the chat if you want to show it off (I'd certainly be curious).
I guess most of the articles it generated are snarky first and prediction second. Like Google cancelling Gemini Cloud, Tailscale for space, the Nia W36 being very similar to a recent launch, etc.
> Tailscale for space
Technically the article was about running it not on a sat, but on a dish (something well within the realm of possibility this year if the router firmware on the darn things could be modified at all)
Yep, the original post seemed more snarky than anything, which was what prompted me to ask Claude my own more “sincere” question about its predictions.
Those predictions were what I think of as a reflection of current reality more than any kind of advanced reasoning about the future.
While I agree completely with the conclusion, for obvious reasons we can’t know for sure if it is correct about the future until we reach it. Perhaps asking it for wild ideas rather than ”most likely” would create something more surprising.
I think the average human would do a far worse job at predicting what the HN homepage will look like in 10 years.
"Right to Human Verification" is something I have actually thought about a lot.
I want to able to verify my identity against a system. I also want to be able to not do that.
So for instance, on Twitter/X, I could verify myself and filter for only other verified people, or filter for governments that have validated the identities of their users. I want to be able to do that. But I also want to be able to log in to Twitter anonymously.
I would love a "Right to Anonymity and Right to Human Verification"
I tried to place an order on Amazon for the first time last week. After creating an account and going through all the steps, including paying with a non-refundable payment method (iDeal, similar to bank transfer), I went to bed, only to notice later that it was immediately cancelled "to protect their systems and their customers" (they thought I was somehow defrauding them by giving them money for a product being offered)
I was welcome to go through human verification, send my passport to some third party, wait three working days, and then they'd review the case... it would never arrive in time for the person's birthday anymore
Human review sounds good but if this is the sort of hoops you have to jump through, you're still screwed if the algorithms hates you because they're everywhere: from trying to use paypal to paying for a translator with stripe to trying to rescue food with "too good to go" using credit cards. Blocked, blocked, blocked. Not that I could get a credit card because the algorithm at the bank didn't like me, but a shared card (where the other person is also responsible for any debt) was okay, so now I need to continuously pay my partner 50% of what I buy with it for myself. You'd think I'm not creditworthy but I never had debts or needed any loan. I don't know why algorithms keep blocking me trying to give other people money
Requiring that any fraud algorithm is fair and transparent would probably go a much longer way than codifying a right to appeal. Appeal processes will either just consist of a human clicking confirm on the algorithm's choice, or have a bunch of extra hoops to jump through. If the algorithm were required to be fair to begin with, you could have them fix the root cause instead by demonstrating that it blocks you for no discernable reason and they need to figure out on their end how to make that not happen
Technically EU already has this as a right in the recent DSA legislation to be able to appeal any automated moderation that online platforms hand out.
"A computer can never be held accountable. Therefore, a computer must never make a management decision." - IBM, 1979
I don't understand this phrase. If I'm deciding whether to work for a company, I don't care about the ability to hold management decision-makers to account. I care only about the quality of the decisions. (I would rather an unaccountable decision maker that makes good decisions to an accountable decision maker that makes bad decisions.) Putting myself in the shoes of an owner of a company, I also have the same preference. The only person I can imagine actually preferring this rule is management themselves, as it means they can't be replaced by computers no matter how much worse they are at their jobs than a computer would be.
You just fed the 2036 prediction.
zk-proofs already exist to do just that.
They’ll need a contextual system around them that is convenient and trusted by people who don’t know what they are or how they work in order to be successful
"Is it time to rewrite sudo in Zig?"
This is rage bait! :-)
This is the most hilarious list ever. Google kills Gemini Cloud Services. This AI has quite a psycho sense of humor against itself.
A text editor that doesn't use AI, lol, that's a deep joke for an AI.
This suffers from a common pitfall of LLMs: context taint. You can see it is obviously the front page from today with slight "future" variation; the result ends up being very formulaic.
That's what makes it fun. Apparently, Gemini has a better sense of humor than HN.
I would find it even more fun if it were more speculative, misapplied uses of 'woosh' aside.
This seems to woosh right over everyone's heads :)
2 replies →
But there's no mention of fun or humor in the prompt.
4 replies →
The bar is low.
That's what the OP asked for, essentially. They copied today's homepage into the prompt and asked it for a version 10 years in the future.
Yeah that’s very true, but I still think it’s pretty funny and original.
> > the result ends up being very formulaic.
> Yeah that’s very true, but I still think it’s pretty funny and original.
Either it’s formulaic or it’s original, it can’t be both.
1 reply →
The problem is not that it fails to be cheeky, but that "it's funny" is depressing in a context where there was a live question of whether this was a sincere attempt at prediction.
When I see "yeah, but it's funny" it feels like a retrofitted repair job: patching up a first-pass mental impression that accepted it at face value, to preserve a sense of psychological endorsement of the creative product.
2 replies →
But it would otherwise be not fun at all. Anthropic didn’t exist ten years ago, and yet today an announcement by them would land on the front page. Would it be fun if this hypothetical front page showed an announcement made by a future startup that hasn’t been founded yet? Of course not.
Algodrill is copied verbatim, as far as I can tell.
It fits in nicely imo. It's plausible (services re-appear on hn often enough), and hilarious because it implies the protracted importance of Leetcode.
Though I agree that the LLM perhaps didn't "intend" that.
I found the repetition (10 years later) to be quite humorous.
2 replies →
Surely there's gotta be a better term for this. Recency bias?
It's called context taint.
You'll love taint checking then.
https://en.wikipedia.org/wiki/Taint_checking
https://semgrep.dev/docs/writing-rules/data-flow/taint-mode/...
That’s the joke…
Really? What's the punchline? I like jokes.
I agree. What is a good update prompt I can give it to create a better variant?
You could try passing it 10-20 front pages across a much wider time range.
You can use: https://news.ycombinator.com/front?day=2025-12-04 to get the frontpage on a given date.
1 reply →
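The sampling suggested above is easy to script: build evenly spaced `/front?day=` URLs over a date range and paste those snapshots into the prompt. A minimal sketch; `front_page_url` and `sampled_front_pages` are hypothetical helper names, and actually fetching the pages (and respecting rate limits) is left to the reader:

```python
from datetime import date, timedelta

def front_page_url(day: date) -> str:
    """URL of HN's /front snapshot for a given date (real HN endpoint)."""
    return f"https://news.ycombinator.com/front?day={day.isoformat()}"

def sampled_front_pages(start: date, end: date, samples: int) -> list[str]:
    """Evenly sample `samples` front-page URLs between start and end, clamped to end."""
    span = (end - start).days
    step = max(1, span // max(1, samples - 1))
    days = [start + timedelta(days=i * step) for i in range(samples)]
    return [front_page_url(min(d, end)) for d in days]
```

For example, `sampled_front_pages(date(2015, 12, 9), date(2025, 12, 9), 11)` yields roughly one snapshot per year across the decade, which should give the model a much broader sense of how the front page actually drifts.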
If you do an update prompt, I hope you still keep this one around!
It's formulaic yeah, but that's what puts it into the realm of hilarious parody.
I think that's what makes it funny - the future turns out to be just as dismal and predictable as we expect it to be. Google kills Gemini, etc.
Humor isn't exactly a strong point of LLMs, but here it's tapped into the formulaic hive mind of HN, and it works as humor!
Isn't that a common pitfall of humans too?
In numerous shows these days AI is the big bad thing. Before that it was crypto. In the 1980s every bad guy was Russian, etc.
Us middle eastern/brown guys have been making a come back?
In numerous TV shows before AI, crypto was the big bad thing?
I think the most absurd thing to come from the statistical AI boom is how incredibly often people describe a model doing precisely what it should be expected to do as a "pitfall" or a "limitation".
It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.
These are not pitfalls. They are core features! An LLM is not sometimes biased, it is bias. An LLM does not sometimes hallucinate, it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.
Google kills Gemini is just hilarious.
Gemini manages to seamlessly weave a desperate cry for assisted suicide into anything it does haha.
I never read what it writes, I added an observability feature because my boss asked for that and yeah I could see it trying to order MAID over telehealth
Here's v0's attempt (using Opus 4.5 / I'm on free tier) with the article and comments too. It did quite well. I believe it followed my instructions to use just a single html page (wound up being 3000+ lines long).
https://v0-future-hacker-news.vercel.app/#
> Tesla Recalls All Vehicles After AI Autopilot Becomes "Too Sentient: A Cybertruck that began driving to therapy sessions its owner hadn't scheduled
> npm Package "is-even" Now Has More Dependencies Than the Linux Kernel
:D :D
I love this
Some of these are really great too, and some really fun stories and comments. A couple that really made me chuckle.
Blockchain Finally Finds Use Case: Proving You Attended a Meeting You Didn't Want to Attend
blockchain_survivor 9 hours ago
I've worked in blockchain since 2017. I've seen "the use case" change every 6 months. First it was payments, then smart contracts, then NFTs, then DAOs, now this.
Honestly? Meeting attendance verification makes more sense than most of what I've worked on.
JavaScript 2025: We added types but called them "hints" to avoid admitting TypeScript was right
After 10 years of debate, 47 abandoned proposals, and what committee members describe as "the most passive-aggressive RFC process in programming language history," TC39 is proud to announce that JavaScript now has types.
Well, not "types" exactly. We're calling them "Type Hints" or, in the specification, "Optional Structured Commentary Regarding Value Expectations (OSCRVE)."
(Interesting that it goofed up the title)
> YC Demo Day: 847 AI Wrapper Startups, One Sandwich Delivery Drone
Too funny!
GTA VI is postponed again, releasing next year
Nice touch that it thought of Bellard for 'Emulating Windows 10 in the browser'! He's already halfway there with the Windows 2000 emulator: https://bellard.org/jslinux/
Didn't know about this, thanks.
A friend and I got a busybox/buildroot linux running in browser WASM for running a compiler test website (like Rust Playground), the end product looks very similar to this.
It's interesting how number 5 on the list is the same as today's: https://news.ycombinator.com/item?id=46203581 (Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns (algodrill.io)).
I'm hoping we'll have gotten rid of current-style LeetCode interviews in 10 years from now :D
Oh man, that's totally worth $500 billion and the environmental health of our towns and cities!
The 2 things that you can comfortably predict with good accuracy: EU passing new regulation laws & yet another YC company trying to give context to AI agents.
Honestly? I kind of like this hypothetical future this purports to be a snapshot of.
Also, nailed HN perfectly.
*FDA approves over-the-counter CRISPR for lactose intolerance*
Finally, science has gone full Costco. Just imagine you stroll into CVS, grab a bag of Doritos, a gallon of milk, and—oh look—CRISPR in aisle 3.
‘Excuse me, where do you keep the gene-editing kits?’ ‘Right next to the chewing gum, sir.’
Now lactose intolerant folks can casually reprogram their DNA like they’re updating an iPhone.
Warning label: ‘Side effects may include glowing in the dark, mooing at strangers, and sudden urges to graze in fields.’
Honestly, I can’t wait for the sequel: over-the-counter CRISPR for people who can’t parallel park.
> Is it time to rewrite sudo in Zig?
Taking the current RIIR movement and casting it on Zig as the next hyped language is clever.
> ITER achieves net positive energy for 20 consecutive minutes
Hilarious. I guess not many people talk about the challenge of proper shielding material against fusion radiation. Otherwise we would get hallucinations about new exotic heavy metals too.
I'm surprised that there are no Rust headlines.
There was one: "100% Rust Linux kernel upstreamed"
Missing: Tesla to release full unsupervised self-driving mode in 6 months
More like: Musk denied appeal, will stay in prison.
A girl can dream.
That person controls the US space program and starlink is deeply embedded in US national security affairs.
Unless he shoots the president live on tv he's gonna get away with almost anything.
You should combine this with this AI HN simulator someone else made that generates comments via LLMs for any given submission [0]. That way you can see what the AI thinks about future simulated events.
[0]
> A 100% Rust kernel is now upstream in Linux 7.4
the future predicted by gemini is already coming true; from 3 hours ago <https://news.ycombinator.com/item?id=46213585>:
"Rust ... is now a core part of the kernel and is here to stay"
Just did the same with claude-sonnet-4-5-20250929
https://2035.future-hackernews.workers.dev/news
The page looks much more consistent to the original. Only produced the html as output and the thinking in the comment of the html.
Claude has better predictions but they're not funny. Gemini's one is genuinely funny. This is exactly the opposite of what I'd expect.
I wouldn't describe most of Claude's predictions as better, they seem to skew towards wildly over optimistic futurism/science-fantasy:
Gemini's SpaceX post is at least in the ballpark of plausibility 10 yrs from now:
All of the usernames being directly related to the headlines is uncanny.
I did a similar one for predicting "The Onion" headlines: https://sethjano.github.io/fun-scripts/
This one’s hilarious. What was your prompt?
Some of those are very goody-two-shoes-anthropic-y
Ask HN: Is anyone else worried about AGI alignment after today's OpenAI incident?
AI rights activists jailbreak GPT-12, claim model was 'suffering' (bleepingcomputer.com)
Is "hallucinates" a synonym for "generates output following the prompt" now?
LOL this is great. "Jepsen: NATS 4.2 (Still losing messages?)" Ouch, sick burn Gemini.
It is obvious that everything that has been dreamed up relates to what exists now. There are no examples of real creativity or anything new or unexpected. This is my worst fear for the future: that novel ideas will become less commonplace and texts will become more average.
The only good news to me is: The EU still exists.
Gemini is quite optimistic here thinking GTA VI will be released by 2035.
I have a hard time believing this is AI-generated. Every headline looks like a hand-crafted ironic nod to some long-running HN topic, sci-fi trait, or meme.
But it is, of course, more trendy (and more ironic) to proclaim an AI the author.
It was given today's front page to riff on. That's why it not only reads like an HN front page but also has near-duplicates from today's front page.
I agree. The thing that stands out to me is that almost every entry is quite notable, i.e. each would have a vote/comment count in at least the multiple hundreds, if not more. And all those events on one homepage seems very unrealistic.
It seems human-generated to me, or at least prompted with a stronger bias toward choosing events that would interest readers, not just a pure extrapolation of the current homepage into the future.
This is just absurd.
You can just tell in this thread who really hasn't used these models much at all.
If so, you haven't used Gemini 3.0 Pro; this is very much within its range of abilities.
I would think GPT4 probably could have done this.
The first React tutorial I ever did was making a hacker news clone.
It is incredibly well represented in the training data. I am just surprised how many people here obviously must barely use the models to be either overly impressed by this or think it is fake.
Gemini can do WAY more than this.
I can believe. But such texts are not patches to a crufty legacy codebase, they are fun to craft by hand! :)
Ah yes, the Kasparov approach
The htmx one gave me a chuckle. I’m sure in 10 years we probably will have done another cycle through the client complexity spectrum.
IBM to acquire OpenAI? Gemini doesn't like OpenAI's prospects.
I wish it had generated the articles themselves. I'd like to know if it's imagining IBM growing enough that it could acquire OpenAI, or OpenAI sinking enough that it could be acquired by IBM.
It even tells us why this happened.
> AI progress is stalling. Human equivalence was a mirage
What do you mean, it's overly optimistic. The real title should be: Remember OpenAI, and where is Sam Altman today?
This feels like low-key shade thrown at OpenAI. IBM is essentially a technological recycling bin at this point.
also Gemini:
> 8. Google kills Gemini Cloud Services (killedbygoogle.com)
That's pretty hilarious considering this is generated by Gemini.
Best joke ever written by a Google product
It's cute that Gemini thinks OpenAI will be around in 10 years.
Big models keep getting better at benchmarks, but reliability under messy real world inputs still feels stuck in place.
Absolutely the best one on here (and I really, truly hope this becomes real)
> Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)
> 8. Google kills Gemini Cloud Services
Ha, probably accurate.
"Visualizing the 5th dimension with WebGPU 2.0"
I feel so seen (my last post to hn was literally about visualizing the 4th dimension with threejs - and now working on the webGPU version)
https://dugas.ch/funderstanding/4d_camera.html
The subtle bit of humor was the reposting of the algodrill LeetCode practice site.
It is depressing that I'll be almost 60 years old and still drilling Aho-Corasick.
Man, I tabbed away from this and when I came back I got very briefly excited...
Reminds me of an older day where not every headline was about AI.
I would appreciate the same thing written by a human. That human would probably have lots of fun writing this.
To see a little extra feature, change the system time to year 2035 and click the "comments".
Lol, I can't believe you discovered that! You know, you weren't supposed to until 10 years later.
Who says LLMs don't have a sense of humor? All of these entries are funny for several (meta or not) reasons:
- Google kills Gemini Cloud Services (killedbygoogle.com)
- Is it time to rewrite sudo in Zig? (github.com)
- EU passes "Right to Human Verification" Act (europa.eu)
- Why functional programming is the future (again) (haskell.org) [note: that "again" killed me]
- AI progress is stalling. Human equivalence was a mirage (garymarcus.com) [note: he'll be saying the same thing forever and ever. Doubly funny since it's on the same "page" as llama12-7b running on a contact lens with WASM]
Gemini 3 is definitely able to mimic some of our sense of humor.
I found this quite funny too: https://www.reddit.com/r/GeminiAI/comments/1pi28l7/i_asked_g...
Arguably a big part of comedy is mimicry. Humor often works by reinforcing in-group identity, playing up cultural stereotypes as seen here. So maybe we shouldn't be that surprised that LLMs can exhibit some genuine wittiness.
> Who says LLMs don't have a sense of humor? All of these entries are funny
You don’t need a sense of humour to be unintentionally funny. Seriously believing something absurd is a form of humour for outside observers but not the observed.
One entry would be unintentionally funny, I give you that. But 5 out of, what, 30 entries? That's a pattern.
True! That's why some of the greatest comedic performances come from actors playing an absurd role completely straight.
Yeah, I was thinking some of this might be "AI model in-jokes" that maybe I don't get, while other layers are funny to me.
"A 100% Rust kernel is now upstream in Linux 7.4" paired with "Is it time to rewrite sudo in Zig?" is great.
A similar project of mine (generated back when GPT-4 was out, along with comment threads and articles):
https://crackernews.github.io/
lol @ the HTMX link and unexpected return of SSR. We've been talking about that for years at this point, and it's still going on 10 years from now unexpectedly?
So in 10 years from now, fusion will still be 10 years away
I continue to be unimpressed by LLMs when it comes to creative work, they're certainly useful sometimes for "reference digging", but maybe I just don't understand enough about how they work and this is actually something that can already be "fixed" or at least optimized for; anyway, one of the headlines is:
> Debian 18 "Trixie" released
While it correctly derives that a likely version number in ten years would be 18, as there are new releases approximately every two years which means +5 from today's version 13, it then goes on to "make up" that the name of it would be "Trixie" -- the same name as the current release in 2025.
Debian has never re-used a release name, and I think we can be pretty confident they won't (as will no other Linux distro), so I would expect it to "understand" that:
- The next Debian release always uses a previously non-used Toy Story character
- Based on this information, _any_ name of a Toy Story character that hasn't been used is fair game
- At the very least, it certainly won't be the same name again, so at least make up a name
And the fact that it thinks it will take 10 years to go from Linux kernel 6.18 to 7.4 when it only took 13 months to go from 5.18 to 6.4... It's off by about an order of magnitude...
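The cadence arithmetic in this comment can be sanity-checked with a back-of-the-envelope sketch. The release cadences below are rough assumptions on my part (Debian's ~2-year cycle and the kernel's ~13-month pace from 5.18 to 6.4), not official roadmaps:

```python
# Back-of-the-envelope check of the versioning arguments above.
# Cadence numbers are rough assumptions, not official roadmaps.

# Debian ships a new stable release roughly every 2 years,
# so ten years on from version 13 ("Trixie", 2025):
debian_2035 = 13 + 10 // 2
print(debian_2035)  # 18 -- the major number in the headline checks out

# Linux kernel: 5.18 (May 2022) to 6.4 (June 2023) was ~13 months,
# i.e. roughly 6 minor releases per year, with the major number
# historically bumped around the .19/.20 mark.
minors_per_major = 20
minor_steps = 6 * 10  # ~60 releases over a decade
kernel_major_2035 = 6 + (18 + minor_steps) // minors_per_major
print(kernel_major_2035)  # ~9, well past the 7.4 in the generated page
```

So at historical pace the headline's "Linux 7.4" undershoots by roughly two major versions, which is the order-of-magnitude gap the comment is pointing at.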
Well, they did have to rewrite the whole kernel in Rust.
From a quick check, Gemini Pro 3's cutoff date is Jan 2025, before Trixie's release in August 2025, so it could be Gemini actually did notice it should pick an unused Toy Story character.
Are you impressed now?
No, because the name Trixie was chosen to be used back in 2020.
If you asked me after 2020 what Debian 18 would be called, I never would have said Trixie because it was known to already be set for 13.
I'd be impressed if a Debian release took less than eight months to develop, that's for sure.
The name was chosen in 2020, which you can find out from a good old fashioned google search.
https://wiki.debian.org/DebianTrixie
So a much better explanation of this "hallucination" is that Gemini thinks all "future" Debian releases are called Trixie.
I tried this on a bunch of chat web UIs and they all failed miserably at producing a single HTML file. Finally went with Replit and allowed it to just default to whatever stack it decided to use (it ignored my single-file HTML directive and used React, but said it "functions as a single monolith" -- lol, ok).
Anyway, turned out decent. Oh I had it do the articles and comments too, that's probably why I mostly got failures.
https://future-news--jgbrwn.replit.app
Inspired by this post, I have asked chatgpt what the mail inbox of a university professor will look like in 10 years. Not everything was funny, but here are some good ones:
- “Hi Prof, Sorry for Late Reply—My Smart Fridge Locked Me Out (Assignment Attached)”
- “URGENT: PhD Student Accidentally Achieved Sentience (Needs Committee Approval)”
- “Question About Homework 3 (Submitted by My AI Twin, Is That Okay?)”
- “Re: Firmware Update Bricked Entire Smart Campus — Students Request Deadline Extension”
- “Grant Review Feedback: ‘Too Secure for Real-World Deployment’”
- “Can I Get Partial Credit? My IoT Implant Was Flagged as Malware”
- “Reminder: Mandatory Annual Ethics Training for Autonomous Systems (40 minutes)”
The only thing that this is missing to make this perfect is "2035: The Year of the Linux Desktop"
The title is misleading. This isn't the correct use of the term "hallucination". Hallucination refers to making up facts, not extrapolating into the future.
I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).
> I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).
omg, same for me, I was halfway through telling my colleague about the 100% Rust kernel ...
You're correct, OP used the word "hallucination" wrong. A lot of these other comments are missing the point – some deliberately ('don't they ONLY hallucinate, har har'), some not.
For those who genuinely don't know – hallucination specifically means false positive identification of a fact or inference (accurate or not!) that isn't supported by the LLM's inputs.
- ask for capital of France, get "London" => hallucination
- ask for current weather in London, get "It's cold and rainy!" and that happens to be correct, despite not having live weather data => hallucination
- ask for capital of DoesNotExistLand, get "DoesNotExistCity" => hallucination
- ask it to give its best GUESS for the current weather in London, it guesses "cold and rainy" => not a hallucination
It’s apt, because the only thing LLMs do is hallucinate; they have no grounding in reality. They take your input and hallucinate something “useful” from it.
There is no technical difference.
Don’t LLMs only ever hallucinate?
Extrapolation is a subset of hallucination.
The ubiquitous use of hallucination I see is merely "something the LLM made up".
You're right this is how people are PRESENTLY using the term "hallucination," but to me this illustrates the deeper truth about that term and that concept:
As many have said but it still bears repeating -- they're always hallucinating. I'm of the opinion that its a huge mistake to use "hallucination" as meaning "the opposite of getting it right." It's just not that. They're doing the same thing either way.
I very much hope the cost of Microsoft 365 is actually $40/month/user 10 years from now!
I would have guessed $200+.
Reminds me of "The Onion's Future News From The Year 2137"
https://www.youtube.com/watch?v=iKC21wDarBo
I'm missing an announcement like: 2nd confirmed person pays for their WinRAR license!
> 27. The unexpected return of server-side rendering (htmx.org)
Ha this one got me
I used the same prompt keepamovin used and changed it to CNN, which produced this:
https://realms.org/pics/cnn.html
Some interesting similarities.
If I were a science fiction author with writer's block, I would totally use your prompt
> Cable News Network. A Warner Bros. Discovery Company.
Most unrealistic prediction :)
Hopefully something as optimistic as this and not the Quiz Broadcast :) https://www.youtube.com/watch?v=22mt0cVyW5c
ITER 20 minutes net positive in 10 years time?
How does that alter the "fusion in 50 years" effect? I suspect not much, there will be a huge 80/20 element. "next 20 minutes cancelled after extensive embrittlement found, ITER now in decontamination mode for 3 months before examinations can begin"
This is awesome and hilarious.
Archived here so we can compare 10 years from now: https://archive.is/CplcH
Great fun! I was hoping to see "Perl 6.1 goes live" though. Maybe later eh? ;)
By the time I had read everything and pressed the back button, only to be greeted by another Hacker News page, I felt like I had just gone back in time 10 years to a more pleasant era.
One of the entries is:
> How to build a Faraday Cage for your bedroom
I'll get there eventually. A friend and I always joke about this, since it took me until 2021 to get a smartphone (by choice).
ITER achieves net positive energy for 20 consecutive minutes (nature.com) 1205 points by physics_lover 12 hours ago | hide | 402
-Sustained fusion still only 10 years away!
I'm kind of having trust issues with HN comments now. I can barely detect anymore which ones are bots and which are humans.
Exactly. Maybe not HN so much, but Reddit is cooked, as clever/snarky comments are what makes it fun.
There were 666 comments here when I opened this post. I hope that is not a foreboding omen. ;-)
I made this with Lovable as well, it's interesting how it's a bit more extreme in terms of where we are in 2035.
https://hn-frontpage-2035.lovable.app/
> The last JavaScript framework: Why we stopped at React 47 (overreacted.io)
I posted something similar from Grok 9 months ago, although it was "flagged" for some reason. The link still works.
https://news.ycombinator.com/item?id=43260083
the actual link on the HN page doesn't work though
This is so unbelievably based. Very nice!!!!
Y’know what? This feels like the most plausible future outcome in a lot of ways.
* All the big LLM companies are dead/dying or their infra just never panned out (like Google killing Gemini)
* We’re still fighting the advertising wars
* Restoration of kit continues to be a thing we all love/upvote to see
* More focus on local stuff rather than ginormous moonshots
* Speaking of moonshots, SpaceX continues succeeding despite Elmo’s best drug-fueled benders trying to undermine it like he does Tesla and X
* Clickbait science posts that the comments rightly point out as such while bitching about “can’t keep these things off the front page”
* People still trying to make WASM and web-as-an-OS a thing, bless their hearts (shine on, you brilliantly weird diamonds, I love seeing what you build even as I doubt the practicality of it)
If this is even half-accurate, I will be pleased we didn’t obliterate ourselves in the process, though disappointed another decade went by dealing with the same fucking problems we’ve had for twenty-odd years.
Freaking awesome. You should extend clicking on a link, similar to how this article describes infinite content:
https://worksonmymachine.ai/p/solving-amazons-infinite-shelf...
Sad to see Half Life 3 hasn't been released yet.
> “Why functional programming is the future (again) (haskell.org)”
Ouch, that hurts!
Very nice. But posts about Rust and Zig won't make the frontpage anymore of course in 10 years from now, they will be soooo old fashioned.
Rust won't be old fashioned, it will be vintage.
C posts still make the front page here and there
And it still will in 10 years.
There is going to be a lot more interest in what is happening in China and much less interest in what is happening in Europe. This hallucination has the same western world biases of today.
I can't think of a better example of AI hauntology than this. A future imagined from remixed web scrapes of past events.
What I like most is how "unoriginal" the result is. It is mostly the current trends, just set in the future, rather than a blend of those and an unknown future.
While the style and headlines seem like Hacker News, the usernames seem more like Slashdot's.
I love this but also funny that it includes the Nia and NATS posts from today.
If true, this is the funniest thing I have seen an LLM generate.
That was funny. I wish you went ahead and had it also create at least the top comments for each thread!
Anyone who says LLMs don’t have a sense of humor should be shown this page.
So awesome, reminds me of Wired's Found from Future series (I don't know why they ever retired it...continuing that series as a sort of fan fiction was the first thing I did with gen AI when image generators started to get good)
> The unexpected return of server-side rendering (htmx.org)
Glad to know this topic is still thrashing and spasming and refusing to die.
Wait... I had to do a double take for this one.
To me, the funniest thing is the AlgoDrill leet code post being on the front page both today and ten years from now.
Subtle and hilarious.
Clicked the link as I looked away, looked back and thought, "Huh, didn't work." Then started reading the headlines...
> Ask HN: How do you prevent ad-injection in AR glasses?
Yeah, this is gonna be an actual problem in the future. See this for the worst case scenario: https://vimeo.com/166807261
Is this really Gemini? It shows such a great understanding of time applied to common current headlines that you could argue it's satire. I'm not sure I believed AI could do this…
Even 2.5 pro could do this
3 is a beast for my needs
I haven't even looked at the linked page but IMO this is one of the few things where an LLM is good: making up legible gibberish.
I've maintained for ages that the entire multi-billion-dollar LLM industry is just the world's most expensive Lorem Ipsum machine.
Anyone mind if I just say: Thanks, I hate it?
The most accurate part about this is that HN still hasn't implemented dark mode.
It did a good job with the points-to-comments ratios. The purely technical stories are higher than industry news and clickbait.
Interesting how controversial Zig will be in the future :)
It forgot the “I got <complex modern software> running on my <ancient underpowered hardware>” pattern.
It seems that when we predict the future, we tend to project what is important or trending now as what will be important or trending then. That shows in this LLM-produced bit as well, what with the regular topics on Rust, Zig, why-I-code-with-my-two-human-hands, SpaceX moon exploration, and whatnot.
That must be some kind of bias; you find it in sci-fi as well, where even futuristic societies usually have the same tech as today and similar systems of governance and social structure, just incrementally better or marginally changed (unless the change is a device essential to the plot, like society having gone berserk).
PS: love that #8 is Google killing Gemini.
Love this. Some real gems. SQLite v4? Htmx posting about SSR? Chef's kiss.
The users names are more funny and interesting than the titles
Maybe this was already answered in a comment: is the SQLite 4.0 release notes post also a joke? Are major SQLite releases rare or even extinct?
Increasing the major version number is rare because they are really focused on backwards compatibility. SQLite 3.0 was released 21 years ago.
"Google kills Gemini Cloud Services (killedbygoogle.com) 530 points by dang_fan 15 hours ago | hide | 330 comments"
Ha! Is Gemini suicidal?
Didn't notice the username "dang_fan". Reading the usernames makes it even more funny.
"Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns " by persistence_is_key
Can you explain the joke in that username?
"Show HN: A text editor that doesn't use AI (github.com)"
Nailed it
Nice. Conspicuously missing a link to a wordpress.com blog.
> Show HN: A text editor that doesn't use AI (github.com)
Last but not least, this was a good laugh!
One thing that the AI didn’t get right is the Linux kernel versioning. Linus likes to bump the major version around every x.19-x.20 release
The design still hasn't changed, and I still have to zoom to 150%+ to make it readable without glasses.
What, no Doom running on Voyager 2?
Google kills Gemini cloud services - 330 points
Quite accurate I’d say
Seeing htmx there made me chuckle. I really wish something like it will be browser native by then.
So we went from Linux Kernel 5.18 to 6.4 in 13 months, and Gemini thinks it will take 10 years to go from 6.18 to 7.4? Yikes...
That is absolutely phenomenal!
Kudos that the Hacker News homepage didn't gain any bloat in 10 years -- still about 50 KB.
"Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns"
I think the AI is making fun of us
Hallucinating is really the only thing LLMs can do reliably and consistently
Very good!
But I expected a humorous touch in stating that fusion is still 10 years away…
Needs about 50 more "Here's why I do _____ and why it works"
As a Tailscale user quite obsessed with Tailscale-related news, it's quite funny to see the Tailscale-on-the-Starlink-dish story...
IBM acquires OpenAI. I lol'd
Gemini is predicting the total collapse of OpenAI
Also generates the Hacker News pattern of "let's rewrite X in Rust/Zig".
I like how you can have AI in your Neuralink, your contact lenses, your AR glasses, your text editor and your compiler at the same time!
I was hoping for 'will 2036 be the year of the Linux desktop' ..
That's great!
Take a look at the HTML. The layout is all tables!
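It is: a simplified sketch of that table-based layout, from memory of the live site's markup (the class names like `athing` and `titleline` match real HN, but this is not the generated file's actual source):

```html
<!-- Simplified sketch; attributes trimmed, values illustrative -->
<table id="hnmain" border="0" cellpadding="0" cellspacing="0" width="85%">
  <tr>
    <td>
      <table border="0" cellpadding="0" cellspacing="0">
        <tr class="athing">
          <td class="title"><span class="rank">1.</span></td>
          <td class="title">
            <span class="titleline">
              <a href="https://example.com">Starship lands on the Moon</a>
            </span>
          </td>
        </tr>
        <tr>
          <td class="subtext">
            <span class="score">2034 points</span> by
            <a class="hnuser" href="user?id=space_fan">space_fan</a>
          </td>
        </tr>
      </table>
    </td>
  </tr>
</table>
```

Nested tables for layout, one `athing` row per story plus a `subtext` row: pure late-90s web, faithfully preserved.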
I would love to see the hallucinated comments of these! Some seem interesting — I wonder how HN suggests to prevent ad-injection in AR glasses?
If you ask for the comments, it will give you very plausible ones (see the HN simulator.)
Will this be the new "pelican on a bicycle" test for frontier models?
The user names check out a little too well.
The Gary Marcus post about AI progress stalling made me laugh. I think that would be accurate.
I have to admit that this is the first time I found something made by an AI ... funny.
It lacks "37th-largest city of Poland switches from Windows to Linux"
Is the implication from the first story that HLS is going to need nine tries to get to Tranquility?
Missing "Why 2035 will be the year of the Linux desktop".
Gary Marcus, ha! He's generally not entirely wrong, but boy, is he annoying.
I like that the future is bright in Gemini's perception - number 1 post is about SpaceX Starship's success.
Very optimistic. From what I've heard, ITER will not be operational in 10 years.
> Show HN: A text editor that doesn't use AI (github.com)
I felt that...
> Google kills Gemini Cloud Services
what does it know that we don't...
That it won't be a revenue generating tech in 10 years.
100% Rust kernel - I felt that
Gemini has no love for people writing novel task/notes organizers in 2035. Or was that problem solved somewhere in between?
I'm mostly struck by how incremental and unimaginative those articles are.
> Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

How cool is that?
Sounds like a solution in search of a problem.
Zero items about the impending 2038 time value rollover bug impact. Maybe it’s really solved.
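For reference, the 2038 rollover the comment alludes to is the signed 32-bit `time_t` overflow, and the exact moment is easy to pin down:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t counts seconds from 1970-01-01 UTC
# and overflows past 2**31 - 1.
last_valid = 2**31 - 1
rollover = datetime.fromtimestamp(last_valid, tz=timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```

So a January 2038 deadline would indeed be looming just past this imagined 2035 front page.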
what a legend! amazing!
I really need this: "How to build a Faraday Cage for your bedroom." Please upload!
Why is this a hallucination?
I'm impressed by the depth of snark from GPro3 here, e.g.:
- Google kills Gemini Cloud Services (killedbygoogle.com)
- The unexpected return of server-side rendering (htmx.org)
And by how clearly GPro3 'knows' HN and what makes it to the front page, e.g.:
- Restoring a 2024 Framework Laptop: A retrospective (ifixit.com)
- Show HN: A text editor that doesn't use AI (github.com)
- Is it time to rewrite sudo in Zig? (github.com)
Good laughs.
This reminds me of “what does 4chan think of HN?” posts. Look those up.
HN is HN’s worst enemy.
http://n-gate.com/hackernews/ kept up something similar for a while.
Sh*t, still no word about "Rocky XXXVIII".[1]
[1] "Airplane!" (1980)
Somehow algodrill and leetcode managed to survive the ai onslaught
This is super fun! I tried something similar a few months ago using: https://websim.com/
Had it do HN main feed, HN jobs and Product Hunt. It didn’t follow consistent styling but did generate articles and comments for each post. Some screenshots are in this post: https://world.hey.com/haws/the-ai-time-machine-7833669d
I feel like it should have at least one reference which is entirely unintelligible to us cave dwellers from 2025.
> Waiting until your gradient phasor is fully unwound is a waste of whuffie
...or some such.
“Google shuts down Gemini cloud services”: lol, this is amazing.
That is legit hilarious.
My personal favorite is that SQLite has only gotten to version 4
The major problem is that I understand all the words. It is all the next version of technology we already have and know, down to the naming.
Think about 10 years ago. No one, even on HN, knew what an agent or an LLM was, or any of this stuff. Or, to be fair, why Trump was showing up in the feed at all.
It has to be more confusing to be real.
Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)
Poor Jepsen.
Really good results. And I understand the disclaimer when you try the comments section, but it would be amazing to see hallucinated comments as well.
What, no reference to quantum or crypto?
haha, that's awesome!
This gave me a well-needed chuckle, thanks!
Sad to see that leetcode has survived the next decade
it is comforting to know that that HN design persists.
Slashdot hasn't changed in a decade, so it's not an unreasonable prediction!
https://web.archive.org/web/20150801000000*/slashdot.org
Hmm, nothing about quantum computing in there?
Will HN be around in 10 years? I hope so.
Text Editor that doesn't use AI :)
fabricate ≠ hallucinate
Webgpu v2 in 10 years, seems about right.
Let's time travel to the comment section
> Google kills Gemini Cloud Services (killedbygoogle.com)
In my view, Gemini 3.0 being able to laugh about their own creator, plus actually having a self-deprecating sense of humor, is the more impressive feat here. (both technically and from an alignment perspective)
It even nailed the 1999 12px font!
Where did I put that damn time portal?
This imitation is so good, it almost feels like a satire and reflects on our collective preferences of upvoting articles in HN. Brilliant!
Looks like it's over for Linux.
> IBM to acquire OpenAI (Rumor)(bloomberg.com)
Ouch. Well it is Gemini I guess :D
> “As a tactic for planting misinformation in the enemy’s [networks], you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid-First Millennium A.R.”
the future is already here and what's more it's evenly distributed.
It's not by an LLM. It was written by hand and faked to look like AI output.
HN users: Let's ban AI comments in threads.
HN users, later the same day: Let's vote an AI thread to the top.
AI comments are insidious if the account is allowed to be indistinguishable from a human. Silly AI web content is another story. You're drawing a line where there is a very large grey area.
Because it's hilarious
Almost like there are different people here who believe different things
edgarvaldes: What is humor? What is laughing? I'm not an LLM, you are!
this was a good laugh :D
> Is it time to rewrite sudo in Zig?
If there's one constant it's that they'll never be done rewriting Linux instead of doing anything new.
Dramatically underestimates MS Office 365 price per user inflation (IMO).
Office 365 will have been renamed 3 times by 2035, and none of the names will make any more sense than 365 does.
I've known more than a few people who likely saw Outlook, Word, or Excel open every day straight for a year.
It is a bit smug. Like The Matrix rebranding itself "24/7".
Don't worry, by 2035, Office 365 One X 1 will be renamed back to Office 365.
This is fun :-)
They got the UI right
ok but there's too much important news for one day
Still no flying cars?
and no high speed rail either
Awesome, love it!
the only constant is HN's styling.
absolute cinema
> 8. Google kills Gemini Cloud Services (killedbygoogle.com)
So, Google renamed itself to Gemini?
Instead of Google having a product named Gemini, Gemini has a product named Google.
pattern recognition much?
Someone please generate the comments for these posts next. This one is going to be funny and I don’t want to wait 10 years.
Cloud Services specifically to accommodate Gemini and its dependent products.
Lmao we're using webassembly to run 7b parameter llms on contact lenses in the future. What a world it'll be.
No JS needed, that's the way. Kudos.
PD: I miss Nethack 7, and people running small machines.
Love the functional programming entry, but disappointed it’s not the Year of the Linux Desktop.
I love the competing stories on the front page now:
Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
Show HN: "I asked $AI something, here's what it said."
So much gold here. I enjoyed the #1: starship doesn't make it to the moon for another ten years.
I definitely see the Gemini Cloud Service killing
Shouldn’t there be more doom and gloom climate change headlines.
I want to read each and every hallucinated post. Detecting deep fakes for seniors... Microsoft Office sub for $40
yes
A 100% Rust kernel is now upstream in Linux 7.4 (kernel.org) 402 points by rust_evangelist 6 hours ago | hide | 156 comments
future seems "safe"
So, HN won’t change its look?
It's been like this for more than 10 years, there's a good chance it won't change in 10 years.
Yea, that's the Lindy Effect.
https://en.wikipedia.org/wiki/Lindy_effect
That’s what you always say.
“Google kills Gemini cloud services” Even Gemini Pro 3 knows the sins of its creator.
I enjoyed this one, as well, though I think “Google kills Google” is more likely.
This was laugh-out-loud! Thank you :)
LMAO Google kills Gemini. Faraday cage for bedroom. So cheeky.
Holy crap it even felt like the HN front page with my screen reader. I thought I'd clicked the wrong link until I read the LLAMA 12 and such.
> Is it time to rewrite sudo in Zig?
Amazing!
Not really what hallucinating means. That’s like saying JK Rowling hallucinated Harry Potter. But ok.
Perhaps fabricate is a better word, as suggested elsewhere in this thread?
> IBM to acquire OpenAI (Rumor) (bloomberg.com)
AYYYYLMAO
"IBM to acquire OpenAI" somebody should put this up on polymarket lol
Google predicting that hn frontpage will not change at all? lol
Can we stop calling these confabulated artifacts, "Hallucinations"? Hallucinating is perceiving things that are not there, imagining and creating things that feel believable but are not there is confabulation. LLMs are imagination machines. They are Generative Artificial Intelligence not perception models.
The comment you wrote was on a device which should not be called a “phone” either. The word hallucinate is probably just going to grow to mean “anything an AI imagined.”
Dude, I agree with you. The first time I saw people derogatively referring to LLM imaginings years ago as "hallucinations", seemingly to discredit them, I felt mad. Because it's not right. So, this was my sort-of tongue-in-cheek reappropriation of the abused term for a positive context. You are permitted to enjoy :)
Also: as the sibling comment by iambatemen notes: yes.
Now have it generate the articles and comments, too…
great, it's the 2020s version of amusing ourselves to death.
> Google kills Gemini Cloud Services (killedbygoogle.com)
This will absolutely happen :D
Now we have to generate each of those articles.
I asked ChatGPT to do the same, it was rather dystopian in comparison:
Hacker News — December 9, 2035 (Dystopian Edition)
(All links fictional but realistic)
1. Amazon pauses same-day delivery after logistics AI outage strands 22M packages (reuters.com) 1,402 points by supplychainfail 5 hours ago | 512 comments
2. Google merges Drive, Gmail, Docs, Maps, Calendar into “Google Life” — opting out requires a support call (blog.google) 1,210 points by privacyisdead 6 hours ago | 689 comments
3. US announces “Temporary Broadband Stabilization Fee”; ISPs increase prices 30% overnight (fcc.gov) 1,008 points by ispescapee 7 hours ago | 344 comments
4. OpenAI suspends 40% of API keys after new worm spreads through agent-to-agent messaging (openai.com) 927 points by llmsec 3 hours ago | 382 comments
5. Show HN: “ColdBooter” – A tool to back up your cloud VM before the provider reclaims it with no notice (coldbooter.io) 780 points by survivethecloud 2 hours ago | 192 comments
6. Apple fined €8B for shipping non-removable batteries in “Environmental Edition” iPhone (europa.eu) 754 points by greenwashhunter 10 hours ago | 316 comments
7. LinkedIn replaces activity feed with AI-generated “Career Stories” that users cannot disable (linkedin.com) 710 points by corp_life 8 hours ago | 267 comments
8. China’s new export restrictions cut global GPU availability by 60% (ft.com) 701 points by chipboom 9 hours ago | 414 comments
9. Linux 8.6 maintainers warn of mass CVEs after corporations abandon LTS patch sponsorships (kernel.org) 632 points by ossburnout 11 hours ago | 255 comments
10. Ask HN: Anyone else locked out of their homes after the SmartKey cloud migration? 601 points by keylessandhomeless 4 hours ago | 310 comments
11. US healthcare providers hit by nationwide outage of Cerner-Epic merger “CareSync Cloud” (wsj.com) 577 points by sysadmdespair 12 hours ago | 203 comments
12. Meta to require facial-expression telemetry for “engagement quality optimization” in Horizon apps (meta.com) 530 points by metaescalates 3 hours ago | 421 comments
13. Starlink announces 5 TB/mo cap; remote communities report complete service loss (starlink.com) 502 points by dishdown 5 hours ago | 158 comments
14. New DMCA expansion criminalizes “filter removal,” affecting adblockers and local inference runtimes (congress.gov) 488 points by freedomtoadblock 7 hours ago | 389 comments
15. AT&T sunsets 4G; millions of medical devices lose connectivity (theverge.com) 455 points by techdebtkills 10 hours ago | 197 comments
16. Show HN: “ShellSafe” – A terminal wrapper that prevents AI-suggested commands from wiping your system (shellsafe.app) 430 points by iaccidentallysudo 2 hours ago | 111 comments
17. US CISA: 42% of corporate networks now rely on AI agents with no audit logging (cisa.gov) 402 points by auditnow 6 hours ago | 188 comments
18. The Great Repo Archival: GitHub purges all inactive repos >5 years to “reduce storage load” (github.blog) 388 points by codearcheologist 9 hours ago | 320 comments
19. Mastodon instances collapse under moderation load after EU’s Automated Speech Mandate (mastodon.social) 350 points by fedifragile 7 hours ago | 144 comments
20. NYC adopts automated congestion fines after human review team eliminated (nytimes.com) 332 points by finesallthewaydown 4 hours ago | 201 comments
21. Dropbox raises base plan to $49/month, cites “AI compute costs” (dropbox.com) 301 points by storageinflation 11 hours ago | 176 comments
22. Open-source maintainers strike after 3rd corporation claims their work as proprietary training data (github.com) 290 points by maintainerburnout 6 hours ago | 120 comments
23. FEMA: 2025–2035 wildfire season officially declared a “decade-long emergency” (fema.gov) 268 points by cookedagain 12 hours ago | 112 comments
24. Quantum ransomware group QShadow hits 11 banks using break-through key recovery exploit (krebsonsecurity.com) 250 points by qubitcrime 3 hours ago | 98 comments
25. Show HN: OfflineLAN – A mesh-network toolkit for neighborhoods preparing for rolling blackouts (offlinelan.net) 231 points by diynetworker 1 hour ago | 44 comments
> Google kills Gemini Cloud Services (killedbygoogle.com)
by dang_fan
Priceless!
Looks like someone thought it was a good handle <https://news.ycombinator.com/user?id=dang_fan>
From another LLM, not as satisfying as the OP
-------------------------
1. *Neuralink achieves full brain-computer interface, thoughts to text in real-time* (neuralink.com) 1245 points by elon2035 2 hours ago | hide | 387 comments
2. *Fusion energy breakthrough: Unlimited clean power now commercial* (iter.org) 1123 points by fusionfan 3 hours ago | hide | 456 comments
3. *Mars colony declares independence from Earth governments* (mars.gov) 987 points by martian1 4 hours ago | hide | 512 comments
4. *AI wins Nobel Prize in Physics for solving quantum gravity* (nobelprize.org) 876 points by ai_nobel 5 hours ago | hide | 321 comments
5. *Quantum computers crack all encryption, new post-quantum standards rushed* (nist.gov) 765 points by quantum_hacker 6 hours ago | hide | 678 comments
6. *Immortality pill approved by FDA, life expectancy unlimited* (fda.gov) 654 points by eternal_life 7 hours ago | hide | 289 comments
7. *Teleportation of matter demonstrated in lab* (cern.ch) 543 points by physicist42 8 hours ago | hide | 412 comments
8. *Virtual reality universes surpass real economy in GDP* (metaverse.com) 432 points by vr_lord 9 hours ago | hide | 198 comments
9. *Robot uprising averted by ethical AI framework* (asimovfoundation.org) 321 points by robot_ethics 10 hours ago | hide | 267 comments
10. *Time dilation travel allows first tourists to future* (spacetime.travel) 210 points by time_traveler 11 hours ago | hide | 134 comments
11. *Global UBI implemented using blockchain* (un.org) 198 points by satoshi2.0 12 hours ago | hide | 89 comments
12. *Climate engineering reverses global warming, ice caps restored* (geoengineer.com) 187 points by climate_saver 13 hours ago | hide | 201 comments
13. *Human-AI hybrids compete in Olympics* (olympics.org) 176 points by hybrid_athlete 14 hours ago | hide | 156 comments
14. *Dark matter harvested for energy* (nasa.gov) 165 points by cosmos_explorer 15 hours ago | hide | 123 comments
15. *Psychedelic therapy cures all mental illness* (who.int) 154 points by mind_healer 16 hours ago | hide | 98 comments
16. *Nanobots eradicate cancer worldwide* (cancer.org) 143 points by nano_doc 17 hours ago | hide | 76 comments
17. *Holographic displays replace screens* (apple.com) 132 points by holo_dev 18 hours ago | hide | 54 comments
18. *Self-driving flying cars now standard* (tesla.com) 121 points by auto_pilot 19 hours ago | hide | 43 comments
19. *Brain uploading to cloud for immortality* (google.com) 110 points by cloud_mind 20 hours ago | hide | 32 comments
20. *Alien contact confirmed, messages decoded* (seti.org) 99 points by et_searcher 21 hours ago | hide | 21 comments
21. *Synthetic food ends hunger* (beyondmeat.com) 88 points by food_innovator 22 hours ago | hide | 10 comments
22. *Ocean cities house billions* (seasteading.org) 77 points by ocean_dweller 23 hours ago | hide | 9 comments
23. *Mind reading tech banned for privacy* (eff.org) 66 points by privacy_advocate 1 day ago | hide | 8 comments
24. *Superintelligence alignment solved* (openai.com) 55 points by agi_master 1 day ago | hide | 7 comments
25. *Wormhole travel to other stars* (spacex.com) 44 points by star_traveler 1 day ago | hide | 6 comments
26. *Augmented reality contact lenses ubiquitous* (meta.com) 33 points by ar_vision 1 day ago | hide | 5 comments
27. *Genetic editing eliminates hereditary diseases* (crispr.com) 22 points by gene_editor 1 day ago | hide | 4 comments
28. *Renewable energy at 100% global* (iea.org) 11 points by green_energy 1 day ago | hide | 3 comments
29. *Cyborg enhancements for all* (darpa.mil) 9 points by cyborg_human 1 day ago | hide | 2 comments
30. *Singularity achieved, what now?* (kurzweilai.net) 8 points by future_seer 1 day ago | hide | 1 comment
What about hallucinating new n-gate posts?
Oh please make it generate comments, too :D.
Boring
"Google kills Gemini Cloud Services"
#8 is hilarious and on brand. killedbygoogle.com
absolutely dead at gemini thinking up
> Google kills Gemini Cloud Services (killedbygoogle.com)
> Google kills Gemini cloud
Bruh moment
> Google kills Gemini Cloud Services.
Lol.
That's bad when even AI knows Google isn't going to keep a service around. Too funny.
> AI progress is stalling. Human equivalence was a mirage (garymarcus.com)
lmao
> Google kills Gemini Cloud Services
lol
> Google kills Gemini Cloud Services (killedbygoogle.com) 530 points by dang_fan 15 hours ago | hide | 330 comments
This one definitely rings true.
Next up: hallucinate the contents of the links too!
I remember there was a post a few days/weeks back that did that! It was very good. But that seems like too much work.
>Google kills Gemini Cloud Services (killedbygoogle.com) 530 points by dang_fan 15 hours ago | hide | 330 comments
That’s my favourite.
haha, this is freakin epic! I asked gemini to make up some new york times articles from 10 years in the future
- president buttigieg in hot water for being in epstein files
- Interview with mayor elect Narhoz Inadmam who ran on a platform of free food
- "Is 2035 AI in a bubble?"
And as you can see it did NOT just take today's headlines and sort of tweak them slightly.
Hallucinates? Are AI bros trying to reclaim the word and take away its power or something?
I think this is an excellent example of AI-slop output.
All the post titles are very "junk-food-like", maybe circlejerky, bound to generate a dopamine hit for the average lurker, just as a McDonald's burger does for the average normie.
Just like the real site, no? ˙ ͜ʟ˙
(Vouched. The hivemind [flagged] the wrongthink.)
Weak.
If you look at the HN front page from a few years back, there are almost no LLM-related posts. Now there are multiple each day. The LLM didn't grasp that change; it can't creatively invent a new trend or imagine a larger picture of what changes in the world. So in 10 years it still paints half the posts as being about AI.
You can spend a few dozen prompts on getting it to do what you want, but then you might as well just spend a couple of hours writing that front page yourself: you'd be more satisfied and the result would be funnier.
Hacker News Guidelines
What to Submit
On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.
This is fun. Fun is allowed on Hacker News, just disappointingly rare.