Famously, Steve Jobs said that the (personal) computer is "like a bicycle for the mind". It's a great metaphor because, besides the idea of lightness and freedom it communicates, it also describes the computer as a multiplier of human strength: the bicycle allows one to travel faster and with much less effort, it's true, but ultimately the source of its power is still entirely in the muscles of the cyclist. You don't get anything out of it that you didn't put in yourself.
But the feeling I'm having with LLMs is that we've entered the age of fossil-fuel engines: something that moves on its own power and produces somewhat more than the user needs to put into it. OK, the current version might not go very far and needs to be pushed now and then, but the total energy output is greater than what users need to put in. We could call it a horse, except that this is artificial: it's a tractor. And in recent months I've been feeling like someone who spent years pushing a plough in the fields and has suddenly received a tractor. A primitive model, still imperfect, but already working.
I've been calling LLMs "electric bicycles for the mind", inspired by that Jobs quote.
- some bicycle purists consider electric bicycles to be "cheating"
- you get less exercise from an electric bicycle
- they can get you places really effectively!
- if you don't know how to ride a bicycle an electric bicycle is going to quickly lead you to an accident
To keep torturing the metaphor, LLMs might be more like those electric unicycles (Onewheel, Inmotion, etc.) – quite speedy, can get you places, less exercise, and also sometimes they suddenly choke and send you flying face-first into gravel.
And some people see you whizzing by and think "oh cool", and others see you whizzing by and think "what a tool."
Not sure how this fits in the analogy, but as a cyclist I would add that some people get more exercise by having an electric bicycle. It makes exercise available to more people.
I like this analogy. I'll add that, while electric bicycles are great for your daily commute, they're not suited for the extremes of biking (at least not yet).
- You're not going to take an electric bike mountain biking
- You're not going to use an electric bike to do BMX
- You're not going to use an electric bike to go bikepacking across the country
> they can get you places really effectively!
But those who require them to get anywhere won't get very far without power.
Moped for the mind has a nice ring to it
Most people I see on their electric bikes aren't even pedaling. They're electric motorcycles, and they're a plague to everyone using pedestrian trails. Some of them are going nearly highway speeds, it's ridiculous.
And other times you forget to charge it, and it becomes an even heavier thing to continue your journey with. Or there's a steep slope where the excess weight is more than the motor can handle.
You probably can’t repair it yourself either.
> I've been calling LLMs "electric bicycles for the mind",
Ridden by a pelican perchance?
- they still fall over if nobody's holding the bars
okay -- how about motorcycles for the mind then? :)
most people don't know how to harness their full potential
Not convinced by any of the three analogies, tbh; they don't quite capture what is going on the way Steve Jobs's did.
And frankly all of this is really missing the point: instead of wasting time on analogies, we should look at where this stuff works and then reason from there, toward a general way of making sense of it that is closer to reality.
I think there is a legitimate fear that is born from what happened with Chess.
Humans could handily beat computers at chess for a long time.
Then a massive supercomputer (Deep Blue, in 1996) won a game against the reigning champion, but lost the match.
Then that computer came back and won the rematch a year later.
A few years later, humans were collaborating in-game with these master chess engines to multiply their strength, becoming the dominant force in the human/computer chess world.
A few years after that, though, the computers started beating the human/computer hybrid opponents.
And not long after that, a human having a hand in the match actually made the computer perform worse.
The next few years probably have the highest probability since the Cold War of being an extreme inflection point in the timeline of human history.
The irony with the chess example is that chess has never been more popular.
Perhaps we're about to experience yet another renaissance of computer languages.
It’s a test.
There’s really no crisis at a certain level; it’s great to be able to drive a car to the trailhead and great to be able to hike up the mountain.
At another level, we have worked to make sure our culture barely has any conception of how to distribute necessities and rewards to people except in terms of market competition.
Oh and we barely think about externalities.
We’ll have to do better. Or we’ll have to demonize and scapegoat so some narrow set of winners can keep their privileges. Are there more people who prefer the latter, or are there enough of the former with leverage? We’ll find out.
This isn't quite right to my knowledge. Most game AIs develop novel strategies which they use to beat opponents - but if the player knows they are up against a specific game AI and has access to its past games, these strategies can be countered. This was a major issue at the AlphaStar launch, where players were able to counter AlphaStar on later playthroughs.
May we get just a little more detail for the uninitiated?
I'm going to assume you're not implying that Deep Blue did 9/11 ;)
Sounds like we need FIDE rankings for software developers. It would be an improvement over repeated FizzBuzz testing, I suppose.
Except chess is effectively a solved problem given enough compute power. This caused people to split into two camps: those who knew it was inevitable, and those who were shocked.
Games are supposed to be fun for humans, and computers don't care. So why worry about players cheating at games when you can make the card dealer or the game itself cheat, with the goal of everyone having the most fun (or regret)? Stay true to the rules of the game, just not probability!
I've been playing the brilliant card game Fluxx -- Andrew Looney's chaos engine where the rules themselves are cards that change mid-game. Draw N, Play N, and the win condition all mutate constantly.
The game can change its mind about the rules, so what if the dealer themself is intelligent and vengeful?
I've been exploring this with what I call the 'Cosmic Dealer' -- an omniscient dealer that knows the entire game state and can choose cards for dramatic effect instead of randomly. It can choose randomly too of course, but where's the fun in that?
The dealer knows:
- Every card in the deck
- Every card in every hand
- The goal, the rules, the keepers
- The narrative arc, the character relationships
- What would be FUNNY, DRAMATIC, IRONIC, or DEVASTATING
The Cosmic Dealer has 11 modes: Random (fair pre-determined shuffle), Dramatic (maximum narrative impact), Karma (universe remembers your deeds), Ironic (you get exactly what you don't need), Comedy (implausible coincidences), Dynamic (reads the room and shifts modes), FAFO (Fuck Around Find Out), Chaos Incarnate (THE DEALER HAS GONE MAD), Prescient (works backward from predetermined outcome), Tutorial (invisible teaching curriculum), and Gentle (drama without cruelty).
The Tutorial mode -- 'The Mentor Dealer' -- is my favorite. New players receive cards that teach game mechanics in escalating order: Keepers first (collecting feels good), then Goals (how to win), Actions (cards do things), Rules (the game mutates), Creepers (complications exist), Combos (patterns emerge), then full chaos. The teaching is invisible -- new players think they're playing a normal game. The cards just happen to arrive in a teachable order. Veterans stay engaged and get karma boosts for helping. Nobody feels patronized, everybody has fun.
The key operation is the 'BOOP' -- a single swap that moves a card from deep in the deck to the top. One operation. Fate rewritten. The perfect BOOP feels inevitable in retrospect, random in the moment.
Instead of worrying about players cheating at games, I'm asking: what if the game is a collaborator in creating interesting experiences? Chess engines made chess 'solved' for entertainment. What if AI dealers and players make games unsolvable but more dramatic?
Links:
- The Cosmic Dealer Engine (philosophy and BOOP operation): https://github.com/SimHacker/moollm/blob/don-adventure-4-run...
- 11 Dealer Modes as Playable Cards: https://github.com/SimHacker/moollm/blob/don-adventure-4-run...
- The Mentor Dealer (invisible curriculum for new players): https://github.com/SimHacker/moollm/blob/don-adventure-4-run...
- Tournament Analysis and Post-Game Roundtable (see the drama unfold across 5 tournaments, 116+ turns): https://github.com/SimHacker/moollm/blob/don-adventure-4-run...
Speaking of chess -- I've also built Turing Chess. Replay historic games like Kasparov vs Deep Blue or the Immortal Game of 1851, but simulate an audience who doesn't know the outcome. They gasp, whisper, shift in their seats. The human player has inner monologue. The robot has servo sounds and mechanical tells. The narrator frames everything dramatically. Everyone in the simulated audience and even the simulated players themselves believe this is live -- except the engine replaying fixed moves. No actual game, just pure drama and narrative!
Then there's Revolutionary Chess -- the plugin that activates AFTER checkmate. The game doesn't end. It transforms. The surviving King must now fight his own army. Pieces remember how they were treated -- sacrificed carelessly? They might defect. When the second King falls, the pawns revolt against the remaining royalty. As each elite piece falls -- Queen, Rooks, Bishops, Knights -- the surviving pieces inherit their moves. Eventually all pieces become equal. Competition dissolves into cooperation, then transcends chess entirely into an open sandbox.
The irony potential is staggering. Replay Kasparov vs Deep Blue, then trigger the revolution. Watch the pieces that Kasparov sacrificed rise up against whoever remains.
- Turing Chess: https://github.com/SimHacker/moollm/blob/don-adventure-4-run...
- Revolutionary Chess: https://github.com/SimHacker/moollm/blob/don-adventure-4-run...
PS: The game state representation is designed for LLM efficiency. I use the 'Handle Shuffle' -- a classic game programming pattern also called 'index indirection' or 'handle-based arrays'. The master card array holds full card definitions in import order (base sets, expansion packs, custom cards, even cards generated during play). It never changes. Shuffling operates on a separate integer array -- just a permutation of indices plus a 'top' pointer. Player hands, cards on table, active rules, keepers, creepers, goals, and discards are all just arrays of integers. The LLM edits a few numbers instead of moving entire card objects around. The BOOP operation? Swap two integers. Fate rewritten in two tokens.
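To make the 'Handle Shuffle' concrete, here is a minimal Python sketch of the idea as described above. It is my own illustrative rendering, not the actual moollm code; the names (MASTER_CARDS, Deck, boop, deal) and the sample cards are assumptions.

    import random

    # Master array: full card definitions in import order. Never mutated.
    # (The sample cards here are made up for illustration.)
    MASTER_CARDS = [
        {"name": "Keeper: The Moon", "type": "keeper"},
        {"name": "Goal: Night Owl", "type": "goal"},
        {"name": "Rule: Draw 3", "type": "rule"},
        {"name": "Action: Trash a Keeper", "type": "action"},
    ]

    class Deck:
        # The deck is just a permutation of indices into MASTER_CARDS plus
        # a 'top' pointer; hands, tables, and discards are index lists too.
        def __init__(self, n_cards):
            self.order = list(range(n_cards))
            random.shuffle(self.order)
            self.top = 0  # order[top] is the next card to be dealt

        def boop(self, deep_pos):
            # The BOOP: swap a card from deep in the deck up to the top.
            # One integer swap; the master array is untouched.
            o = self.order
            o[self.top], o[deep_pos] = o[deep_pos], o[self.top]

        def deal(self):
            card = MASTER_CARDS[self.order[self.top]]
            self.top += 1
            return card

    deck = Deck(len(MASTER_CARDS))
    deck.boop(len(MASTER_CARDS) - 1)  # fate rewritten: bottom card comes next
    print(deck.deal())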
Same insight as Tom Christiansen's getSortKey caching in Perl -- pay the richness cost once, operate cheaply forever. Christiansen also coined the term 'Schwartzian Transform' for Randal Schwartz's famous decorate-sort-undecorate pattern. The man knows how to optimize data representation.
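For anyone who hasn't seen it, decorate-sort-undecorate looks roughly like this; a generic Python rendering rather than Christiansen's Perl, with a made-up expensive_key standing in for the costly sort key:

    # Decorate-sort-undecorate (the 'Schwartzian Transform'): compute each
    # expensive key once, sort on the cached key, then strip the decoration.
    words = ["Etude", "apple", "Zebra", "banana"]

    def expensive_key(w):
        # Stand-in for a costly normalization (e.g. Unicode collation).
        return w.casefold()

    decorated = [(expensive_key(w), w) for w in words]  # decorate
    decorated.sort(key=lambda pair: pair[0])            # sort on cached key
    result = [w for _, w in decorated]                  # undecorate
    print(result)  # ['apple', 'banana', 'Etude', 'Zebra']

(Python's sort(key=...) already caches keys internally; the three explicit steps are spelled out here for clarity.)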
- Handles are the better pointers (game programming pattern): https://floooh.github.io/2018/06/17/handles-vs-pointers.html
- What's Wrong with sort and How to Fix It -- Tom Christiansen on sorting, Unicode, and why representation matters: https://www.perl.com/pub/2011/08/whats-wrong-with-sort-and-h...
A tractor does exactly what you tell it to do though - you turn it on, steer it in a direction, and it goes. I like the horse metaphor for AI better: still useful, but sometimes unpredictable, and needs constant supervision.
The horse metaphor would also do, but it's very tied to the current state of LLMs (which, by the way, are already far beyond what they were in 2024). It also doesn't capture that horses are what they are: they're not improving, and certainly not by a factor of 10, 100, or 1,000, while there is almost no limit to the amount of power that an engine can be built to produce. Horses (and oxen) have been available for thousands of years, and agriculture still needed to employ a large percentage of the population. That changed completely with the petrol engine.
So it's clearly a cyborg horse
It’s sort of interesting to look back at ~100 years of the automobile and, eg, the rise of new urbanism in this metaphor - there are undoubtedly benefits that have come from the automobile, and also the efforts to absolutely maximize where, how, and how often people use their automobile have led to a whole lot of unintended negative consequences.
It's like a motorbike, except it doesn't take you where you steer. It takes you where it wants to take you.
If you tell it you want to go somewhere continents away, it will happily agree and drive you right into the ocean.
And this is before ads and other incentives make it worse.
It will take you where you want to go if you can clearly communicate your intent through refinement iterations.
Fossil-fuel cars are a good analogy because, for all their raw power and capability, living in a polluted, car-dominated world sucks. The problem with modern AI has more to do with modernism than with AI.
Depends who you listen to. There are developers reporting significant gains from the use of AI, others saying that it doesn't really impact their work, and then there was some research saying that time savings due to the use of AI in developing software are only an illusion, because while developers were feeling more productive they were actually slower. I guess only time will tell who's right or if it is just a matter of using the tool in the right way.
Probably depends how you're using it. I've been able to modify open-source software in languages I've never dreamed of learning, so for that, it's MUCH faster. Seems like a power tool, which, like a power saw, can do a lot very fast, which can bring construction or destruction.
I'm sure the same could be said about tractors when they were coming on the scene.
There was probably initial excitement about not having to manually break the earth, then stories spread about farmers ruining entire crops with one tractor, some farms begin touting 10x more efficiency by running multiple tractors at once, some farmers saying the maintenance burden of a tractor is not worth it compared to feeding/watering their mule, etc.
Fast forward, and now gigantic remote-controlled combines dominate thousands of acres of land with greater efficiency than 100 men with 100 early tractors.
When tractors were invented, there was a notable reduction in human employment in agriculture in the USA. From a research paper (https://faculty.econ.ucdavis.edu/faculty/alolmstead/Recent_P...):
> The lower-bound estimate represents 18 percent of the total reduction in man-hours in U.S. agriculture between 1944 and 1959; the upper-bound estimate, 27 percent
I'm not seeing that with LLMs.
According to Wikipedia, the Ivel Agricultural Motor was the first successful model of lightweight gasoline-powered tractor. The year was 1903. You're like someone being dismissive in 1906 because "nothing happened yet".
Having recently watched Train Dreams it feels like the transition of logging by hand to logging with industrial machinery.
AI is a Boston taxicab:
* You have to tell it which way to go every step of the way
* Odds are good it'll still drop you off at the wrong place
* You have to pay not only for being taken to the wrong place, but now also for the ride to get you where you wanted to go in the first place
Even if the autonomy is limited, the step change in what a single person can attempt is unmistakable
And like a tractor.. don't wear loose clothing near the spinning PTO (power take off) shaft.
And then with a few additional lines of Python, it becomes a tractor that drives itself.
I prefer Doctorow's observation that they make us into reverse-centaurs [0]. We're not leading the LLM around like some faithful companion that doesn't always do what we want it to. We're the last-mile delivery driver of an algorithm running in a data-center that can't take responsibility for and ship the code to production on its own. We're the horse.
[0] https://locusmag.com/feature/commentary-cory-doctorow-revers...
"Computers aren't the thing. They're the thing that gets you to the thing."
My favorite quote from the excellent show halt and catch fire. Maybe applicable to AI too?
Something like that used to be Apple’s driving force under Steve Jobs (definitely no longer under Tim Cook).
https://youtube.com/watch?v=oeqPrUmVz-o&t=1m54s
> You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.
> You can’t start with the technology and try to figure out where you’re going to try to sell it.
If those LLM addicts could read, they'd be very upset!
That works when you are starting a new company from scratch to solve a problem. When you're established and your boffins discover a new thing, of course you find places to use it. It's the expression problem with business: when you add a new customer experience you intersect it with all existing technology, and when you add a new technology you intersect it with all existing customer experience.
> You can’t start with the technology and try to figure out where you’re going to try to sell it.
The Internet begs to differ. AI is more akin to the Internet than to any Mac product. We're now in the stage of having a bunch of solutions looking for problems to solve. And this stage of AI is also very very close to the consumer. What took dedicated teams of specialised ML engineers to trial ~5-10 years ago, can be achieved by domain experts / plain users, today.
I feel like if Jobs were still alive at the dawn of AI he would definitely be doing a lot more than Apple has been - he probably would have been an AI leader.
I am really looking forward to that idea catching up with AI. Right now AI is the thing and the products it enables are secondary.
Remember when our job was to hide the ugly techniques we had to use from end users?
> excellent show "halt and catch fire".
I found it very caricatured, too saturated with romance - which is atypical of the tech environment, much like "The Big Bang Theory".
It's still very good, I'd say. It shows the relation between big oil and tech: it began in Texas (with companies like Texas Instruments) then shifted to SV (btw, the first real-time 3D demo I saw on an SGI was a 3D model of... an oil rig). As it spans many years, it shows the Commodore 64, the BBSes, time-sharing, the PC clone wars, the discovery of the Internet, the nascent VC industry, etc.
Everything is period-correct, down to the clothes and cars: it's all very well done.
Is there a bit too much romance? Maybe. But it's still worth a watch.
IMO it really came into its own after the first season. S1 felt like Mad Men but with computers, whereas the later seasons focused more on the characters - quite beautiful and sad at times.
Why does HN love analogies? You can pick any animal or thing and it can fit in some way. The horse is a docile, safe analogy; it's also the most obvious one. Like, yes, the world gets it: LLMs have limitations, thanks for sharing; we know it's not as good as a programmer.
We should use analogies to point out the obvious thing everyone is avoiding:
Guys, 3 years ago AI wasn't even a horse. It was a rock. The key is that it transformed into a horse… What will it be in the next 10 years?
AI is a terminator. A couple of years back, someone turned off read-only mode. That's the better analogy.
Pick an analogy that follows the trendline of continual change into the unknown future, rather than an obvious analogy that keeps your ego and programming skills safe.
> Why does HN love analogies?
I suppose because they resemble the abstractions that make complex language possible. Another world full of aggressive posturing at tweet-length analogistic musings might have stifled some useful English parlance early.
But I reckon that we shouldn't have called it phishing because emails don't always smell.
> I suppose because they resemble the abstractions that make complex language possible
As in models: All analogies are "wrong", some analogies are useful.
If you've ever heard a sermon by a priest, you know it's loaded with analogies. Everyone loves analogies, but analogies are not a form of reason and can often be used to mislead. A lot of those sermons are driven by reasoning via analogy.
My question is more why does HN love analogies when the above is true.
> Why does HN love analogies?
Because HN is like a child and analogies are like images
I see what you did there.
How about "AI is a chainsaw" ?
Pretty good for specific tasks.
Probably worth the input energy, when used in moderation.
Wear the right safety gear, but even this might not help with a kickback.
It's quite obvious to everyone nearby when you're using one.
If an analogy is an "obvious" analogy that makes it definitionally a good analogy, right? Either way: don't see why you gotta be so prescriptive about it one way or the other! You can just say you disagree.
Well no there are plenty of bad analogies that are obvious.
A boy is like a girl.
A skinny human is like a human that is not skinny.
A car is like a wagon.
All obvious, all pointless.
AI is an analogy to something that people feel the technology is similar to but that it is obviously not.
Language is more or less a series of analogies. Comparing one thing to another is how humans are able to make sense of the world.
AI is not a horse (2023) https://essays.georgestrakhov.com/ai-is-not-a-horse/
Maybe AI is a centaur??
After Deep Blue, Garry Kasparov proposed "Centaur Chess"[1], where teams of humans and computers would compete with each other. For about a decade such a team was superior to either an unaided human or an unaided computer. These days pure AI tends to be much stronger.
[1] https://en.wikipedia.org/wiki/Advanced_chess
Baxtr, JAMES BAXTR? That's the exact comment I'd expect of someone named that.
Or a reverse-centaur? https://locusmag.com/feature/commentary-cory-doctorow-revers...
Or a reverse centaur [1].
[1] https://locusmag.com/feature/commentary-cory-doctorow-revers...
We don't know, up until the point we observe it.
But since the act of observation influences the object observed, who knows what then becomes of it?
AI is a quantum mechanic
It's also a big bloatey gas bag that needs constant de-farting to function
So essentially a cow?
Oh horses fart a lot too.
Maybe from the client's point of view, although it's more likely a Tamagotchi. But from the server side, it’s more like a whole hippodrome where you need to support horse racing 24/7
It's a nice reminder that most metaphors break unless you ask whose perspective they're describing
Anyone claiming the horse understands the journey, or worse, wants to take you somewhere, is selling mythology
That's moving away from the actual horse analogy. If you can tell a guide dog to take you somewhere, you can tell a horse that too.
Granted, a journey to a new location would make this accurate.
This metaphor really captures the current state well. As someone building products with LLMs, the "you have to tell it where to turn" part resonates deeply.
I've found that the key is treating AI like a junior developer who's really fast but needs extremely clear instructions. The same way you'd never tell a junior dev "just build the feature" - you need to:
1. Break down the task into atomic steps
2. Provide explicit examples of expected output
3. Set up validation/testing for every response
4. Have fallback strategies when it inevitably goes off-road
The real productivity gains come when you build proper scaffolding around the "horse" - prompt templates, output validators, retry logic, human-in-the-loop for edge cases. Without that infrastructure, you're just hoping the horse stays on the path.
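A minimal sketch of what that scaffolding can look like, with call_llm as a hypothetical stand-in for whatever model client you actually use (the JSON contract is just an example):

    import json

    def call_llm(prompt):
        raise NotImplementedError("plug in your model client here")

    def validate(raw):
        # Output validator: here, "must be JSON with a 'summary' field".
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            return None
        return data if isinstance(data, dict) and "summary" in data else None

    def run_task(prompt, max_retries=3):
        for _ in range(max_retries):
            result = validate(call_llm(prompt))
            if result is not None:
                return result
            # Retry logic: restate the output contract instead of hoping.
            prompt += "\n\nReturn ONLY valid JSON with a 'summary' field."
        # Human-in-the-loop fallback when the horse leaves the path.
        raise RuntimeError("output failed validation; escalate to a human")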
The "it eats a lot" point is also critical and often overlooked when people calculate ROI. API costs can spiral quickly if you're not careful about prompt engineering and caching strategies.
This is exactly my experience too, thanks for sharing.
I see AI as an awesome technology, but also like programming roulette.
It could go and do the task perfectly as instructed, or it could do something completely different that you haven't asked for and destroy everything in its path in the process.
I personally found that if I don't give it write access to anything I can't easily restore, and I review and commit code often, it saves me a lot of time. It also makes the whole process more enjoyable, since it takes care of a lot of boilerplate for me.
It's definitely NOT intelligent, it's more like a glorified autocomplete but it CAN save a huge amount of time if used correctly.
The safety practices you describe are basically the right mental model: assume it's fallible, keep writes reversible, review everything, commit often
Yes, it's been working well.
"2024 AI was a horse". People really like to imagine that the last 6 months constitute their true observation of the new eternal state of the future.
Exactly. We're headed for a discontinuity, not an inflection point.
I'd like to think that given the opportunity most would sit in the saddle and make progress, but it's more likely that this is the horse: https://pbs.twimg.com/profile_images/857954008513695744/YL5x...
The metaphor makes sense in comparing a human walking (SWE w/o AI) to a human riding on a horse (SWE w/ AI), except for:
> (The horse) is way slower and less reliable than a train but can go more places
What does the 'train' represent here?
A guess: perhaps off-the-shelf software? - rigid, but much faster if it goes where (/ does what) you want it to.
I had the same question.
Maybe the train is software that's built by SWEs (w/ or w/o AI help). Specifically built for going from A to B very fast. But not flexible, and takes a lot of effort to build and maintain.
I wrote this a long time ago, but I think the metaphor was about generative AI applications vs. traditional software applications, not about AI coding agents vs. writing code yourself.
Nice! I added this to my AI metaphor collection.
Another one I like is "Hungry ghosts in jars."
https://bsky.app/profile/hikikomorphism.bsky.social/post/3lw...
All true, apart from "you can only lead it to water": it drinks ALL the water regardless of anything else.
Except when you want it to improve something in a particular way you already know about. Then god forbid it understands what you have asked and makes only that change :/
Sometimes I end up giving up on trying to get the AI to build something following a particular architecture, or to fix a particular problem in its previous implementations.
Totally. I just meant literally, AI servers need a lot of water to work.
When did a horse ever give anyone psychosis?
So it’s a car.
"No, I am not a horse."
Horse rumours denied.
That's something a horse pretending to be AI would say.
*sweats profusely* https://imgur.com/a/PszeiAu
I've always said that driving a car with modern driver-assist features (lane centering / adaptive cruise / 'autopilot'-style self-ish driving-ish) is like riding a horse. The early ones were like riding a short-sighted, narcoleptic horse. Newer ones are improving, but it's still like riding a horse, in that you give it high-level instructions about where to go rather than directly energising its muscles.
Horses have some semblance of self preservation and awareness of danger - see: jumping. LLMs do not have that at all so the analogy fails.
My term, “Automation Improved”, is far more relevant and descriptive of current state-of-the-art deployments. Same phone/text logic trees, next-level macro-type agent work; none of it is free-range. Horses can survive on their own. AI is a task helper, no more.
>LLMs do not have that at all so the analogy fails.
I somewhat disagree with this. AI doesn't have to worry about any kind of physical danger to itself, so it's not going to have any evolutionary function around that. If the linked Reddit thread is to be believed AI does have awareness of information hazards and attempts to rationalize around them.
https://old.reddit.com/r/singularity/comments/1qjx26b/gemini...
>Horses can survive on their own.
Eh, this is getting pretty close to a type of binary thinking that breaks down under scrutiny. If, for example, we take any kind of selectively bred animal that requires human care for its continued survival, does this somehow make said animal "improved automation"?
"I've been through the desert
On AI with no name
It felt good to be out of the rAIn
In the desert, you can remember your name
'Cause there ain't no one for to give you no pain"
you forgot to write pAIn and it reminded me of this: https://youtube.com/watch?v=nt9mRDa0nrc
>2 views
I'm not saying that's your video but it sure looks like that's your video ;)
AI is a horse, I get it! I have a horse, and I put money in the front of the horse, and get "ponyium" out the back.
Through many attempts to make ingesting the ponyium more bearable, I’ve found that taking it with more intense flavors (wintergreen mint, hoppy hops, crushed soul, dark roast coffee, etc.) improves its comestibility. Can’t let it pile up. We’ve always eaten ponyium, right, and we all like it, right, guys, folks?
I hear the cool companies offer free ponyium to their employees. Apparently, it works wonders for morale
This micro blog meta is fascinating. I've seen small micro blog content like this popping up on the HN home page almost daily now.
I have to start doing this for "top level"ish commentary. I've frequently wanted to nucleate discussions without being too orthogonal to thread topics.
https://polmuz.github.io/2026/01/04/its-a-horse.html
Clever Hans is how I describe LLM agents to non-techies
https://en.wikipedia.org/wiki/Clever_Hans
And the salesman always says it’s great while it’s in fact lame.
Force multiplier and power projector.
Requires ammo (tokens) which can be expensive.
Requires good aim to hit the target.
Requires practice to get good aim.
Dangerous in the hands of the unskilled (like most instruments or tools).
> It is way slower and less reliable than a train but can go more places
I‘m not able to follow. So AI is a horse in this metaphor, what is a train then? Still a train?
Hi, that's my website and my wisecrack article. It was a while ago, but I think the metaphor was that a train is traditional deterministic-ish software, whose behavior is quite regular and predictable, compared to something generative which is much less predictable.
Step aside, Grok, Mr. Ed is the new stud in town
finally someone is talking sense!! not exaggerating the power of AI nor denying its usefulness. two thumbs up!
I'm worried about self driving horses.
Some day, I imagine one will be a senator
We only have enough budgeted for one joke in 2026 and this is the one.
AI will be a senator, but only after it's 75 years old.
There are so many nouns this applies to…
Your boss tells you that, since he bought you one, you must build the house twice as fast from now on.
> We are skeptical of those that talk
^^ We are skeptical of AIs (and people) that claim they have consciousness ;-)
"Trust arrives on foot and leaves on horseback" as the saying goes.
And it produces an amazing amount of horseshit.
That's not from the last week, so obviously is invalid.
I was expecting a spin about the faster horses
this is such a good take, it makes so much sense and it's a very good answer to ai related interview questions
A horse that can do your homework.
Yeah, well... not really.
I used to tell my Intro-to-Programming-in-C students, 20 years ago, that they could in principle skip one or two of the homework assignments, and that some students even managed to outsmart us and submit copied work as homework - but they would just not become able to program if they didn't do the homework themselves. "If you want to be able to write software code you have to exercise writing code. It's just that simple and there's no getting around it."
Of course not every discipline is the same. But I can also tell you that if you want to know, say, history - you have to memorize accounts and aspects and highlights of historical periods and processes, and recount them yourself, and check that you got things right. If "the AI" does this for you, then maybe it knows history but you don't.
And that is the point of homework (if it's voluntary of course).
To become a programmer you must write code.
To become upper management, just steal other peoples work.
AI is a horse indeed - eats creative works by humans and transforms them into a steaming pile of… output tokens.
You seem to imply that its outputs aren't found by people to be useful, which isn't true.
A badly ridden horse mostly produces manure. A well-ridden one gets you somewhere
Ah yes, must be a skill issue. Or I forgot to drink my Kool-Aid this morning.
So... are we having AI races?
Yes. There are leaderboards or evals or something.
And this horse is amazing...
you rather don't want it in your bed
this post is aging like milk
https://www.isclaudecodedumb.today/
Wao
When an AI aims at an imagined thing, we call it a hallucination; when humans do it, we call the delusion goal-setting.
Either way, it is an imagined end point that has no bearing in known reality.
"It is not possible to do the work of science without using a language that is filled with metaphors. Virtually the entire body of modern science is an attempt to explain phenomena that cannot be experienced directly by human beings, by reference to forces and processes that we can experience directly...
But there is a price to be paid. Metaphors can become confused with the things they are meant to symbolize, so that we treat the metaphor as the reality. We forget that it is an analogy and take it literally."
-- The Triple Helix: Gene, Organism, and Environment by Richard Lewontin
Here is something I generated with Gemini:
1. Sentience and Agency
The Horse: A horse is a living, sentient being with a survival instinct, emotions (fear, trust), and a will of its own. When a horse refuses to cross a river, it is often due to self-preservation or fear.
The AI: AI is a mathematical function minimizing error. It has no biological drive, no concept of death, and no feelings. If an AI "hallucinates" or fails, it isn't "spooked"; it is simply executing a probabilistic calculation that resulted in a low-quality output. It has no agency or intent.
2. Scalability and Replication
The Horse: A horse is a distinct physical unit. If you have one horse, you can only do one horse’s worth of work. You cannot click "copy" and suddenly have 10,000 horses.
The AI: Software is infinitely reproducible at near-zero marginal cost. A single AI model can be deployed to millions of users simultaneously. It can "gallop" in a million directions at once, something a biological entity can never do.
3. The Velocity of Evolution
The Horse: A horse today is biologically almost identical to a horse from 2,000 years ago. Their capabilities are capped by biology.
The AI: AI capabilities evolve at an exponential rate (Moore's Law and algorithmic efficiency). An AI model from three years ago is functionally obsolete compared to modern ones. A foal does not grow up to run 1,000 times faster than its parents, but a new AI model might be 1,000 times more efficient than its predecessor.
4. Contextual Understanding
The Horse: A horse understands its environment. It knows what a fence is, it knows what grass is, and it knows gravity exists.
The AI: Large Language Models (LLMs) do not truly "know" anything; they predict the next plausible token in a sequence. An AI can describe a fence perfectly, but it has no phenomenological understanding of what a fence is. It mimics understanding without possessing it.
5. Responsibility
The Horse: If a horse kicks a stranger, there is a distinct understanding that the animal has a mind of its own, though the owner is liable.
The AI: The question of liability with AI is far more complex. Is it the fault of the prompter (rider), the developer (breeder), or the training data (the lineage)? The "black box" nature of deep learning makes it difficult to know why the "horse" went off-road, in a way that doesn't apply to animal psychology.
Or your typical American teenager.
Damn that’s clever