Reminds me of a recent discussion we had among Stack Overflow moderators:
> “Think about it,” he continued. “Who discovers the edge cases the docs don’t mention? Who answers the questions that haven’t been asked before? It can’t be people trained only to repeat canonical answers. Somewhere, it has to stop. Somewhere, someone has to think.”
> “Yes,” said the Moderator.
> He leaned back. For a moment, restlessness flickered in his eyes.
> “So why wasn’t I told this at the start?”
> “If we told everyone,” said the Moderator gently, “we’d destroy the system. Most contributors must believe the goal is to fix their CRUD apps. They need closure. They need certainty. They need to get to be a Registered Something—Frontend, Backend, DevOps, Full Stack. Only someone who suffered through the abuse of another moderator closing their novel question as a duplicate can be trusted to put in enough effort to make an actual contribution.”
What does “destroy the system” mean here?
The metaphor doesn't match very well here because Stack Overflow is not selling new tapes at a premium but giving them away for free, and reading a Stack Overflow answer is harder than asking an LLM.
Could be that AI companies feeding on Stack Overflow are selling tapes at a premium, and if they tell you it's only supervised learning from a lot of human experts, it's going to destroy the nice bubble they have going on around AGI.
Could also be that you have to do the actual theory / practice / correction work for your basal ganglia to "know" about something without thinking about it (i.e. learn), unlike in the story where the knowledge is directly inserted into your brain. If everyone uses AI to lazily skip the "practice" phase, then there's no one left to make the AI evolve anymore. And the world is not a Go board where the AI can learn against itself indefinitely.
If you have to ask, you aren't ready to know the answer. There are some things you have to figure out on your own. This is one of them.
Use it to train an "AI"? :)
Probably not the OP's intent though. I suspect there are a lot of ways to destroy the system.
I read this a long time ago, when I was a kid. Back then I thought about the education system and how it sometimes inhibits students' creativity. But right now another comparison comes to mind - I don't know how relevant it is, though, so please don't judge it too strictly.
Modern "AI" (LLM-based) systems are somewhat similar to the humans in this story who were taped. They may have a lot of knowledge, even a lot of knowledge that is really specialized, but once this knowledge becomes outdated or they are required to create something new - they struggle a lot. Even the systems with RAG and "continuous memory" (not sure if that's the right term) don't really learn something new. From what I know, they can accumulate the knowledge, but they still struggle with creativity and skill learning. And that may be the problem for the users of these systems as well, because they may sometimes rely on the shallow knowledge provided by the LLM model or "AI" system instead of thinking and trying to solve the problem themselves.
Luckily enough, most of the humans in our world can still follow George's example. That's what makes us different from LLM-based systems. We can learn something new, and learn it deeply, creating deep and unique networks of associations between different "entities" in our minds, which allows us to be truly creative. We can also dynamically update our knowledge and skills, as well as our qualities and mindset, and so on...
That's what I'm hoping for, at least.
What concerns me is that learning in depth is more discouraged than ever. It's been discouraged for a long time, which is natural, since we prefer simple things to difficult or complex ones. But now we're pushing much harder than ever before, from influencer education videos to the way people push LLMs ("you can just vibe code, no thinking required"). We've advanced enough that it's easy to make things look good enough, but looks can be deceiving. It's impossible to know what's actually good enough without depth of knowledge, without mastery.
No machine will ever be sufficient to overcome the fundamental problem: a novice is incapable of properly evaluating a system. No human is capable of overcoming it either (despite many believing they can). It's a fundamental information problem. The best we can do is match our human system, where we trust the experts, the people with depth. But we already see the limits of that, and how frequently experts get ignored by those woefully unqualified to evaluate them. Maybe it'll be better, since people tend to trust machines more. But for the same reason it could be significantly worse. It's near impossible to fix a problem you can't identify.
Link to the story without ads
https://www.inf.ufpr.br/renato/profession.html
Thanks - the OP’s site was a truly horrible experience
I dunno, I just copied it into emacs. Another free short story to keep in my digital collection.
For some reason Safari's reader view skips a part of the page.
I haven't seen any ads on the site - I guess AdNauseum works well :)
A very nice story, and an interesting reflection on the education system.
Also, and this is just an aside, but “the protagonist who is too special for the sorting hat” is a bit of a trope in young adult literature at this point. Is this the first real instance of it? 1957. That’s a while ago! I don’t even know if the “sorting hat” trope was established enough to subvert at the time.
Not really an example of the trope, but I suspect Asimov might have got some of his ideas from Huxley's Brave New World, where it turns out the occupation-segregated dystopia is actually run by an idealistic type who's committed to the system but finds nonconformists and forbidden literature really interesting study subjects for making it better, and exile to the Falklands is actually a reward, sort of...
> who is too special
"Fans are slans."
No one would have recognized any tropes in 1957 beyond Shakespeare. Even Joseph Campbell wasn’t popularized until decades later.
As mentioned, the word "trope" dates back to ancient times, although generally meaning rhetorical devices like similes and metaphors rather than in the "reused plot" sense generally used today. But even the ancients still recognized those. Aristotle's Poetics deals with plays in addition to poems, and he discusses what sort of plots work in tragedies.
Sorry, I can’t tell if this is sarcastic. I think it has a kernel of truth but overstates it for rhetorical flair.
I’m willing to believe the phrase “trope” wasn’t invented in 1957 if that’s what you are saying. But surely they had the idea of popular little trends in contemporary literature.
They must have known they were writing pulp sci-fi. At least when they got their copies they could feel the texture!
>No one would have recognized any tropes in 1957 beyond Shakespeare.
Nope. Just within science fiction, early issues of Galaxy had many editorials denouncing/mocking science fiction stories with overused tropes, such as the Western transposed to space, or babies being killed as aberrant after a nuclear war because they have ten fingers and toes.
This is my favorite Asimov story. It's got a protagonist with compelling motivations, a society that has problems but also convincing reasons why they persist, and a great ending.
Mine too, because one of my favourite SFF tropes is that the more you regiment society, the more you rely on outsiders and those pushed to the edges for any real innovation.
People stuck following the rules are going to struggle to deal with, or come up with solutions to, problems that are outside the rules.
Two other Asimov stories that are similarly relevant to much of what is discussed on HN for similar reasons are “In a Good Cause—” and “The Dead Past”.
I don’t know of a link for the first. Here’s one for the second.
https://xpressenglish.com/our-stories/dead-past/
I am sort of questioning my use of LLMs again after (at first reluctantly) starting to use them multiple times a day. This story seems like it was intended to be an allegory for LLM use, though I know it couldn't have been.
It's an allegory about trusting "best practices", standardized bodies of knowledge¹, and "that's the way it's always been done". Not that those things necessarily don't work, they do in the story as well as in real life, but they need to adapt to change and the story illustrates what happens when they harden from best practice into unquestioned dogma.
¹ There's even a BoK for software developers, the SWEBOK, but I've never met anybody who's read it.
It's also about hyperspecialization, a concept that was beginning to be noticed at the time.
There's a similar story about a progression of robot repair devices --- which has to end in a "Master Robot Repairman" profession which is the folks who repair the robots which repair other robots.
Blanking on author and title, but read it a _long_ while ago, and it had a distinctly golden age feel --- maybe Murray Leinster?
There's something a little like this in Strata by Pratchett (which is lightly sending up Niven's Ringworld and a non-robot-related but similar idea there).
I thought this post from Kyla Scanlon[0] did a good job of explaining how eventually the algorithms replace knowledge, which is not a good thing.
0: https://kyla.substack.com/p/the-four-phases-of-institutional
Alt link of text only - no cruft http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
Is this still in print, maybe as part of a collection? I tried to find it but couldn't. Many of his other works seem to be available as paperback, including a bunch of story collections.
I have it in print, as part of Isaac Asimov: The Complete Stories Volume 1 (published by Harper Voyager).
Thanks, just went and bought it!
Though it doesn't directly answer your question, isfdb.org is a great reference for publication history of SF: https://www.isfdb.org/cgi-bin/title.cgi?55700
It's collected in Nine Tomorrows, most recently reprinted in 1989 per Wikipedia. Used copies may be found online.
Such a great ending. Really makes one wonder about the current AI hype of getting the machines to take over our work.
What motivates you all to learn when you know that information about anything is easily accessible from anywhere?
Ah, I remember that story. Brilliant. Asimov was a wonderful writer.
Dr Antonelli said, “Or do you believe that studying some subject will bend the brain cells in that direction, like that other theory that a pregnant woman need only listen to great music persistently to make a composer of her child. Do you believe that?”
Apparently, Asimov was an early critic of the “Mozart in the womb” movement.
It isn't to make a composer out of a baby but to expose a growing brain to complex music. We have no proof it benefits brain development, but we also have no proof it does not.
I studied classical music and came from a challenged background, which to be honest is a rarity in that field. Almost everyone I studied with had parents who specifically encouraged music education and had the means to help make that happen. I got mine from some gifted vinyl as a child and fell in love with the orchestra. If I were in this story, I'd probably not have been recommended to be a Professional Composer (if social expectations were the equivalent of what Asimov is saying here).
So yeah, I'm pro 'play Mozart to your baby' :)
I don’t think you can assign that meaning here one way or another. The context in the story at that point (IIRC) is that he’s sort of lying to the protagonist, or at least misleading him.
This story is set thousands of years in the future, and yet their social norms are broadly those of 1960s America, conspicuously minus the racism. Their notion of gender equality, for instance, is to segregate, but add "(and women)" after every few "men" (respectively "(and husbands)" after "wives"). Stubby Trevelyan smokes, and litters the cigarette butts. This has to be deliberate on the part of the author. I wonder what Ladislas Ingenescu, Registered Historian, has to say about the matter?… if, indeed, he has any original thoughts to share.
Another, less optimistic view of this same future is the short story "Pump Six" by Paolo Bacigalupi.
https://windupstories.com/books/pump-six-and-other-stories/
One of Asimov's finest, a metaphor that continues to find relevance in my day-to-day existence - that the conclusions we so readily come to are assumptions made in the absence of the awareness of something more.
What the hell, that was a good read. The ending was great (though the last line did confuse me).
Previously in the story it is mentioned that George as a child was curious about the etymology of the Olympics event and asked his father, only to be dismissed.
The callback at the end symbolizes his renewed curiosity. He is no longer ashamed of the way his mind works, even if it makes him look different.
[flagged]
Perhaps you should review the "Please don't complain about tangential annoyances", "Avoid generic tangents." and related sections of the HN guidelines. They're linked at the bottom of the page.
Go create something original instead of trying to destroy the greatness "created by a white guy" in the past.
The page linked has some more information available, but its author (abelard?) cites from "Mein Kampf" later, naming the book's author as "Adolph" (sic!). Caution is advised.
He is very odd. The name is presumably a reference to Peter Abelard who was not a nice man (very clever, of course).
Nothing wrong per se with citing what someone you are writing about said about themselves. He has some very odd historical, economic and political theories, but a lot of them are rooted in common misconceptions.