Comment by staticshock
21 hours ago
The eloquence with which this point gets (repeatedly) made continues to improve each time I read it. However, I still feel like we haven't nailed it. That is, we are not yet at the "aphorism" stage of the discourse (e.g. "the medium is the message", "you ship your org chart", "9 mothers can't make a baby in a month"), where the most pointed version of this critique packs a punch in just a few words that resonate with the majority of people. That kind of epistemological chiseling takes years, if not decades. And AI certainly won't do it for us, because we don't know how to RL meaning-making.
Edit: 9 babies → 9 mothers
> "can't make 9 babies in a month"
It's "9 women can't make a baby in one month".
In fairness, 9 women can't make 9 babies in a month either
No idea why you were dv'd.
It still takes roughly nine months to make a human baby, regardless of how many women or babies are involved!
Hah, right, I mixed it up!
> That is, we are not yet at the "aphorism" stage of the discourse
we learn by doing
Put differently: you get good at what you actually do, not what you think you're doing.
If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your coding abilities will atrophy unless exercised elsewhere.
Along those lines, I've also seen "there is no compression algorithm for experience" - a nice summary of the HN posts from today.
It seems overly pessimistic about education. Book learning isn't everything, but a physics textbook could be seen as the compression of centuries of experience.
I don't know. Growing up and watching life and the people around me, I firmly believe that if you have enough brainpower and intuition for $TOPIC, you can speed-run it. At the same time, with time and experience, doing and re-doing it, you will learn or master $TOPIC [1] even with less brainpower.
[1] Depending on the topic and the level of knowledge of it.
There clearly is, though. You don't remember every detail of every moment that constitutes the experience.
... or by textbooks, Stack Overflow, senior engineers, code review. How many engineers today got their start by building Minecraft mods or even MySpace?
I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.
I'm using "don't bring a forklift to the gym".
How about "Intelligence amplification, not artificial intelligence"?
Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".
"Bicycle of the Mind" has been cited to death.
The problem is that it was coined so early that we are way past the aphorism stage now.
Isn't that the vehicle metaphor, "bicycles for the mind"? Not fully crystallized yet, but I feel like someone will get there.
>the medium is the message
If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.
More worrisome is that the speech that phrase came from went on to prophetically observe that for each extension of human capability afforded by technology, there is a matching amputation of human skill/facility. Heretofore, computers have largely fit Steve Jobs' vision of them as "bicycles of the mind," making human effort more efficient; the cognitive engine of LLMs looks to be dumbing human reasoning down to a lowest common denominator:
https://publichealthpolicyjournal.com/mit-study-finds-artifi...
You're right. I've struggled to understand what exactly it means, perhaps in large part because it's so often misused?
I think it means something like: we're trapped in the constraints of the medium. Tweets say more about the environment of Twitter than about whatever message happened to be sent.
But I think I'm off on that; I'll look this person up and find out!
Some examples.
Firstly, Twitter has an upper bound on the complexity of thoughts it can carry due to its character limit (historically 140, now somewhat longer, but still too short).
Secondly, a biased or partial platform constrains and filters the messages that are allowed to be carried on it. This was Chomsky's basic observation in Manufacturing Consent, where he discussed his propaganda model and the five "filters" in front of the mass media.
Finally, social media has turned "show business [into] an ordinary daily way of survival. It's called role-playing." [0] The content and messages disseminated by online personas and influencers are not authentic; they do not even originate from a real person, but from a "hyperreal" identity (to take language from Baudrillard) [0].
Influencers have been sepia-tinted by the profit orientation of the medium, and their messages do not correspond to positions authentically held. You must now look and act a certain way to appease the algorithm, and by extension the audience.
If nothing else, one should at least recognize that people primarily identify through audiovisual media now, when historically, due to lack of bandwidth, computing, and technology, it was far more common for people to represent themselves through literate media - even as recently as IRC. You can come to your own conclusions on the relative merits and differences between textual and audiovisual media; I will not waffle on about this at length here.
The medium itself is reshaping the ways people represent, think about, and negotiate their own self-concept and identity. This goes beyond whatever banal tweets (messages) about what McSandwich™ your favourite influencer ate for lunch, and it's this phenomenon that is important and worth examining - not the sandwich.
[0] Marshall McLuhan in Conversation with Mike McManus, 1977. https://www.tvo.org/transcript/155847
It's confusing because "message" is not being used in its lay meaning, and after decades of drift in what "medium" and "media" mean, neither is "medium".
For "the medium is the message", "medium" refers to any tool that acts as an extension of yourself. TV is an extension of your community, even things like light bulbs (extends your vision) are included in his meaning.
McLuhan argued that all forms of media like that carry a message that's more than just their content. "The message" in that argument refers to the message the medium itself brings, rather than its content. For example, the airplane is "used for" speeding up travel over long distances, but the message of the medium itself is to "dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for."
You can see it happening via online media that extend ourselves across the internet. Think of how, once easy video creation via YouTube became commonplace, web comics stopped being a popular medium for comedy online. It's not that the web comics faded because they got worse; it's that they faded into a niche format because people no longer wanted to communicate via static images. Or how, once short-form videos on TikTok got big, you saw other platforms shift to copy the paradigm.

McLuhan's point is that it's not just the content of those short-form videos that matters; it's the message of the format itself. People's attention spans grow shorter because of the format, and before too long, we saw the tastes and expectations of the masses change. Reddit's monosite-with-subcommunities format and dopamine-triggering voting feedback mechanism were its message more than any actual content posted there, and it's why traditional forums are niche and dwindling.
If you want to get a pretty good understanding of it, just read the first chapter of his book Understanding Media. It's short and relatively straightforward.
AI is Augmenting (Actual) Intelligence.
Taste/judgement cannot an AI beget
Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.
To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.
The problem with LLMs is that they avoid the ground entirely, making them entirely ignorant of meaning. The only intention an LLM has is to preserve the familiarity of expression.
So yes, this kind of AI will not accomplish any epistemology, unless of course it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.
I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression to a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.
What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.
LLMs are a mediocre map, but they're a great compass, telescope, navigation tool, and what have ye.
Yes. The main problem is that they can only lead you to familiar territory. The next problem is that it's hard to notice when you are back where you started.
This has been my main struggle using LLMs as a sounding board for my new idea. They can write an eloquent interpretation of the entire concept, but as soon as we get to implementation, they stumble right into creating the very system I intend to replace.
So I would say it's the other way around: LLMs are an excellent map, but a terrible compass. Good enough if you want to explore familiar territory, but practically unusable on an adventure.
> What we really need is to recreate software from a subjective perspective.
What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users get to mold on the fly?
That's a feature that could be implemented by a subjective framework.
Traditionally, we use definition as the core primitive for programming. The programming language grammar defines the meaning of every possible expression, precisely and exhaustively. This is useful, because intention and interpretation are perfectly matched, making the system predictable. This is the perspective of objectivity.
The problem with objectivity is that it is categorically limited. A programming language compiler can only interpret using the predetermined rules of its grammar. The only abstract concepts that can be expressed are the ones that are implemented as programming language features. Ambiguity is unspeakable.
The other problem is that it is tautologically stagnant. The interpretation that you are going to use has already been completely defined. The programming language grammar is its own fundamental axiom: a tautology that dictates how every interpretation will be grounded. You can't choose a different axiom. Every programming language is its own silo of expression, forever incompatible with the rest. Sure, we have workarounds, like FFIs or APIs, but none of them can solve the root issue.
A subjective perspective would allow us to write and interpret ambiguous expression, which could be leveraged to (weakly) solve natural language processing. It would also allow us to change where our interpretations are grounded. That would (weakly) solve incompatibility. Instead of refactoring the expression, you would compose a new interpreter.
Because code is data, we can objectify our interpreters. We can apply logical deduction to choose the most relevant one, like a type system chooses the right polymorphic function. We can also compose interpreters like combinators, and decompose them by expressing their intentions. This way, we could have an elegant recursive self-referential system that generates relevant interpreters.
Any adequately described algorithm or data structure could be implemented to be perfectly compatible with any adequately interpreted system, all wrapped in whatever aesthetics the user chooses. On the fly. That's the dream, anyway.
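A minimal toy sketch of the shape I mean (all names and structure here are my own illustrative assumptions, not the actual system, with the "ground" reduced to a simple predicate): interpreters as first-class values, chosen by relevance the way a type system chooses a polymorphic function, and composed like combinators.

    # Toy sketch, purely illustrative: interpreters as first-class
    # values that a relevance check selects between, and that compose
    # like combinators instead of refactoring the expression itself.
    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Interpreter:
        name: str
        relevant: Callable[[Any], bool]  # does this ground apply here?
        run: Callable[[Any], Any]        # the interpretation itself

    def choose(interpreters: list[Interpreter], expr: Any) -> Interpreter:
        # Pick the most relevant interpreter, the way a type system
        # picks the right polymorphic function for a value.
        for interp in interpreters:
            if interp.relevant(expr):
                return interp
        raise LookupError("no common ground for this expression")

    def compose(outer: Interpreter, inner: Interpreter) -> Interpreter:
        # Reinterpret one interpretation under another ground.
        return Interpreter(
            name=f"{outer.name}.{inner.name}",
            relevant=inner.relevant,
            run=lambda expr: outer.run(inner.run(expr)),
        )

    # Two grounds for the same expression:
    as_number = Interpreter("number", lambda e: str(e).isdigit(), int)
    doubled = Interpreter("double", lambda e: True, lambda e: e * 2)

    expr = "21"
    print(compose(doubled, choose([as_number], expr)).run(expr))  # 42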
> Meaning is abstract. We can't express meaning: we can only signify it. An expression (sign) may contain the latent structure of meaning (the writer's intention), but that structure can only be felt through a relevant interpretation.
I'm reminded immediately of the Enochian language, which purportedly had the remarkable property of having a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified [0].
The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused, people use the same words to signify different referents entirely, or cannot agree on which term should be used to signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc).
LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the amount of time spent ensuring that the things we want signified are being accurately represented.
Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism in their mutual inability to "come to terms" and agree upon which words should be used to refer to particular phenomenological experiences, or come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate, in your own mind, someone's written recollection of a psychedelic experience on erowid).
[0] https://archive.org/details/truefaithfulrela00deej/page/92/m...
That's always been a fun idea. Even a thousand years ago, when most people couldn't read or write, we yearned for more. Even without a description of the problem and its domain, it's immediately obvious that perfect communication would be magic.
The problem is that it's impossible. Even if you could directly copy experience from one mind to the other, that experience would be ungrounded. Experience is just as subjective as any expression: that's why we need science.
> through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc).
That's a pretty mean rejection of empathy you've got going on there. People are doing their best to describe their genuine experiences, yet the only interpretations you have bothered to subject their expression to are completely irrelevant to them. Maybe this is a good opportunity to explore a different perspective.
> LLMs just accelerate this process of severing any connection whatsoever between signified and signifier.
That's my entire point. There was never any connection to begin with. The sign can only point to the signified. The signified does not actually interact with any semantics. True objectivity can only apply to the signified, never the sign. Even mathematics leverages an arbitrary canonical grammar to model the reality of abstractions. The semantics are grounded in objectively true axioms, but the aesthetics are grounded in an arbitrary choice of symbols and grammar.
The words aren't our problem. The problem is relevance. If we want to communicate effectively, we must find common ground, so that our intentions can be relevant to each others' interpretations. In other words, we must leverage empathy. My goal is to partially automate empathy with computation.
Outsource manual labor, not your brain.
Both?
I read pretty dense philosophy and the longer I live, the more I think the writers were just bad writers with good ideas. LLMs can convert poorly written sentences into clear sentences with examples.
This concept won't reach that point because when you chisel too hard it crumbles. There are countless lower level tasks that typical programmers no longer learn how to do. Our capacity for knowledge is not unlimited so we offload everything we can to move to the next level of abstraction.
AI coding isn’t an abstraction, though. You can’t treat a prompt like source code, because it will give you a different output every time you use it. An abstraction lets you offload cognitive capacity while retaining knowledge of “what you are doing”. With AI coding, either you carefully review the outputs and save no cognitive capacity, or you don’t look at the outputs and don’t know what you’re doing, in a very literal sense.
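A toy illustration of the contrast (purely illustrative; randomness stands in for sampling, no real compiler or model involved):

    import random

    def toy_compiler(source: str) -> str:
        # An abstraction: a pure function of its input.
        # Same source in, same output out, every time.
        return f"object-code-for:{source}"

    def toy_llm(prompt: str) -> str:
        # A sampled model: same prompt in, one of several
        # plausible outputs out.
        variants = [f"{prompt} (variant {i})" for i in range(3)]
        return random.choice(variants)

    assert toy_compiler("x = 1") == toy_compiler("x = 1")  # always holds
    print(toy_llm("sort a list"))  # may differ from run to run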
Non-determinism is not as much of a problem as the lack of a spec. C++ has the C++ standard, Python has its reference manual. One can refer to them to reliably predict how a program will behave without thinking about the generated assembly. LLMs have no spec.
"You can’t treat a prompt like source code because it will give you a different output every time you use it"
But it seems we are heading there. For simple stuff, if I write a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.
Either way, this is what I focus my thinking on right now, something that was always important and, with AI, even more so: crystal-clear language describing what the program should do and how.
That requires real thinking effort.
It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.
> AI coding isn’t an abstraction
Isn't it an abstraction similar to how an engineering or product manager is? Tell the (human or AI) coder what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made, or start over with a new approach.
That's true, but I think it's beside the point. The flip side of that argument, which is equally true, goes something like, "not doing cognitive push-ups leads to cognitive atrophy."
There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still-growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.
> There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators)
I'd argue these are not at all OK to lose. You live in an earthquake zone? You sure better know which way is north and where you have to walk to get back home when all the lines are down after a big one. You need to do a quick mental check that a number is roughly where it should be? You should be able to do that in your head.
There might be better examples that support your point more effectively, e.g. cursive writing.
> "not doing cognitive push-ups leads to cognitive atrophy" This is one of the points being made in the post, at least in reference to people who already have some mastery of their craft. If they outsource their thinking without elevating it, they aren't exercising that metaphoric muscle between their ears.
I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI so I’m not sure I got to move to the next level of abstraction. But maybe that’s just me.
Are compilers deterministic?
The idea that a tool intended to replace all human cognitive work is the next level of abstraction is so fundamentally flawed that I'm not sure it's made in good faith anymore. The most charitable interpretation I can think of is that it's a coping mechanism for being made redundant.
Never mind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of the inevitable consequences of the current hype cycle, there will be a great crash, followed by an industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before.