Comment by hamstergene
18 hours ago
Because the most important parts of the expertise are coming from their internal "world model" and are inseparable from it.
An average unaware person believes that anything can be put into words, and that once the words are said, they mean to the reader what the sayer meant; the only difficulty could come from not knowing the words or misreading ambiguities. The request to take a dev and "communicate" their expertise to another is based on this belief. And because this belief is wrong, the attempt to communicate expertise never fully succeeds.
Factual knowledge can be transferred via words well; that's why there is always at least partial success at communicating expertise. But the solidified, interconnected world model of what all your knowledge adds up to cannot. AI can blow you out of the water at knowing more facts, but it doesn't yet use those facts in a way that, surprisingly often, yields surprisingly correct insights into what the missing knowledge probably is. That mysterious ability to be right more often comes out of the "world model"; that is what "expertise" is. That part cannot be communicated; one can only help others acquire the same expertise.
Communicating expertise is a hint about where to go and what to learn; the reader still needs to put in the effort to internalize it, and they need the right project that provides the opportunity to learn what needs to be learnt. It is not an act of transfer.
A non-trivial part of the big difference between the juniors who seem talented and "get it" and those who don't is precisely their ability to form accurate-enough world models quickly. You can tell who is reasoning from the "physics" of software and applying it, and who is just writing down recipes without trying to understand the nature of any of the steps.
It's especially noticeable when teaching functional programming to people trained in OO: some people's model just breaks, while others quickly see the similarities and how one can translate from a world of vars to a world of monads with relative ease. The bones of how computation works aren't changing, just how one puts together the pieces.
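As a toy sketch of that translation (my example, not the parent's; a plain fold rather than a full monad, but the mechanics of threading state are the same): the state that lived in a mutable variable becomes an accumulator passed through a fold.

```typescript
// Hypothetical example: summing the squares of a list, both ways.

// World of vars: state lives in a mutable variable that each
// loop iteration reassigns.
function sumSquaresImperative(xs: number[]): number {
  let total = 0;
  for (const x of xs) {
    total += x * x;
  }
  return total;
}

// World of folds: the same computation, but the "variable" is now
// the accumulator threaded through reduce. The bones of the
// computation are unchanged; only the assembly differs.
const sumSquaresFunctional = (xs: number[]): number =>
  xs.map((x) => x * x).reduce((acc, sq) => acc + sq, 0);
```

Seeing that the accumulator plays exactly the role of the mutable variable is the kind of "same bones, different assembly" insight the parent describes.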
Even as a junior, I was the kind who tried to understand the nature of the steps. I failed many times, but I learned from it every time. I remember my mutable public static variables and my terrible small JavaScript apps. But every time I did something like that, I tried to understand it. I knew that I had failed. Sometimes it took me a year or more (like when I first encountered React about a decade ago: I immediately understood why some of my earlier apps had failed architecturally).
However, I've seen developers who have been in this field for decades and still just follow recipes without understanding them.
So I'm not entirely sure the distinction is this clear. But of course, it depends on how we define "senior". "Senior" could mean developers who have coded for a while and try to understand the underlying reasons. But companies seem to disagree.
Btw, regarding functional programming: when I first coded in Haskell, I remember coding in it like in a standard imperative language. Funnily, nowadays it's the opposite: when I code in imperative languages, it looks like functional programming. I don't know when my mental model switched. But one thing is for sure: when I refactor something, my first todo is to make the data flow as "functional" as possible, and only then comes the real refactoring. It helps a lot to prevent bugs.
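A minimal sketch of that refactoring order (my own invented example; the `Order` type and function names are hypothetical): first make the data flow functional, then do the "real" refactoring on the resulting pipeline.

```typescript
// Hypothetical example: total price of shipped orders.
interface Order {
  price: number;
  shipped: boolean;
}

// Before: the data flow is interleaved with mutation, so it's hard
// to see which values feed which.
function totalShippedBefore(orders: Order[]): number {
  let total = 0;
  for (const order of orders) {
    if (order.shipped) {
      total += order.price;
    }
  }
  return total;
}

// After the first pass: the same logic as a pipeline of pure steps.
// Each stage can now be moved, renamed, or replaced independently,
// which makes the subsequent real refactoring much less bug-prone.
const totalShippedAfter = (orders: Order[]): number =>
  orders
    .filter((o) => o.shipped)
    .map((o) => o.price)
    .reduce((sum, price) => sum + price, 0);
```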
What really broke my mind was Prolog. It took me a long time to be able to do anything beyond simple Hello World-level things, at least compared to Haskell, for example.
I had to learn Prolog for a university paper, and I have to agree: out of the dozen-ish languages I've had to learn, something just didn't "click" with Prolog.
There's no real value in this comment; I'm just happy to share a moment over the brain-fuck that is Prolog (ironically, Brainfuck made a whole lot more sense).
I wouldn't really try to equate arbitrary job titles awarded based on tenure with actual expertise; titles aren't consistently applied across the industry and are often awarded on conditions other than actual merit.
This happened at an old employer of mine. We started to go down the FP road, veering off the standard OOP of the day. About 25% of the people picked it up immediately, about 50% got it well enough, and 25% just thought it was arcane wizardry.
Between that latter group and the bottom portion of the middle, it sparked a big culture war, eventually leading to leadership declaring that FP was arcane wizardry and should be eradicated.
I've always been good at building mental models of abstractions and have picked up the "physics" of a subject rather quickly, be it economics, biology, certain mathematical subjects, and more.
Then I met software and computer science abstractions. They all seemed so arbitrary to me; I often didn't even understand what the recipe was supposed to cook. And though I have gotten better over time (and can now write good solutions in certain domains), to this day I have not developed a "physics"-level understanding of software or computer science.
It feels really strange and messes with your sense of intelligence. I'm wondering if anyone here has had a similar experience and was able to resolve it.
your "physics" grounding is exactly why it feels so odd - software is by its nature anti-physicalist
math and logic are closer to a basis for software abstraction - but they were scary to business people, so a "fake language" was invented atop them - you have "objects" that don't actually exist as objects (they are just a type-based dispatch/selection mechanism for functions), and "classes" that are firstly "producers of things and holders of common implementation" and only secondarily also work to "group together classes of objects"
I have the opposite experience. Goes to show the difference between people.
I've always had trouble internalizing the "physics" of physics or chemistry, as if it were all super arbitrary and there was no order to it.
Computation and maths on the other hand just click with me. Philosophy as well btw.
I guess I deal better with handling completely abstract information and processes, and when they clash with the real world I have a harder time reconciling them.
>teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease.
Besides OO -> Functional, this applies everywhere else in computer science. If you understand the fundamentals, no new framework, language, or paradigm can shock you. The similarities are clear once you have a fitting world model.
Indeed. Understand the principles, and you can work with just about any tool.
This resonates. Tips on how to build this skill?
Put yourself in a position where it is your problem/responsibility, where you cannot depend on another to do it for you. You'll be learning every day.
Fail, and try to understand why. Don't be quick with the answer. Sometimes it takes years. But it's crucial to want to improve, and to recognize when the answer is in front of you.
Read about why programming languages have the structures they have. Challenge them. They are full of mistakes. One infamous example is the "final" keyword in Java; another is Python's list comprehension. There are better solutions to these. Be annoyed by them, and search for solutions. Read also about why these mistakes were made. Figure out your own version that doesn't have any of the known mistakes and problems.
The same goes for "principles" or rules of thumb. Read about the reasons behind them, and break them when the reasons don't apply.
And use a ton of programming languages and frameworks. Not just at Hello World level: really dig deep into them for months. Reach their limits, and ask why those limits are there. As you encounter more and more, you will be able to reach those limits quicker and quicker.
One very good language for this, I think, is TypeScript. Compared to most other languages, its type inference is magic. Ask why. The good thing about it is that its documentation explains why other languages cannot do the same. Its inference routinely breaks down on edge cases, and they are well documented.
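A quick sketch of what that looks like in practice (the variable names are mine; the behaviors are documented ones from the TypeScript handbook: return-type inference, best common type, literal widening, and contextual typing):

```typescript
// Inference usually just works: the return type is inferred as number.
const double = (n: number) => n * 2;

// Best common type: the element type is inferred as (number | string)[],
// a union of the candidates, which can surprise people expecting an error.
const mixed = [1, "two"];

// Literal widening: with `let`, "hello" widens to string, so later
// reassignment to any string is allowed; with `const` it would stay
// the literal type "hello".
let greeting = "hello";
greeting = "goodbye";

// Contextual typing: the parameter x is inferred as number from the
// array's element type, with no annotation needed.
const formatted = [1, 2, 3].map((x) => x.toFixed(1));
```

Asking why `const` and `let` infer different types for the same literal, or why a "best common type" algorithm exists at all, is exactly the kind of question the parent recommends chasing.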
Also, Effective C++ and Effective Modern C++ were eye-openers for me more than a decade ago. I can recommend them for these purposes; they definitely helped me lose my "junior" flavor. They explain the reasons quite well, as far as I remember.
Not who you replied to, but: practice. Deliberate practice; not just writing the same apps over and over, but challenging yourself with new projects. Build things from scratch, from documentation or standards alone. Force yourself to understand all the little details of one specific problem.
By complete coincidence, yesterday I came across this link to an article Peter Naur wrote in 1985 (https://pages.cs.wisc.edu/~remzi/Naur.pdf) which I haven't been able to stop thinking about.
I've been doing this for coming up on thirty years now, mostly at one large company, and I spend a significant number of hours every week fielding questions from people who are newer at it and having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.
The challenge then is to find a way to render your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader similar to your own. In other words, you want to install your theory into the mind of another person.
A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons why communication skills are so critical, but it's not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether it's writing documents, holding classes, etc.
This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet, as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think Naur's conception of what is actually going on here makes much clearer the role that senior engineers can play in the long-term functioning of software companies, if it's something they enjoy doing.
Isn't that interesting? The job of exploring a theory or model to such an extent that it can be expressed in computer code always seems to fall on the shoulders of a software developer. Other people can write specifications and requirements all day long, but until a software developer has tackled the problem, the theory probably hasn't been explored well enough yet to express clearly in computer code. It feels like software developers are scientists who study their customers' knowledge domains.
> It feels like software developers are scientists who study their customers' knowledge domains.
I agree so much with this. It's why I feel so stifled when, e.g., a product manager tries to insulate and isolate me from the people I'm trying to serve -- you (or a collective of yous) need access both to expertise in the domain you're serving and to expertise in the method of service in order to develop an appropriate and satisfactory solution. Unnecessary games of telephone make it much harder for anyone to build an internal theory of the domain, which is absolutely essential for applying your engineering skills appropriately.
Agree 100%.
Even the most verbose specifications too often have glaring ambiguities that are only found during implementation (or worse, interoperability testing!)
In theory, it's the same as in practice.
In practice, it isn't.
Sorry this is just the interior trapped nonsense that engineers find themselves in. Please touch grass
Product designers have to intuit the entire world model of the customer. Product managers have to intuit the business model that bridges both. And on and on.
Why do engineers constantly have these laughably mind-blowing moments where they think they are the center of the universe?
Regarding the tension between symbolic representation and Naur's "theory", I'd actually say they come from two different traditions, each providing a different thesis. When writing them out, I think it becomes a bit clearer how they interact and that they're not actually contradictory.
Thesis A is something like: the value of the programmer comes from their practical ability to keep developing the codebase. This ability is specific to the codebase. It can only be obtained through practice with that codebase, and can't be transferred through artefacts, for the same reason you can't learn to play tennis by reading about it (a "Mary's Room" argument).
This ability is what Naur calls "theory". I think the term is a bit confusing (to me, the word is associated with "theoretical" and therefore to things that can be written down). I feel like in modern discourse we would usually refer to this as a "mental model", a "capability", or "tacit knowledge".
Then there's Thesis B, which comes more from a DDD lineage, and which is something like: the development of a codebase requires accumulation of specific insights, specific clarifying perspectives about problem-domain knowledge. The ability for programmers to build understanding is tied to how well these insights are expressed as artefacts (codebase structure, documentation, communication documents).
I feel like some disagreements in SWE discourse come from not balancing these two perspectives. They're actually not contradictory at all, and together they're pretty common-sensical: Thesis A explains the actual mechanism behind Thesis B, and vice-versa. Providing scaffolding for someone learning the codebase obviously helps, because the learned mental model is an internally structured representation that can, with work, be externalised (this work is what "communication skills" are).
It's interesting that, the way you describe it, the world model itself is _not_ just a collection of words in our minds. I have a small theory of my own that "thoughts" in our brains aren't actually words at all (otherwise animals, which don't talk, wouldn't be able to make complex decisions). The words we "hear" in our heads and perceive as our thoughts are just a rough translation of those thoughts into words; they aren't the thoughts themselves. That is also why it's sometimes really hard to put complex (but correct) thoughts into words, and especially hard to adequately compare complex ideas during a regular conversation: on the surface, a lot of ideas (especially in software engineering) "sound" good but are actually terrible. And yet there's no better way to communicate ideas than to put them into words, which is probably part of what makes good software engineering extremely difficult.
Or maybe I'm just a little bit insane. Or both.
Obligatory link to a great podcast that has a great episode covering this paper: https://pca.st/episode/dfc024c8-31f8-4387-b301-7a4f77132b74
Everyone should subscribe to the Future of Coding (recently renamed to the Feeling of Computing) podcast if you haven't already: https://feelingof.com/
hey thanks!!
I keep saying this is the single most important article to consider when talking about AI-assisted software building. Everyone should read it. The question should always be: is a human building a theory of the software, or does only the AI understand it? If it's the latter, it is certainly slop.
(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas)
>their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem
Of course the model is incomplete compared to reality. That's in the definition of a model, isn't it? And what is deemed a problem from one perspective might be conceived as a non-problem in another, and be unrepresentable in yet another.
I think that this is actually a good thing. If everyone had the same internal world model, we would have very little innovation.
I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.
I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.
Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.
I agree. It's pretty easy to train based on facts, and even experiences. And learners can often take things in unexpected directions.
I think it becomes difficult to train the next layer up though, which is a sum-total of life experience. And I think this is what the parent poster was referring to.
For example, I read a lot of Agatha Christie growing up. At school I participated in problem-solving groups, focusing on ways to "think" about problems. And I read Mark Clifton's "Eight keys to Eden".
All of that means I approach bug-fixing in a specific mental way. I approach it less as "where is the bug" and more like "how would I get this effect if I was wanting to do it". It's part detective novel, part change in perspective, part logical progression.
So yes, training is good, and I agree it needs to be done. But I can't really teach "the way I think". That's the product of a misspent youth, life experience, and ingrained mental patterns.
Yeah, you can't get it out in "one session of conversation", but you definitely can under a different... context.
"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."
Is this a quote from somewhere?
Yes, it's from this textbook: https://hci.stanford.edu/courses/cs147/2022/au/readings/rest...
> An average unaware person believes that anything can be put in words and once the words are said, they mean to reader what the sayer meant, and the only difficulty could come from not knowing the words or mistaking ambiguities.
"Transmissionism" is a term I've seen to describe this
https://andymatuschak.org/books/
https://en.wikipedia.org/wiki/Tacit_knowledge
this is why I only communicate in poetry
complexity is
not what you believe it is
please try listening
So cool. One reading is “complexity is not what you believe it is”. Another is “complexity is”… “not what you believe it is”. Seems similar but the difference is subtle. Even the “please try listening” line changes in both versions. One is confrontational, the other is empathetic.
Agreed. "complexity is" as a full sentence followed by "not what you believe it is" has a fundamentally different meaning.
Very cool
There is complexity
that can only be moved around,
not eliminated.
Sometimes it's better
To keep it all in a clump
Than spread it about
Reminded me of a colleague who wrote his email replies as haiku. It got old pretty quickly.
Like an old colleague
Who wrote emails in haiku
It got old quickly
....
Sorry, I couldn't resist!!
I'd say, on average, it's 50% what you say and 50% communication issues.
Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice work almost mechanically. However, if someone can't tell you which examples you should be exposed to, you'll learn crap.
good thing llms solve this problem by assuming everything can be put into words and then convincing the world this is true.
You might be encouraged by this then -- it seems some leading AI researchers agree with you: https://www.technologyreview.com/2026/01/22/1131661/yann-lec...
That is 100% NOT what he is doing.
My guy LeCun believes in deterministic systems describing reality even more than LLMs. He is literally a symbolic logic die hard.
This is surprisingly close to a personal theory I've been working on. I've been describing how to use AI to people as engaging the world model in their head, organization, or software.
I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.
Correct. One just has to realize that the cost of communication (and the context/memory lost along the way to train that understanding) is often just far higher than anyone has patience for. To fully understand the expert, they must become the expert. (or at least a hell of a lot closer than they were)
This is also why average people with little time to commit find it hard to grasp the importance and depth of AI. Exploring those takes a full-on university education.
Another part of the equation is practice.
Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this whole ethics thing out (took courses at university, read some books...).
I made an unexpected discovery reading Jonathan Haidt's... either The Righteous Mind or The Happiness Hypothesis. He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person. Secular societies, meanwhile, often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similarly descriptive activities, with no practice whatsoever.
I believe this is the same with expertise. Part of it is gained through practice, and that is an unskippable part. Practice will also usually require more time than the meta-discussion of the subject.
To oversimplify: a novice programmer who has listened to every story told by a senior, and memorized and internalized them all, but still can't touch-type will be worse at the everyday tasks of their occupation. It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it. There are, of course, more but less obvious skills that need practice, where meta-knowledge simply can't serve as a substitute. For example, there are cues we learn to pick up by reading product documentation that tell us whether the product will work as advertised, whether the manufacturer will be honest or fair with us, whether the company making the product will go out of business soon, whether they will try to bait-and-switch us, etc.
When children learn to do addition, it's not enough to describe the method to them (start counting from the first summand, count on as many times as the second summand, and the last count is the result); they actually must go through dozens of examples before they can reliably put the method to use. And this same property carries over to a lot of other activities, even though we like to think of ourselves as able to perform a task as soon as we understand the mechanism.
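The counting-on method described here, written out as a toy function (illustrative only; the name is invented):

```typescript
// Counting-on addition: start from the first summand and count up
// once per unit of the second summand. The final count is the sum.
// Understanding this description is instant; fluency still takes practice.
function addByCounting(a: number, b: number): number {
  let count = a;
  for (let step = 0; step < b; step++) {
    count += 1; // one "count" per unit of b
  }
  return count;
}
```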
Just wanted to say thanks for this.
Great thread.
“Cursive knowledge”, as an old boss told me. Was incredibly ironic when he leaned into my misunderstanding.
> AI can blow you out of the water at knowing more facts
Yeah, but I have a search engine that contains all the original, uncompressed training data, so I'm back on top. How we collectively forgot this is amazing to me.
> and they need to have the right project that provides the opportunity to learn what needs to be learnt.
It takes _time_. I solve problems the way I do because I've had my fair share of 2am emergency calls, unexpected cost blowups, and rewrite failures in my career. The weariness is in my bones at this point.
That’s very well put.
I'm going to get downvoted to hell for this, but you described the exact reason why education is a waste of time.
I'll bite: is education not about starting with a theoretical summary of the knowledge in the domain, and then applying it in practice and really feeling it work, be challenging, or not work?
The best educators I had took exactly that approach: you sometimes start with theory, but other times with challenges that make you feel the difficulty and understand the value of the theory you are co-developing with the educator (they just have the benefit of knowing exactly where you'll end up, though when time allows they do let you take a wrong turn too). Even if you start with theory, diving into a challenge where you are allowed not to apply the learnings should quickly show you why the theoretical side makes sense.
As with everything in life, great educators are few but once you have them, you can apply the same approach yourself even if the educator is unable to steer you the right way.
If you never received this type of education, then what you received could arguably be called a waste of time.
yep, as I was exploring in https://danieltan.weblog.lol/2026/05/dunning-kruger-and-the-... , the expert pays a "communication tax" to dumb down concepts so the listener can understand them. There is a gap between domain understanding and what is actually conveyed, and something similar holds for human-LLM interactions as well.
Great points. Words allow one to communicate an approximation of part of what one knows.
Agree about expertise being inseparable from the "world model". When someone tells us something, they're assuming we know a certain amount of background knowledge, but in reality we never have exactly the missing pieces the speaker assumes we have, because our world model is different. That can lead to distortions and misunderstandings.
Even if someone later repeats back to us variants of what we've told them, it doesn't mean they've internalized the exact same knowledge. The interpretation can differ in subtle and surprising ways. You only figure out the discrepancies once you have a thorough debate. But unfortunately, a lot of our society is built around avoiding confrontation, and there is a lot of self-censorship, so people tend to maintain very different world models even though the surface-level ideas they communicate appear similar.
Individuals in modern society have almost complete consensus over certain ideas which we communicate and highly divergent views concerning just about everything else which we don't talk about... And as our views diverge more, it narrows down the set of topics which can be discussed openly.
Well, here's an engineering problem: figure out how to mentor 10x the number of juniors.
Mentor 3 who need to mentor/"pair with" 3 each? ;-)
This sounds like a whole lot of copium from devs who don't want to bother with the effort of just writing stuff down, i.e. good documentation practices...
Actually, maybe even worse (not directed at the parent): I think some "seniors" have a stick so far up their, err, keyboard, and think themselves so wise beyond words, that they refuse to share their "all-knowing expertise" with anyone else, as a form of gatekeeping or perhaps out of fear of being "found out" (that they are not actually keyboard "Gods").
Really though, just wright shit down even if the first draft isn't great. Write it down, check it into the codebase.
I believe you are responding to a concern you are facing in your career with bad documentation (I would guess bad code too), but projecting it onto an unrelated topic: both could be independently true or not.
*write stuff. Siri dictation can’t be overhauled soon enough.