Comment by jqpabc123
2 days ago
He wants educators to instead teach “how do you think and how do you decompose problems”
Amen! I attend this same church.
My favorite professor in engineering school always gave open book tests.
In the real world of work, everyone has full access to all the available data and information.
Very few jobs involve paying someone simply to look up data in a book or on the internet. What they will pay for is someone who can analyze, understand, reason and apply data and information in unique ways needed to solve problems.
Doing this is called "engineering". And this is what this professor taught.
In undergrad I took an abstract algebra class. It was very difficult and one of the things the teacher did was have us memorize proofs. In fact, all of his tests were the same format: reproduce a well-known proof from memory, and then complete a novel proof. At first I was aghast at this rote memorization - I maybe even found it offensive. But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it! Moreover, producing the novel proofs required the same kinds of "components" and now because they were "installed" in my brain I could use them more intuitively. (Looking back I'd say it enabled an efficient search of a tree of sequences of steps).
Memorization is not a panacea. I never found memorizing l33t code problems to be edifying. I think it's because those kinds of tight, self-referential, clever programs are far removed from the activity of writing applications. Most working programmers do not run into a novel algorithm problem but once or twice a career. Application programming has more the flavor of a human-mediated graph-traversal, where the human has access to a node's local state and they improvise movement and mutation using only that local state plus some rapidly decaying stack. That is, there is no well-defined sequence for any given real-world problem, only heuristics.
Memorizing is a super power / skill. I work in a ridiculously complex environment and have to learn and know so much. Memorizing and spaced repetition are like little islands my brain can start building bridges between. I used to think memorizing was anti-first-principles, but it is just good. Our brains can memorize so much if we make them. And then we can connect and pattern-match using higher-order thinking.
There's this [1] which is one of my favorite articles on that topic; definitely worth a read.
[1] https://www.pearlleff.com/in-praise-of-memorization
Recognizing the patterns and applying patterned solutions is where I see success in my niche of healthcare interoperability. So much of my time is spent watching people work: their processes and how they use data. It's amazing how much people remember to do their jobs, but when I come in and bridge the doctor and the lab so they can share data more easily, it's like I'm an alchemist. It's really not a problem I've seen AI solve, either: it suggests solutions that are too simple or too costly, and misses the goldilocks zone everyone will be happy with.
What's even better about memorization is that you have an objective method to test your own understanding. It is so easy to believe you understand something when you don't! But, at least with math, I think if you can reproduce the proof from memory you can be very confident that you aren't deluding yourself.
In education, I have heard it called “fluency”.
Hmmm... It's the other way around for me. I find it hard to memorise things I don't actually understand.
I remember being given a proof of why RSA encryption is secure. All the other students just regurgitated it. It made superficial sense I guess.
However, I could not understand the proof and felt quite stupid. Eventually I went to my professor for help. He admitted the proof he had given was incomplete (and showed me why it still worked). He also said he hadn't expected anyone to notice it wasn't a complete proof.
> Hmmm... It's the other way around for me. I find it hard to memorise things I don't actually understand.
I think you two are agreeing. GP said that they found they couldn't memorize something until they actually understood it
2 replies →
>> I realized that it was impossible to memorize a proof without understanding it!
> I find it hard to memorise things I don't actually understand.
Isn't it the parent's point?
1 reply →
> I remember being given a proof of why RSA encryption is secure
With what assumptions?
1 reply →
But, is it proven that RSA is secure? Wouldn't that also prove P != NP?
1 reply →
During my elementary school years, there was a teacher who told me that I didn't need to memorize things as long as I understood them. I thought he was the coolest guy ever.
Only when I reached my late twenties did I realize how wrong he was. Memorization and understanding go hand in hand, but if one of them has to come first, then it's memorization. He probably said that because that was what kids (who were forced to do rote memorization) wanted to hear.
You could argue this is just moving the memorization to meta-facts, but I found all throughout school that if you understand some slightly higher level key thing, memorization at the level you're supposed to be working in becomes at best a slight shortcut for some things. You can derive it all on the fly.
Sort of like how most of the trigonometric identities that kids are made to memorize fall out immediately from e^iθ = cosθ+isinθ (could be taken as the definitions of cos,sin), e^ae^b=e^(a+b) (a fact they knew before learning trig), and a little bit of basic algebraic fiddling.
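To make that concrete, here's the usual derivation sketched in LaTeX notation (nothing beyond the two facts above):

    e^{i(a+b)} = e^{ia}e^{ib}
    \cos(a+b) + i\sin(a+b) = (\cos a + i\sin a)(\cos b + i\sin b)
                           = (\cos a\cos b - \sin a\sin b) + i(\sin a\cos b + \cos a\sin b)

Matching real and imaginary parts gives both angle-addition identities at once, and most of the standard list falls out of those.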
Or like how inverse Fourier transforms are just the obvious extension of the idea behind writing a 2-d vector as a sum of its x and y projections. If you get the 2d thing, accept that it works the exact same in n-d (including n infinite), accept integrals are just generalized sums, and functions are vectors, and I guess remember that e^iwt are the basis you want, you can reason through what the formula must be immediately.
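Spelled out side by side (LaTeX notation; the 1/(2π) placement is just one of several equivalent conventions):

    % 2-d: a vector is the sum of its projections onto an orthonormal basis
    \vec{v} = (\vec{v}\cdot\hat{x})\,\hat{x} + (\vec{v}\cdot\hat{y})\,\hat{y}
    % same idea with inner product <f,g> = \int f(t)\,\overline{g(t)}\,dt and basis e^{i\omega t}
    F(\omega) = \langle f, e^{i\omega t}\rangle = \int f(t)\,e^{-i\omega t}\,dt
    f(t) = \frac{1}{2\pi}\int F(\omega)\,e^{i\omega t}\,d\omega

The forward transform is "take the projections"; the inverse transform is "add the projections back up".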
2 replies →
Probably. I hated memorization when I was a student too, because it was boring. But as soon as I did some teaching, my attitude changed to, "Just memorize it, it'll make your life so much easier." It's rough watching kids try to multiply when they don't have their times tables memorized, or translate a language when they haven't memorized the vocabulary words in the lesson so they have to look up each one.
6 replies →
As with most things, it depends. If you truly do understand something, then you can derive a required result from first principles. _Given sufficient time_. Often in an exam situation you are time-constrained, and having memorized a shortcut can be beneficial. Not to mention that retaining is much easier when you understand the topic, so memorization becomes easier.
Probably the best example of this I can think of (for me at least) from mathematics is calculating combinations. I have it burned into my memory that (n choose r) = (n permute r) / (r permute r), and (n permute r) = n! / (n - r)!
Can I derive these from first principles? Sure, but after not seeing it for years, it might take me 10+ minutes to think through everything and correct any mistakes I make in the derivation.
But if I start with the formula? Takes me 5 seconds to sanity check the combination formula, and maybe 20 to sanity check the permutation formula. Just reading it to myself in English slowly is enough because the justification kind of just falls right out of the formula and definition.
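(For what it's worth, the same sanity check fits in a few lines of Python - 3.8+ ships math.comb and math.perm, so the hand-rolled versions below are only there to mirror the formulas:)

    from math import comb, factorial, perm

    def n_permute_r(n, r):
        # n! / (n - r)!  -- ordered selections of r items from n
        return factorial(n) // factorial(n - r)

    def n_choose_r(n, r):
        # divide out the r! orderings of each selection
        return n_permute_r(n, r) // n_permute_r(r, r)

    # sanity check against the standard library
    assert n_permute_r(10, 3) == perm(10, 3) == 720
    assert n_choose_r(10, 3) == comb(10, 3) == 120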
So, yeah, they go hand in hand. You want to understand it but you sure as heck want to memorize the important stuff instead of relying on your ability to prove everything from ZFC...
It is waaaay easier to remember when you understand. The professor had it exactly right - if you learn to understand, you frequently end up remembering. But, memorization does not lead to understanding at all.
I think we memorize the understanding. For me it also works better to understand how something works than to memorize results. I remember in high school maths, in trigonometry, there was a list of twenty-something formulas derived from a single one. Everybody was memorizing the whole list of formulas; I just had to memorize one simple formula and the understanding of how to derive the others from the fundamental one on the fly.
You don't need to memorize to understand. You can rederive it every time.
You need to memorize it to use it subconsciously while solving more complex problems. Otherwise you won't fit more complex solutions into your working memory, so whole classes of problems will be too hard for you.
Ish? I never ever memorized the multiplication tables. To this day, I don't think I know them fully. I still did quite well in math by knowing how to quiz the various equations. Not just know them, but how to ask questions about moving terms and such.
My controversial education hot take: Pointless rote memorization is bad and frustrating, but early education could use more directed memorization.
As you discovered: A properly structured memorization of carefully selected real world material forces you to come up with tricks and techniques to remember things. With structured information (proofs in your case) you start learning that the most efficient way to memorize is to understand, which then reduces the memorization problem into one of categorizing the proof and understanding the logical steps to get from one step to another. In doing so, you are forced to learn and understand the material.
Another controversial take (for HN, anyway) is that this is what happens when programmers study LeetCode. There’s a meme that the way to prep for interviews is to “memorize LeetCode”. You can tell who hasn’t done much LeetCode interviewing if they think memorizing a lot of problems is a viable way to pass interviews. People who attempt this discover that there are far too many questions to memorize, and the best jobs have already written their own questions that aren’t out of LeetCode. Even if you do get a direct LeetCode problem in an interview, a good interviewer will expect you to explain your logic, describe how you arrived at the solution, and might introduce a change if they suspect you’re regurgitating memorized answers.
Instead, the strategy that actually works is to learn the categories of LeetCode style questions, understand the much smaller number of algorithms, and learn how to apply them to new problems. It’s far easier to memorize the dozen or so patterns used in LeetCode problems (binary search, two pointers, greedy, backtracking, and so on) and then learn how to apply those. By practicing you’re not memorizing the specific problems, you’re teaching yourself how to apply algorithms.
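As an illustration, here's a minimal Python sketch of one of those patterns (two pointers), with a made-up function name; the thing you internalize is why the pointers move, not the specific problem:

    def two_sum_sorted(nums, target):
        # Two-pointer pattern: find indices of a pair in a sorted list summing to target.
        lo, hi = 0, len(nums) - 1
        while lo < hi:
            s = nums[lo] + nums[hi]
            if s == target:
                return lo, hi
            if s < target:
                lo += 1   # need a bigger sum: advance the left pointer
            else:
                hi -= 1   # need a smaller sum: retreat the right pointer
        return None

    assert two_sum_sorted([1, 3, 4, 7, 11], 14) == (1, 4)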
Side note: I’m not advocating for or against LeetCode, I’m trying to explain a viable strategy for today’s interview format.
Exactly. I agree with the leetcode part. A lot of problems in the world are composites of simpler, smaller problems. Leetcode should teach you the basic patterns and how to combine them to solve real-world problems. How will you ever solve a real-world problem without knowing a few algorithms beforehand? For example, my brother was talking about how a Roomba would map a room. He was imagining 0 to represent free space and 1 as inaccessible points. This quickly reminded me of the Number of Islands problem from leetcode. Yeah, there might be a lot of changes required to that problem, but one could simply represent it as two problems:
1. Represent the different objects in the room in some machine-understandable form in a matrix.
2. Find the number of islands, or find the islands themselves.
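A rough sketch of step 2 in Python (an iterative flood fill; the classic LeetCode version counts 1s as land, inverted here to match the 0-as-free-space framing above):

    def count_regions(grid):
        # Count connected regions of 0s (free space), 4-directional adjacency.
        rows, cols = len(grid), len(grid[0])
        seen = set()

        def flood(r, c):
            stack = [(r, c)]
            while stack:
                r, c = stack.pop()
                if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0 and (r, c) not in seen:
                    seen.add((r, c))
                    stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])

        count = 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] == 0 and (r, c) not in seen:
                    count += 1      # found an unvisited region; flood it
                    flood(r, c)
        return count

    room = [[0, 0, 1],
            [1, 1, 0],
            [0, 1, 0]]
    assert count_regions(room) == 3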
1 reply →
Memorization of, like, multiplication tables gives us a poor view of the more interesting type of memorization. Remembering types of problems we’ve seen. Remembering landmarks and paths, vs just remembering what’s in every cell of a big grid.
I still don’t like leetcode, though.
2 replies →
Fortunately that was not my experience in abstract algebra. The tests and homework were novel proofs that we hadn't seen in class. It was one of my favorite classes / subjects. Someone did tell me in college that they did the memorization thing in German Universities.
Code-wise, I spent a lot of time in college reading other people's code. But no memorization. I remember David Betz advsys, Tim Budd's "Little Smalltalk", and Matt Dillon's "DME Editor" and C compiler.
Another advsys enjoyer! Did you ever write a game with it?
I would wager some folks can memorize without understanding? I do think memorization is underrated, though.
There is also something to the practice of reproducing something. I always took this as a form of "machine learning" for us. Just as you get better at juggling by actually juggling, you get better at thinking about math by thinking about math.
Rote memorization is essentially that, yes.
Interesting; I had the same problem and suffered in grades back in school simply because I couldn't memorize much without understanding. However, I seemed to be the only one, because every other student, including those with top grades, was happy to memorize and regurgitate. I wonder how they're doing now.
My abstract algebra class had it exactly backwards. It started with a lot of needless formalism culminating in Galois theory. This was boring to most students, as they had no clue why the formalism was invented in the first place.
Instead, I wish it had shown how the sausage was actually made, in the original writings of Galois [1]. This would have been far more interesting to students, as it shows the struggles that went into making the product - not to mention the colorful personality of the founder.
The history of how concepts were invented for the problems faced is far more motivating to students to build a mental model than canned capsules of knowledge.
[1] https://www.ams.org/notices/201207/rtx120700912p.pdf
> This was boring to most students as they had no clue why the formalism was invented in the first place.
> The history of how concepts were invented for the problems faced is far more motivating to students to build a mental model than canned capsules of knowledge.
That's something I really like about 3blue1brown, and he says it straight up [0]:
> My goal is for you to come away feeling like you could have invented calculus yourself. That is, cover all those core ideas, but in a way that makes clear where they actually come from, and what they really mean, using an all-around visual approach.
[0]: https://www.youtube.com/watch?v=WUvTyaaNkzM
Depends on the subject - I can remember multiple subjects where the teacher would give you a formula to memorise without explaining why, or where it came from. You had to take it as an axiom. The teachers also didn't say: hey, if you want to know how we arrived at this, have a read here. No, it was just given.
Of course you could also say that's for the student to find out, but I had other things on my mind.
>Memorization is not a panacea.
It is what you memorize that is important: you can't have a good discussion about a topic if you don't have the facts and logic of the topic in memory. On the other hand, using memory to paper over bad design instead of simplifying or properly modularizing it leads to that 'the worst code I have seen is code I wrote six months ago' feeling.
Your comment about memorizing as part of understanding makes a lot of sense to me, especially as one possible technique to get unstuck in grasping a concept.
If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?
I was part of an ACM programming team in college. We would review classes of problems based on the type of solution necessary, and learn those techniques for solving them. We were permitted a notebook, and ours was full of the general outline of each of these classes and techniques. Along with specific examples of the more common algorithms we might encounter.
As a concrete example, there is a class of problems that are well served by dynamic programming. So we would review specific examples like Dijkstra's algorithm for shortest path. Or Wagner–Fischer algorithm for Levenshtein-style string editing. But we would also learn, often via these concrete examples, of how to classify and structure a problem into a dynamic programming solution.
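To give a flavor of the "structure it as dynamic programming" step, here's a minimal Wagner–Fischer sketch in Python (not our actual notebook, just the standard two-row formulation); the insight to internalize is that the distance between two prefixes depends only on three shorter prefix pairs:

    def levenshtein(a: str, b: str) -> int:
        # Wagner-Fischer: at the start of row i, prev[j] = edit distance a[:i-1] -> b[:j].
        prev = list(range(len(b) + 1))              # distances from "" to b[:j]
        for i, ca in enumerate(a, 1):
            curr = [i]                              # distance from a[:i] to ""
            for j, cb in enumerate(b, 1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # delete from a
                                curr[j - 1] + 1,      # insert into a
                                prev[j - 1] + cost))  # substitute (or match)
            prev = curr
        return prev[-1]

    assert levenshtein("kitten", "sitting") == 3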
I have no idea if this is what is meant by "l33t code solutions", but I thought it would be a helpful response anyway. But the bottom line is that these are not common in industry, because hard computer science is not necessary for typical business problems. The same way you don't require material sciences advancements to build a typical house. Instead it flows the other way, where advancements in materials sciences will trickle down to changing what the typical house build looks like.
>If it doesn’t work for you on l33t code problems, what techniques are you finding more effective in that case?
Memorization of l33t code DOES work well as prep for l33t code tests. I just don't think l33t code has much to do with application programming. I've long felt that "computer science" is physics for computers, low on the abstraction ladder, and there are missing labels for the higher complexity subjects built on it. Imagine if all physical sciences were called "physics" and so in order to get a job as a biologist you should expect to be asked questions about the Schroedinger equation and the standard model. We desperately need "application engineering" to be a distinct subject taught at the university level.
1 reply →
What I understand from the GP is that memorizing l33t code won't help you learn anything useful. Not that understanding the solutions won't help you memorize them.
Is it the memorisation that had the desired effect or the having to come up with the novel proofs? Many schools seem to do the memorising part, but not the creating part.
I find it's helpful to have context to frame what I'm memorizing to help me understand the value.
Indeed, not just math. Biology requires immense amounts of memorization. Nature is littered with exceptions.
> But an amazing thing happened - I realized that it was impossible to memorize a proof without understanding it!
This may be true of mathematical proofs, but it surely must not be true in general. Memorizing long strings of digits of pi probably isn’t much easier if you understand geometry. Memorizing famous speeches probably isn’t much easier if you understand the historical context.
> Memorizing famous speeches probably isn’t much easier if you understand the historical context.
Not commenting on the merits of critical thinking vs memorization either way, but I think it would be meaningfully easier to memorize famous speeches if you understand the historical context.
2 replies →
It's funny, because I had the exact opposite experience with abstract algebra.
The professor explained things, we did proofs in class, we had problem sets, and then he gave us open-book semi-open-professor take-home exams that took us most of a week to do.
Proof classes were mostly fine. Boring, sometimes ridiculously shit[0], but mostly fine. Being told we have a week for this exam that will kick our ass was significantly better for synthesizing things we'd learned. I used the proofs we had. I used sections of the textbook we hadn't covered. I traded some points on the exam for hints. And it was significantly more engaging than any other class' exams.
[0] Coming up with novel things to prove that don't require some unrelated leap of intuition that only one student gets is really hard to do. Damn you Dr. B, needing to figure out that you have to define a third equation h(x) as (f(x) - g(x))/(f(x) + g(x)) as the first step of a proof isn't reasonable in a 60 minute exam.
memorization + application = comprehension. Rinse and repeat.
Whether leet code or anything else.
Mathematics pedagogy today is in a pretty sorrowful state due to bad actors and willful blindness at all levels that require public trust.
A dominant majority in public schools starting in the late 1970s seems to follow the "Lying to Children" approach, which is often mistakenly recognized as by-rote teaching but is based in Paulo Freire's works, which are in turn based on Mao's torture discoveries from the 1950s.
This approach, contrary to classical approaches, leverages a torturous process which seems purposefully built to fracture and weed out the intelligent individual from useful fields, imposing thresholds of stress sufficient to induce PTSD or psychosis, selecting for and filtering in favor of those who can flexibly/willfully blind/corrupt themselves.
Such sequences include Algebra->Geometry->Trigonometry where gimmicks in undisclosed changes to grading cause circular trauma loops with the abandonment of Math-dependent careers thereafter, similar structures are also found in Uni, for Economics, Business, and Physics which utilize similar fail-scenarios burning bridges where you can't go back when the failure lagged from the first sequence, and you passed the second unrelated sequence. No help occurs, inducing confusion and frustration to PTSD levels, before the teacher offers the Alice in Wonderland Technique, "If you aren't able to do these things, perhaps you shouldn't go into a field that uses it". (ref Kubark Report, Declassified CIA Manual)
Have you been able to discern whether these "patterns" as you've called them aren't just the practical reversion to the classical approach (Trivium/Quadrivium)? Also known as the first-principles approach after all the filtering has been done.
To compare: Classical approaches start with nothing but a useful real system and observations which don't entrench false assumptions as truth, which are then reduced to components and relationships to form a model. The model is then checked for accuracy against current data to separate truth from false in those relationships/assertions in an iterative process with the end goal being to predict future events in similar systems accurately. The approach uses both a priori and a posteriori components to reasoning.
Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions) which are tested and must be learned to competency (growing those neurons close together). Upon the next iteration one must unlearn the false parts while relearning the true parts (but we can't really unlearn, we can only strengthen or weaken), which in turn creates inconsistent mental states imposing stress (torture). This is repeated on an ongoing basis, often circular in nature (structuring), leveraging psychological blindspots (clustering), with several purposefully structured failings (elements) to gatekeep math through torturous process, which is the basis for science and other risky subject matter. As the student progresses towards mastery (gnosis), the systems become increasingly more useful. One must repeatedly struggle in their sessions to learn, the basis being that if you aren't struggling you aren't learning. This mostly uses faux a priori reasoning without properties of metaphysical objectivity (tied to objective measure, at least not until the very end).
If you don't recognize this, an example would be the electrical water-pipe pressure analogy. Diffusion of charge in like materials, with Intensity (Current) towards the outermost layer, was the first-principled approach pre-1978 (I=V/R). The water analogy fails when the naive student tries to relate the behavior to pressure equations, which end up being contradictory at a number of points in the system, introducing stumbling blocks that must be unlearned.
Torture being the purposefully directed imposition of psychological stress beyond an individual's capacity to cope, towards physiological stages of heightened suggestibility and mental breakdown (where rational thought is reduced or non-existent in the intelligent).
It is often recognized by its characteristic subgroups of Elements (cognitive dissonance, a lack of agency to remove oneself and coercion/compulsion with real or perceived loss or the threat thereof), Structuring (circular patterns of strictness followed by leniency in a loop, fractionation), and Clustering (psychological blindspots).
Wait, the electrical pipe water analogy is actually a very good one, and it's quite difficult to find edge cases where it breaks down in a way that would confuse a student. There are some (for example, there's no electrical equivalent of Reynolds number or turbulence, flow resistance varies differently with pipe diameter than wire diameter, and there's no good equivalent for Faraday's law), but I don't think these are likely to cause confusion. It even captures nuance like inductance, capacitance, and transmission line behaviour.
2 replies →
> Lying to Children reverses and bastardizes this process. It starts with a single useless system which contains equal parts true and false principles (as misleading assumptions) which are tested and must be learned to competency (growing those neurons close together).
Can you provide some concrete examples of it?
7 replies →
It's the core problem facing the hiring practices in this field. Any truly competent developer is a generalist at heart. There is value to be had in expertise, but unless you're dealing with a decade(s) old hellscape of legacy code or are pushing the very limits of what is possible, you don't need experts. You'd almost certainly be better off with someone who has experience with the tools you don't use, providing a fresh look and cover for weaknesses your current staff has.
A regular old competent developer can quickly pick up whatever stack is used. After all, they have to; Every company is their own bespoke mess of technologies. The idea that you can just slap "15 years of React experience" on a job ad and that the unicorn you get will be day-1 maximally productive is ludicrous. There is always an onboarding time.
But employers in this field don't "get" that. Regular companies are infested by managers imported from non-engineering fields, who treat software like it's the assembly line for baking tins or toilet paper. Startups, who already have fewer resources to train people with, are obsessed with velocity and shitting out an MVP ASAP so they can go collect the next funding round. Big Tech is better about this, but has its own problems going on, and it seems that the days of Big Tech being the big training houses are also over.
It's not even a purely collective problem. Recruitment is so expensive, but all the money spent chasing unicorns & the opportunity costs of being understaffed just get handwaved. Rather spend $500,000 on the hunt than $50,000 on training someone into the role.
And speaking of collective problems. This is a good example of how this field suffers from having no professional associations that can stop employers from sinking the field with their tragedies of the commons. (Who knows, maybe unions will get more traction now that people are being laid off & replaced with outsourced workers for no legitimate business reason.)
> Rather spend $500,000 on the hunt than $50,000 on training someone into the role.
Capex vs opex, that's the fundamental problem at heart. It "looks better on the numbers" to have recruiting costs than to have to set aside a senior developer plus paying the junior for a few months. That is why everyone and their dog only wants to hire seniors, because they have the skillset and experience that you can sit their ass in front of any random semi fossil project and they'll figure it out on their own.
If the stonk analysts would go and actually dive deep into the numbers to look at hiring side costs (like headhunter expenses, employee retention and the likes), you'd see a course change pretty fast... but this kind of in-depth analysis, that's only being done by a fair few short-sellers who focus on struggling companies and not big tech.
In the end, it's a "tragedy of the commons" scenario. It's fine if a few companies do that, it's fine if a lot of companies do that... but when no one wants to train juniors any more (because they immediately get poached by the big ones), suddenly society as a whole has a real and massive problem.
Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.
> but when no one wants to train juniors any more (because they immediately get poached by the big ones)
Can we stop pretending that we don't know how to solve this problem? If you hire juniors at $X/year, but they keep getting poached after 2-3 years because now they can get $X*1.5/year (or more!), then maybe you should start promoting and giving raises to them after they've gotten a couple years experience.
Seriously, this is not a hard problem to solve. If the junior has proven themselves, give them the raise they deserve instead of being all Surprised Pikachu when another company is willing to pay them what they've proven themselves worthy of.
2 replies →
> Our societies are driven into a concrete wall at full speed by the financialization of every tiny aspect of our lives. All that matters these days are the gods of the stonk market - screw the economy, screw the environment, screw labor laws, all that matters is appearing "numbers go up" on the next quarterly.
I have been in the various nooks and crannies of the Internet/software dev industry my whole career (I'm 49). I can't think of any time when the stock market didn't drive software innovation. It's always been either invent something -> go public -> exit, or invent something -> increase stock price of existing public corp.
1 reply →
> Capex vs opex
That's part of the problem, but I also notice the new hiring managers are incentivized to hire (or replace) employees to make their mark on the company. They then advocate for "their guys" the ones they recruited over the incumbents that are the unwilling dinosaurs in their eyes.
I can’t think of another career where management continuously does not understand the realities of how something gets built. Software best practices are on their face orthogonal to how all other parts of a business operate.
How does marketing operate? In a waterfall like model. How does finance operate? In a waterfall like model. How does product operate? Well you can see how this is going.
Then you get to software and it’s 2 week sprints, test driven development etc. and it decidedly works best not on a waterfall model, but shipping in increments.
Yet the rest of the business does not work this way, it’s the same old top down model as the rest.
This I think is why so few companies or even managers / executives “get it”
> can’t think of another career where management continuously does not understand the realities of how something gets built
All engineering. Also all government and a striking amount of finance.
Actually, this might be a hallmark of any specialist field. Specialists interface with outsiders through a management layer necessarily less competent at the specialty than they are. (Since they’re devoting time and energy to non-specialty tasks.)
While product often does operate in a waterfall model, I think this is the wrong mindset. Good product management should adopt a lot of the same principles as software development. Form a testable hypothesis, work to get it into production and begin gathering data, then based on your findings determine what the next steps are and whether to adjust the implementation, consider the problem solved or try a different approach.
> I can’t think of another career where management continuously does not understand the realities of how something gets built.
This is in part a consequence of how young our field is.
The other comment pointing out other engineering is right here. The difference is that fields like Civil Engineering are millennia old. We know that Egyptian civil engineering was advanced and shockingly modern even 4.5 millennia ago. We've basically never stopped having qualified civil engineers around who could manage large civil engineering projects & companies.
Software Development in its modern form has its start still in living memory. There simply weren't people to manage the young early software development firms as they grew, so management got imported from other industries.
And to say something controversial: Other engineering has another major reason why it's usually better understood. They're held to account when they kill people.
If you're engineering a building or most other things, you must meet safety standards. Where possible you are forced to prove you meet them. E.g. Cars.
You don't get to go "Well cars don't kill people, people kill people. If someone in our cars dies when they're hit by a drunk driver, that's not our problem, that's the drunkard's fault." No. Your car has to hold up to a certain level of crash safety; even if it's someone else who causes the accident, your engineering work damn better hold up.
In software, we just do not do this. The very notion of "Software kills people" is controversial. Treated as a joke, "of course it can't kill people, what are you on about?". Say, you neglect on your application's security. There's an exploit, a data breach, you leak your users' GPS location. A stalker uses the data to find and kill their victim.
In our field, the popular response is to go "Well we didn't kill the victim, the stalker did. It's not our problem.". This is on some level true; 'Twas the drunk driver who caused the car crash, not the car company. But that doesn't justify the car company selling unsafe cars, why should it justify us selling unsafe software? It may be but a single drop of blood, but it's still blood on our hands as well.
As it stands, we are fortunate enough that there haven't been incidents big enough to kill so many people that governments take action to forcibly change this mindset. It would be wise for Software Development to take up this accountability of its own accord to prevent such a disaster.
That we talk about "building" software doesn't help.
>For regular companies they're infested by managers imported from non-engineering fields
Someone's cousin, let's leave it at that, someone's damn cousin or close friend, or anyone else with merely a pulse. I've had interviews where the company had just been turned over from people that mattered, and you. could. tell.
One couldn't even tell me why the project I needed to do for them ::rolleyes:: - their own code boilerplate (which they said would run) - would have runtime issues, and I needed to debug it myself just to get it to a starting point.
It's like, Manager: Oh, here's this non-tangential thing that they tell me you need to complete before I can consider you for the position.... Me: Oh, can I ask you anything about it?.... Manager: No.
Could not agree more. Whenever I hear monikers like "Java developer" or "python developer" as a job description I roll my eyes slightly.
Isn't that happening already? Half the usual CS curriculum is either math (analysis, linear algebra, numerical methods) or math in anything but name (computability theory, complexity theory). There's a lot of very legitimate criticism of academia, but most of the times someone goes "academia is stupid, we should do X" it turns out X is either:
- something we've been doing since forever
- the latest trend that can be picked up just-in-time if you'll ever need it
I've worked in education in some form or another for my entire career. When I was in teacher education in college . . . some number of decades ago . . . the number one topic of conversation and topic that most of my classes were based around was how to teach critical thinking, effective reasoning, and problem solving. Methods classes were almost exclusively based on those three things.
Times have not changed. This is still the focus of teacher prep programs.
Parent comment is literally praising an experience they had in higher education, but your only takeaway is that it must be facile ridicule of academia.
Was directed at TFA, not parent comment.
In CS, it's because it came out of math departments in many cases and often didn't even really include a lot of programming because there really wasn't much to program.
Right but a looot of the criticism online is based on assumptions (either personal or inherited from other commenters) that haven’t been updated since 2006.
1 reply →
When I was in college the philosophy program had the marketing slogan: “Thinking of a major? Major in thinking”.
Now, as a hiring manager, I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding. Of course I'm biased as a dual CS/philosophy major, but it's very rare that I'm looking for someone who can just write a lot of code. Especially juniors, as analytical thinking is way harder to teach than how to program.
> Now, as a hiring manager, I'll say I regularly find that those who've had humanities experience are way more capable at the hard parts of analysis and understanding.
The humanities, especially the classic texts, cover human interaction and communication in a very compact form. My favorite sources are the Bible, Cicero, and Machiavelli. For example, Machiavelli says if you do bad things to people, do them at once, while good things you should spread out over time. This is common sense. Once you catch the flavor of his thinking, it's pretty easy to work other situations out for yourself, in the same way that good engineering classes teach you how to decompose and solve technical problems.
The #1 problem in almost all workplaces is communication related. In almost all jobs I've had in 25-30 years, finding out what needs to be done and what is broken -- is much harder than actually doing it.
We have these sprint planning meetings and the like where we throw estimates on the time some task will take but the reality is for most tasks it's maybe a couple dozen lines of actual code. The rest is all what I'd call "social engineering" and figuring out what actually needs to be done, and testing.
Meanwhile upper management is running around freaking out because they can't find enough talent with X years of Y [language/framework] experience, imagining that this is the wizard power they need.
The hardest problem at most shops is getting business domain knowledge, not technical knowledge. Or at least creating a pipeline between the people with the business knowledge and the technical knowledge that functions.
Anyways, yes I have 3/4 a PHIL major and it actually has served me well. My only regret is not finishing it. But once I started making tech industry cash it was basically impossible for me to return to school. I've met a few other people over the years like me, who dropped out in the 90s .com boom and then never went back.
Yea, this is why I'm generally not that impressed by LLMs. They still force you to do the communication, which is the hard part. Programming languages are inherently a solve for communicating complex steps. Programming in English isn't actually that much of a help; you just have to reinvent how to be explicit.
1 reply →
This is also why I went into the Philosophy major - knowing how to learn and how to understand is incredibly valuable.
Unfortunately in my experience, many, many people do not see it that way. It's very common for folks to think of philosophy as "not useful / not practical".
Many people hear the word "philosophy" and mentally picture "two dudes on a couch recording a silly podcast", and not "investigative knowledge and in-depth context-sensitive learning, applied to a non-trivial problem".
It came up constantly in my early career, trying to explain to folks, "no, I actually can produce good working software and am reasonably good at it, please don't hyper-focus on the philosophy major, I promise I won't quote Scanlon to you all day."
How people see it is based on the probability of any philosophy major producing good working software, not you being able to produce good working software.
Maybe because philosophy focuses on weird questions (to be or not to be) and weird personas. If it were advertised as a more grounded thing, the views would be different.
The way you are perceived by others depends on your behaviour. If you want to be perceived differently, adjust your behaviour; don't demand others change. They won't.
1 reply →
Many top STEM schools have substantial humanities requirements, so I think they agree with you.
At Caltech they require a total of at least 99 units in humanities or social sciences. 1 Caltech unit is 1 hour of work a week for each week of the term, and a typical class is 9 units consisting of 3 hours of classwork a week and 6 hours of homework and preparation.
That basically means that for 11 of the 12 terms that you are there for a bachelor's degree, you need to be taking a humanities or social sciences class. They require at least 4 of those to be in humanities (English, history, history and philosophy of science, humanities, music, philosophy, and visual culture), and at least 3 to be in social sciences (anthropology, business economics and management, economics, law, political science, psychology, and social science).
At MIT they have similar, but more complicated, requirements. They require humanities, art, and social sciences, and they require that you pick at least one subject in one of those and take more than one course in it.
I worked for someone who I believe was undergrad philosophy and then got a masters in CS.
On a related note, the most accomplished people I've met didn't have degrees in the fields where they excelled and won awards. They were all philosophy majors.
Teaching people to think is perhaps the world's most under-rated skill.
Well, yes, but the other 90%+ just need to get a job out of college to support their addiction to food and shelter, not to be a "better citizen of the world", unless they have parents to subsidize their livelihood, either through direct transfers of money or by letting them stay at home.
I told both of my (step)sons that I would only help them pay for college or trade school - their choice - if they were getting a degree in something “useful”. Not philosophy, not Ancient Chinese Art History etc.
I also told them that they would have to get loans in their own names and I would help them pay off the loans once they graduated and started working gainfully.
My otherwise ordinary school applied the mentality that students must "Learn to learn", and that mix of skills and mindset has never stopped helping me.
I think good historians are on the same foot as philosophers in the arena of "thinking really fucking hard and making an airtight analysis".
I would say you have some bias.
yes, sometimes you need people who can grasp the tech and talk to managers. They might be intermediaries.
But don't ignore the nerdy guys who have been living deeply in a tech ecosystem all their lives. The ones who don't dabble in everything. (the wozniaks)
A professor in my very first semester had a name for attempts to go straight to the code without decomposing the problem from a business or user perspective: "crazy finger syndrome". It was a long time ago. It was a CS curriculum.
I miss her jokes against anxious nerds that just wanted to code :(
Don't forget the rise of boot camps where some educators are not always aligned with some sort of higher ethical standards.
> "crazy finger syndrome" - the attempts to go straight to the code without decomposing the problem from a business or user perspective
Years ago I started on a new team as a senior dev, and did weeks of pair programming with a more junior dev to intro me to the codebase. His approach was maddening; I called it "spray and pray" development. He would type out lines or paragraphs of the first thing that came to mind just after sitting down and opening an editor. I'd try to talk him into actually taking even a few minutes to think about the problem first, but it never took hold. He'd be furiously typing, while I would come up with a working solution without touching a keyboard, usually with a whiteboard or notebook, but we'd have to try his first. This was c++/trading, so the type-compile-debug cycle could be tens of minutes. I kept relaying this to my supervisor, and after a few months of this he was let go.
I make a point to solve my more difficult problems with pen and paper drawings and/or narrative text before I touch the PC. The computer is an incredibly distracting medium to work with if you are not operating under clear direction. Time spent on this forum is a perfect example.
Memorization and closed-book tests are important in some areas. When seconds count, an ER doctor cannot go look up how to treat a heart attack. That doctor also needs to know not only how to treat the common heart attack, but how to recognize that this isn't the common heart attack but the 1-in-10,000 not-a-heart-attack with exactly the same symptoms, and give it the correct treatment.
However most of us are not in that situation. It is better for us to just look up those details as we need them because it gives us more room to handle a broader variety of situations.
Humans will never outcompete AI in that regard, however. Industry will eventually optimize for humans and AI separately: AI will know a lot and think quickly, humans will provide judgment and legal accountability. We're already on this path.
Speaking with a relative who is a doctor recently it’s interesting how much each of our jobs are “troubleshooting”.
Coding, doctors, plumber… different information, often similar skill sets.
I worked a job doing tech support for some enterprise level networking equipment. It was the late 1990s and we were desperate for warm bodies. Hired a former truck driver who just so happened to do a lot of woodworking and other things.
Great hire.
Everyone going through STEM needs to see the movie Hidden Figures for a variety of reasons, but one bit stands out as poignant: I believe it was Katherine Johnson, who is asked to calculate some rocket trajectory to determine the landing coordinates, thinks on it a bit and finally says, "Aha! Newton's method!" Then she runs down to the library to look up how to apply Newton's method. She had the conceptual tools to find a solution, but didn't have all the equations memorized. Having all the equations in short term memory only matters in a (somewhat pathological) school setting.
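(For anyone who, like that scene, remembers the concept but not the details: Newton's method is just "follow the tangent line to its zero crossing, repeat". A minimal sketch in Python, with illustrative names:)

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        # Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)
        x = x0
        for _ in range(max_iter):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("did not converge")

    # e.g. sqrt(2) as the positive root of x^2 - 2
    assert abs(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0) - 2 ** 0.5) < 1e-9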
My favorite professor in my physics program would say, "You will never remember the equations I teach. But if you learn how the relationships are built and how to ask questions of those relationships, then I have done my job." He died a few years ago. I never was able to thank him for his lessons.
You just did.
Being resourceful is an extremely valuable skill in the real world, and basically shut out of the education world.
Unlike my teachers, none of my bosses ever put me in an empty room with only a pencil and a sheet of paper to solve given problems.
> My favorite professor in engineering school always gave open book tests.
My experience as a professor and a student is that this doesn't make any difference. Unless you can copy verbatim the solution to your problem from the book (which never happens), you better have a good understanding of the subject in order to solve problems in the allocated time. You're not going to acquire that knowledge during your test.
> My experience as a professor and a student is that this doesn't make any difference.
Exactly the point of his test methodology.
What he asked of students on a test was to *apply* knowledge and information to *unique* problems and create a solution that did not exist in any book.
I only brought 4 things to his tests --- textbook, pencil, calculator and a capable, motivated and determined brain. And his tests revealed the limits of what you could achieve with these items.
Isn't this an argument for why you should allow open book tests rather than why you shouldn't? It certainly removes some pressure to remember some obscure detail or formula.
Isn't that just an argument for always doing open book tests, then? Seems like there's no downside, and as already mentioned, it's closer to how one works in the real world.
During some of the earlier web-service development days, one would find people at F500 companies skating by in low-to-mid-level jobs just cutting and pasting between spreadsheets; things that took them hours could be done in seconds, and with lower error rates, with a proper data interface.
Very anecdotally, but I'd hazard that most of these types of low-hanging-fruit, low-value-add roles are much less common now, since they tended to be blockers for operational improvement. Six Sigma, Lean, and various flavors of Agile would often surface these low performers, and they either improved or were shown the door between 2005 and 2020.
Not that everyone is 100% all the time, every day, but what we are left with is often people that are highly competent at not just their task list but at their job.
I had a like-minded professor in university, ironically in AI. Our big tests were all 3-day take-home assignments. The questions were open-ended and required writing code, processing data and analyzing results.
I think the problem with this is that it requires the professor to mentally fully engage when marking assignments and many educators do not have the capacity and/or desire to do so.
Sadly, I doubt 3-day take-home assignments have much future as a means of assessment in the age of LLMs.
Might be true, idk? For all we know that professor now gives a 2.5-day take-home assignment where students are allowed to use LLMs, and then assesses them in a 1-hour oral exam where they need to explain their approach, their results, and how they ensured those results are accurate?
I don't think the 3-day take home is the key. It's supporting educators to have the intention, agency and capacity to improvise assessment.
It depends what level the education is happening at. Think of it like students being taught how to do for loops but are just copying and pasting AI output. That isn't learning. They aren't building the skills needed to debug when the AI gets something wrong with a more complicated loop, or understand the trade offs of loops vs recursion.
Finding the correct balance for a given class is hard. Generally, the lower the education level, the more it should be closed-book, because the more it is about being able to manually solve the smaller challenges that are already well solved, so you build up the skills needed to even tackle the larger challenges. The higher the education level, the more it is about being able to apply those skills to tackle a problem, and one of those skills is being able to pull relevant formulas from the larger body of known formulas.
Agreed coming from the ops world also.
I've had a frustrating experience the past few years trying to hire junior sysadmins because of a real lack of problem solving skills once something went wrong outside of various playbooks they memorized to follow.
I don't need someone who can follow a pre-written playbook, I have ansible for that. I need someone that understands theory, regardless of specific implementations, and can problem solve effectively so they can handle unpredictable or novel issues.
To put another way, I can teach a junior the specifics of bind9 named.conf, or the specifics of our own infrastructure, but I shouldn't be expected to teach them what DNS in general is and how it works.
But the candidates we get are the opposite - they know specific tools, but lack more generalized theory and problem solving skills.
Same here! I always like to say that software engineering is 50% knowing the basics (How to write/read code, basic logic) and 50% having great research skills. So much of our time is spent finding documentation and understanding what it actually means as opposed to just writing code.
You cannot teach "how to think". You have to give students thinking problems to actually train thinking. Those kinds of problems can increasingly be farmed off to AI, or at least certain subproblems in them.
I mean, yes, to an extent you can teach how to think: critical thinking and logic are topics you can teach, and people who take their teaching to heart can become better thinkers. However, those topics cannot impart creativity. Critical thinking is called exactly that because it's about tools and skills for separating bad thinking from good thinking. The skill of generating good thinking probably cannot be taught; it can only be improved with problem-solving practice.
> In the real world of work, everyone has full access to all of the available data and information.
In general, I also attend your church.
However, as I preached in that church, I had two students over the years.
* One was from an African country and told me that where he grew up, you could not "just look up data that might be relevant" because internet access was rare.
* The other was an ex US Navy officer who was stationed on a nuclear sub. She and the rest of the crew had to practice situations where they were in an emergency and cut off from the rest of the world.
Memorization of considerable amounts of data was important to both of them.
Each one of us has a mental toolbox that we use to solve problems. There are many more tools that we don’t have in our heads that we can look up if we know how.
The bigger your mental toolbox the more effective you will be at solving the problems. Looking up a tool and learning just enough to use it JIT is much slower than using a handy tool that you already masterfully know how to use.
This is as true for physical tools as for programming concepts like algorithms and data structures. In the worst case you won’t even know to look for a tool and will use whatever is handy, like the proverbial hammer.
People have been saying that since the advent of formal education. Turns out standardized education is really hard to pull off and most systems focus on making the average good enough.
It’s also hard to teach people “how to think” while at the same time teaching them practical skills - there’s only so many hours in a day, and most education is setup as a way to get as many people as possible into shape for taking on jobs where “thinking” isn’t really a positive trait, as it’d lead to constant restructuring and questioning of the status quo
While there’s no reasonable way to disagree with the sentiment, I don’t think I’ve ever met anyone who can “think and decompose problems” who isn’t also widely read, and knows a lot of things.
Forcing kids to sit and memorize facts isn’t suddenly going to make them a better thinker, but much of my process of being a better thinker is something akin to sitting around and memorizing facts. (With a healthy dose of interacting substantively and curiously with said facts)
> Everyone has full access to all of the available data and information
Ahh, but this is part of the problem. Yes, they have access, but there is -so much- information, it punches through our context window. So we resort to executive summaries, or convince ourselves that something that's relevant is actually not.
At least an LLM can take in the full context in aggregate and peel out signal. There is value there, but no jobs are being replaced.
>but no jobs are being replaced
I agree that an LLM is a long way from replacing most any single job held by a human in isolation. However, what I feel is missed in this discussion is that it can significantly reduce the total manpower by making humans more efficient. For instance, the job of a team of 20 can now be done by 15 or maybe even 10 depending on the class of work. I for one believe this will have a significant impact on a large number of jobs.
Not that I'm suggesting anything be "stopped". I find LLM's incredibly useful, and I'm excited about applying them to more and more of the mundane tasks that I'd rather not do in the first place, so I can spend more time solving more interesting problems.
Also, some problems don't have enough data for a solution. I had a professor who gave tests where the answer was sometimes "not solvable." Taking these tests was like sweating bullets, because you were never sure whether you were just too dumb to solve the problem or there was not enough data to solve it. Good times!
One of my favorite things about Feynman interviews/lectures is often his responses are about how to think. Sometimes physicists ask questions in his lectures and his answer has little to do with the physics, but how they're thinking about it. I like thinking about thinking, so Feynman is soothing.
I agree with the overall message, but I will say that there is still a great deal of value in memorisation. Memorising things gives you more internal tools to think in broader chunks, so you can solve more complicated problems.
(I do mean memorisation fairly broadly, it doesn't have to mean reciting a meaningless list of items.)
Agree, hopefully this insight / attitude will become more and more prevalent.
For anyone looking for resources, may we recommend:
* The Art of Doing Science and Engineering by Richard Hamming (lectures are available on YouTube as well)
* Measurement by Paul Lockhart (for teaching mindset)
Talk is cheap. Good educators cost money, and America famously underpays (and under-appreciates) its teachers. Does he also support increasing taxes on the wealthy?
Even more broadly, it's "critical thinking," which definitely seems to be on the decline (though I'm sure old people have said this for millennia)
Have there been studies about abilities of different students to memorize information? I feel this is under-studied in the world of memorizing for exams
Yeah. Memorization and trivial knowledge is an optimization mechanism.
It is tough, though. I'd like to think I learned how to think analytically and critically. But thinking is hard, and oftentimes I catch myself trying to outsource my thinking almost subconsciously. I'll read an article on HN and think "Let's go to the comment section and see what the opinions to choose from are", or one of my first instincts after encountering a problem is googling, and now asking an LLM.
Most of us are also old enough to have had a chance to develop taste in code and writing. Many of the young generation lack the experience to distinguish good writing from LLM drivel.
Wanted to chime in on the educational system. In the West, we have the "banking" model, which treats a student as a bank account and knowledge as currency - hence the "dump more info into people to make them smart" attitude.
In developing areas, they often implement more modern models, since everything is newer and they're free to implement newer things.
Those newer models focus more on exactly this: teach a person how to go through the process of finding solutions, rather than "knowing a lot to enable the process of thinking".
Not saying which is better or worse, but reading this comment and the article reminds me of this.
A lot of people I see know tons of interesting things, but anything outside of their knowledge is a complete mystery.
All the while, people from developing areas learn to solve issues; a lot of individuals from there also get out of their poverty and do really well for themselves.
Of course, this is a generalization and doesn't hold up in all cases, but I can't help thinking about it.
A lot of my colleagues don't know how to solve problems simply because they don't RTFM. They rely on knowledge from their education, which is already outdated before they even sign up. I try to teach them to RTFM; it seems hopeless. They look down on me because I have no papers, but if shit hits the fan, they come to me to solve the problem.
A wise guy I met once said (likely not his words): there are two types of people - those who think in problems, and those who think in solutions.
I'd relate that to education, not prebaked human properties.