Comment by goostavos
12 days ago
I had my first interview last week where I finally saw this in the wild. It was a student applying for an internship. It was the strangest interview. They had excellent textbook knowledge. They could tell you the space and time complexities of any data structure, but they couldn't explain anything about code they'd written or how it worked. After many painful and confusing minutes of trying to get them to explain, like, literally anything about how this thing on their resume worked, they finally shrugged and said that "GenAI did most of it."
It was a bizarre disconnect: someone both highly educated and yet crippled by never having actually done the work.
Sounds a little bit like the stories from Feynman, e.g.: https://enlightenedidiot.net/random/feynman-on-brazilian-edu...
The students had memorized everything, but understood nothing. Add in access to generative AI, and you have the situation that you had with your interview.
It's a good reminder that what we really do, as programmers or software engineers or whatever you want to call it, is understand how computers and computations work.
There's a quote I love from Feynman: "The first principle is that you must not fool yourself, and you are the easiest person to fool."
I have no doubt he'd be repeating it loudly now, given that we live in a time when we've built machines that are optimized to fool us.
It's probably also worth reading Feynman's Cargo Cult Science: https://sites.cs.ucsb.edu/~ravenben/cargocult.html
This is the kind of interaction that makes me think there are only 2 possible futures:
Star Trek or Idiocracy.
Hmmm, I think we're more likely to face an Idiocracy outcome. We need more Geordi La Forges out there, but we've got a lot of Fritos out there vibe coding the next Carl's Jr. locating app instead.
We would be lucky to get Idiocracy. President Camacho had a huge problem, found the smartest person in the country, and got him working on it. If only we could do that.
Star Trek illustrated the issue nicely in the scene where Scotty, who we should remember is an engineer, tries to talk to a computer mouse in the 20th century: https://www.youtube.com/watch?v=hShY6xZWVGE
Except that falls apart 2 seconds later when Scotty shocks the 20th-century engineers by being blazing fast with a keyboard.
Lots of theory but no practice.
More like using a calculator but not being able to explain how to do the calculation by hand. A probabilistic calculator which is sometimes wrong at that. The "lots of theory but no practice" has always been true for a majority of graduates in my experience.
Sure, new grads are light on experience (particularly relevant experience), but they should have student projects and whatnot that they can explain, particularly for coding. Hardware projects are rarer simply because parts cost money and schools have limited budgets, but software has far fewer demands.
This is exactly the end state of hiring via Leetcode.
Makes me wonder if the hardware engineers look at software engineers and shrug, “they don’t really know how their software really works.”
Makes me wonder if C programmers look at JS programmers and shrug, “they don’t understand what their programs are actually doing.”
I’m not trying to be disingenuous, but I also don’t see a fundamental difference here. AI lets programmers express intent at a higher level of abstraction than ever before. So high, apparently, that it becomes debatable whether it is programming at all, or whether it takes any skill, or requires education or engineering knowledge any longer.
Wait, so they could, say, write out a linked list, or bubble sort, but not understand what it was doing? Like no mental model of memory, registers, or intuition for execution order, or even something conceptual like a graph walk? Just "zero" on the conceptual front, but able to reproduce data structures and some algorithm for accessing or traversing them, and give rote O notation answers about how long execution takes?
Just checking I have that right... I think that's what you were implying, but I just want to make sure that's what you meant. If so
... that ... is ... wow ...
If I'm understanding correctly, I don't think what you're saying is quite right. They had a mental model of the algorithms, but the code they "produced" was completely generated by AI, and they had no knowledge of how the code actually modeled the algorithms.
Knowing the complexity of bubble sort is one skill, being able to write code that performs bubble sort is a second, and being able to look at a function with the signature `void do_thing(int[] items)` and determine both that it's bubble sort and its time complexity in terms of the input array is a third. It sounds like they had the first skill, used an AI to fake the second, and had no way of doing the third.
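For what it's worth, here's a minimal sketch of what that third skill asks for, assuming Java (the `int[]` signature reads like Java; `do_thing` and the class name are just hypothetical names for illustration):

```java
// Hypothetical example: an unlabeled function whose behavior you have to
// recognize by reading the code, not by recalling a textbook fact.
class Mystery {
    static void do_thing(int[] items) {
        for (int i = 0; i < items.length - 1; i++) {
            // each pass bubbles the largest remaining element to the end
            for (int j = 0; j < items.length - 1 - i; j++) {
                if (items[j] > items[j + 1]) {
                    int tmp = items[j];        // swap an adjacent out-of-order pair
                    items[j] = items[j + 1];
                    items[j + 1] = tmp;
                }
            }
        }
    }
}
```

The tells are the repeated passes over the same array and the adjacent-element swaps: that pattern is bubble sort, and the nested loops bound it at O(n^2) comparisons no matter what the function is called. Someone with only the first skill can recite "bubble sort is O(n^2)" but has no path from this code to that conclusion.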