Comment by Waterluvian
16 hours ago
I think if your job is to assemble a segment of a car based on a spec using provided tools and pre-trained processes, it makes sense if you worry that giant robot arms might be installed to replace you.
But if your job is to assemble a car in order to explore what modifications to make to the design, experiment with a single prototype, and determine how to program those robot arms, you’re probably not thinking about the risk of being automated.
I know a lot of counter arguments are a form of, “but AI is automating that second class of job!” But I just really haven’t seen that at all. What I have seen is a misclassification of the former as the latter.
A software engineer with an LLM is still infinitely more powerful than a commoner with an LLM. The engineer can debug, guide, change approaches, and give very specific instructions if they know what needs to be done.
The commoner can only hammer the prompt repeatedly with "this doesn't work can you fix it".
So yes, our jobs are changing rapidly, but this doesn't strike me as being obsolete any time soon.
I listened to a segment on the radio where a college teacher told their class that it was okay to use AI to assist them during a test, provided they:
1. Declare in advance that AI is being used.
2. Provide the question-and-answer session verbatim.
3. Explain why the answer given by the AI is a good answer.
Part of the grade will come from grading 1, 2, and 3.
Fair enough.
It’s better than nothing, but the problem is that students will figure out they can feed step 2 right back to the AI in another session to get step 3.
This is actually a great way to foster the learning spirit in the age of AI. Even if the student uses AI to arrive at an answer, they will still need to, at the very least, ask the AI for an explanation that teaches them how it arrived at the solution.
Props to the teacher for putting in the work to thoughtfully grade an AI transcript! As I typed that, I wondered if a lazy teacher might then use AI to grade the student's AI transcript?
That's roughly what we did as well. Use anything you want, but in the end you have to be able to explain the process and the projects are harder than before.
If we can do more now in a shorter time then let's teach people to get proficient at it, not arbitrarily limit them in ways they won't be when doing their job later.
I think it's a bit like the Dunning-Kruger effect. You need to know what you're even asking for and how to ask for it. And you need to know how to evaluate if you've got it.
This actually reminds me so strongly of the Pakleds from Star Trek TNG. They knew they wanted to be strong and fast, but the best they could do was say, "make us strong." They had no ability to evaluate that their AI (sorry, Geordi) was giving them something that looked strong but simply wasn't.
Oh wow this is a great reference/image/metaphor for "software engineers" who misuse these tools - "the great pakledification" of software
Yep, I've seen a couple of folks pretending to be junior PMs, thinking they can replace developers entirely. The problem is, they can't write a spec. They can define a feature at a very high level, on a good day. They resort to asking one AI to write them a spec that they feed to another.
It's slop all the way down.
People have tried that with everything from COBOL to low code. It's even succeeded in some problem domains (e.g. the things people code with spreadsheet formulas), but there is no general solution that replaces programmers entirely.
Agree totally.
My job is to make people who have money think I'm indispensable to achieving their goals. There's a good chance AI can fake this well enough to replace me. Faking it would be good enough in an economy with low levels of competition; everyone can judge for themselves if this is our economy or not.
I mean it sounds to me like a beautiful corporate poison. :)
I don’t think this is the issue “yet”. It’s that no matter what class you are, your CEO does not care. Mediocre AI work is enough to give them immense returns and an exit. They’re not looking out for the unfortunate bag holders. The world has always had tolerance for highly distributed crap. See Windows.
This seems like a purely cynical take lacking any substantive analysis.
Despite whatever nasty business practices and shitty UX Windows has foisted on the world, there is no denying the tremendous value that it has brought, including impressive backwards compatibility that rivals some of the best platforms in computing history.
AI shovelware pump-n-dump is an entirely different short term game that will never get anywhere near Microsoft levels of success. It's more like the fly-by-nights in the dotcom bubble that crashed and burned without having achieved anything except a large investment.
You misunderstand me. While I left Windows over a decade ago, I recognize it was a great OS in some aspects. I was referring to the recent AI-fueled Windows developments and ad-riddled experiences. Someone decided that is fine, and you won't see orgs or regular users drop it... tolerance.
You are describing traditional (deterministic?) automation before AI. With AI systems as general as today's SOTA LLMs, they'll happily take on the job regardless of whether the task falls into class I or class II.
Ask a robot arm "how should we improve our car design this year", it'll certainly get stuck. Ask an AI, it'll give you a real opinion that's at least on par with a human's opinion. If a company builds enough tooling to complete the "AI comes up with idea -> AI designs prototype -> AI robot physically builds the car -> AI robot test drives the car -> AI evaluates all prototypes and confirms next year's design" feedback loop, then theoretically this definitely can work.
This is why AI is seen as such a big deal - it's fundamentally different from all previous technologies. To an AI, there is no line that would distinguish class I from II.
This is actually a really good description of the situation. But I will say, as someone who prided myself on being the second one you described, I am becoming very concerned about how much of my work was misclassified. It does feel like a lot of the work I did in the second class is being automated; maybe it previously just overinflated my ego.
SWE is more like formula 1 where each race presents a unique combination of track, car, driver, conditions. You may have tools to build the thing, but designing the thing is the main issue. Code editor, linter, test runner, build tools are for building the thing. Understanding the requirements and the technical challenges is designing the thing.
The other day I said something along the lines of, "be interested in the class, not the instance," and I meant to articulate a sense of metaprogramming and meta-analysis of a problem.
Y is causing Z and we should fix that. But if we stop and study the problem, we might discover that X causes the class of Y problem so we can fix the entire class, not just the instance. And perhaps W causes the class of X issue. I find my job more and more being about how far up this causality tree can I reason, how confident am I about my findings, and how far up does it make business sense to address right now, later, or ever?
Is it? As an F1 fan, I really fail to see the metaphor. The cars do not change that much; only the setup does, based on track and conditions. The drivers are fairly consistent through the season. Once a car is built and a pecking order is established in the season, it is pretty unrealistic to expect a team with a slower car to outcompete a team with a faster car, no matter what track it is (since the conditions affect everyone equally).
Over the last 16 years, Red Bull has won 8 times, Mercedes 7 times, and McLaren once. Which means, regardless of the change in tracks and conditions, the winners are usually the same.
So either every other team sucks at "understanding the requirements and the technical challenges" on a clinical basis or the metaphor doesn't make a lot of sense.
Well, a lot of managers view their employees as doing the former, but they’re really doing the latter.
> I know a lot of counter arguments are a form of, “but AI is automating that second class of job!”
Uh, it's not the issue. The issue is that there isn't that much demand for the second class of job. At least not yet. The first class of job is what feeds billions of families.
Yeah, I'm aware of the lump of labour fallacy.
Discussing what we should do about the automation of labour is nothing new and is certainly a pretty big deal here. But I think you're reframing/redirecting the intended topic of conversation by suggesting that "X isn't the issue, Y is."
It wanders off the path like if I responded with, "that's also not the issue. The issue is that people need jobs to eat."
It depends a lot on the type of industry I would think.