Comment by andrew_lettuce
2 days ago
We all know the AI part is largely meaningless because of the hype and nonsense, but what defines you as an engineer? When you consider that classical engineers are responsible for the correctness of their work, combining it with AI seems like a joke.
> "When you consider that classical engineers are responsible for the correctness of their work"
Woah hang on, I think this betrays a severe misunderstanding of what engineers do.
FWIW I was trained as a classical engineer (mechanical), but pretty much just write code these days. But I did have a past life as a not-SWE.
Most classical engineering fields deal with probabilistic system components all of the time. In fact I'd go as far as to say that inability to deal with probabilistic components is disqualifying from many engineering endeavors.
Process engineers for example have to account for human error rates. On a given production line with humans in a loop, the operators will sometimes screw up. Designing systems to detect these errors (which are highly probabilistic!), mitigate them, and reduce the occurrence rates of such errors is a huge part of the job.
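To put rough numbers on it, the back-of-envelope math is something like this sketch (both rates are invented for illustration):

    # Defect math for a hypothetical human-in-the-loop assembly step;
    # both rates below are made up for illustration.
    p_operator_error = 0.002   # per unit: operator inserts the part wrong
    p_inspection_catch = 0.95  # an automated check catches a bad insert

    # Only errors that slip past the inspection reach the customer.
    escaped = p_operator_error * (1 - p_inspection_catch)
    print(f"escaped defects per million units: {escaped * 1e6:.0f}")

Detection doesn't make the error rate zero; it multiplies it down, and that's exactly the kind of tradeoff you design around.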
Likewise, even for regular mechanical engineers, there is probabilistic variance in everything you manufacture. Your specs always come with tolerances (this metal sheet is 1 mm thick ± 0.05 mm) because of this. Every design you work on specifically accounts for it (hence safety margins!). The ways in which these probabilities combine and interact is a serious field of study.
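As a minimal sketch of that interaction (dimensions invented, not from any real design), compare worst-case and root-sum-square stack-up of a few toleranced parts:

    import math

    # Hypothetical stack of three mating parts: (nominal_mm, tolerance_mm).
    stack = [(1.0, 0.05), (12.0, 0.10), (3.5, 0.08)]

    nominal = sum(n for n, _ in stack)

    # Worst case: every part simultaneously at its tolerance limit.
    worst_case = sum(t for _, t in stack)

    # Statistical (RSS) stack-up: independent variations partially cancel,
    # so the combined tolerance is the root sum of squares.
    rss = math.sqrt(sum(t ** 2 for _, t in stack))

    print(f"nominal {nominal:.2f} mm, worst case ±{worst_case:.2f} mm, RSS ±{rss:.2f} mm")

The statistical figure is tighter than the worst case precisely because the variations are probabilistic and independent.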
Software engineering is unlike traditional engineering disciplines in that, for most of its lifetime, it has had the luxury of purely deterministic expectations. Nearly every other type of engineering has no such luxury.
If anything the advent of ML has introduced this element to software, and the ability to actually work with probabilistic outcomes is what separates people who are serious about this stuff from demoware hot air blowers.
You're right descriptively, but I think the parent comment's point is about correctness, not determinism.
In other engineering fields correctness-related-guarantees can often be phrased in probabilistic ways, e.g. "This bridge will withstand a 10-year flood event but not a 100-year flood event", but underneath those guarantees are hard deterministic load estimates with appropriate error margins.
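Concretely, the deterministic arithmetic underneath such a guarantee might look like this (all numbers hypothetical):

    # Hypothetical numbers: the probabilistic guarantee ("withstands a
    # 10-year flood") bottoms out in deterministic checks like this one.
    ten_year_flood_load_kN = 1200.0  # estimated load from a 10-year event
    safety_factor = 1.5              # margin for estimation error
    member_capacity_kN = 2000.0      # what the structure is built to carry

    design_load = ten_year_flood_load_kN * safety_factor
    print("withstands 10-year event:", member_capacity_kN >= design_load)

The probability lives in the flood model; the engineering check itself is deterministic.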
And I think that's where the core disagreement between you and the parent comment lies. I think they're trying to say that people pushing AI-generated code are often fuzzy about speccing out the behavior guarantees of their own software. In some ways the software industry has _always_ been bad at this: despite working with deterministic math, surprise software bugs are plentiful. But vibe-coding takes this to another level.
(This is my best-case charitable understanding of what they're saying, but also happens to be where I stand)
> "I think they're trying to say AI generated code-pushers are often getting fuzzy on speccing out the behavior guarantees of their own software."
I agree, and I think that's the root of the years-long argument of whether programmers are "real" engineers, where "real engineering" implies a level of rigor about the existence of and adherence to specifications.
My take, though, is that this unseriousness has little to do with AI and everything to do with the longstanding culture of software generally. In fact I'd go as far as to say that pre-LLM ML was better about this than the rest of the industry at large.
I've had the good fortune to be working in this realm since before LLMs became the buzzword - most ML teams had well-quantified model behaviors! They knew their precision and recall! You kind of had to, because it was very hard to get models to do what you wanted, plus companies involved in this space generally cared about outcomes.
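And "well-quantified" meant something as basic and unglamorous as this (toy labels, obviously):

    # Quantifying a binary classifier the way pre-LLM ML teams had to.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # ground-truth labels (toy data)
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]  # model predictions (toy data)

    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    precision = tp / (tp + fp)  # of everything flagged, how much was right
    recall = tp / (tp + fn)     # of everything real, how much was caught

    print(f"precision={precision:.2f} recall={recall:.2f}")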
Then we got LLMs, where you can superficially produce really impressive results with ease, and vibes came to dominate over measured results. I can't stand it either, and I'm mostly just waiting for most of these things to go bust so we can get back to probabilistic systems where we give a shit about quantification.
Nicely said; I'm going to borrow some language here. I've talked a little with my coworkers about how the future of SWE may look more like "build a complex system with AI and test it to death to make sure it fits inside the performance envelope you require".
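A crude sketch of what that kind of acceptance test might look like (the component, eval set, and threshold are all invented):

    import random

    random.seed(0)  # reproducible sketch

    def flaky_component(case):
        # Stand-in for a probabilistic step (say, an LLM call); here it
        # just succeeds ~92% of the time so the sketch is self-contained.
        return random.random() < 0.92

    def passes_envelope(component, cases, min_success_rate=0.90):
        # "Test it to death": run the component over the whole eval set and
        # check the observed success rate against the required envelope.
        successes = sum(1 for case in cases if component(case))
        return successes / len(cases) >= min_success_rate

    print("within envelope:", passes_envelope(flaky_component, range(1000)))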
This seems to me patently absurd, because LLMs are not part of the probabilistic environment of the domain you're engineering; rather, you're injecting new probabilistic inputs into your system. That is a wholly different category, and it wildly misrepresents how an engineer is supposed to operate and think.
> "because LLMs are not part of the probabilistic environment of the domain you're engineering; rather, you're injecting new probabilistic inputs into your system"
You do this as a process engineer too. You don't have to have a human operator insert the stator into the motor housing; you could have a robot do it (at far greater cost) and make the step much more deterministic.
After the stator is in the housing, you don't need a human operator to close it using a hand tool. You could do it robotically, in which case the odds of failure are much lower. That also costs a lot.
You choose to insert probabilistic components into the system because you've evaluated the tradeoffs around it and decided it's worth it.
Likewise you could do sentiment analysis of a restaurant review in a non-probabilistic manner - there are many options! But you choose a probabilistic ML model because it does a better job overall and you've evaluated the failure modes.
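One of those non-probabilistic options is a transparent keyword rule (word lists invented here), whose failure modes are obvious but whose coverage is limited:

    POSITIVE = {"great", "delicious", "friendly"}
    NEGATIVE = {"cold", "rude", "bland"}

    def rule_based_sentiment(review: str) -> str:
        # Deterministic: the same review always yields the same answer, and
        # every misclassification is explainable from the word lists.
        words = set(review.lower().split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    # Misses sarcasm, negation ("not great"), and anything off-list.
    print(rule_based_sentiment("the staff were friendly but the soup was bland"))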
These things really aren't that different.
This comment is excellent.
I will be thinking about this comment for a bit. Thanks for this perspective!
Hard to tell what you're even trying to say here. I am obviously responsible for the correctness of my work. "AI Engineer" does not generally mean "AI-Assisted Engineer"; I thought that was clear from my post.