Comment by xboxnolifes
1 month ago
Most of the world does work this way. Problems are solved within certain conditions and for use over a certain time frame. Once those change, the problem gets revisited.
Most software gets to take this further than many engineering fields, since there isn't physical danger. It's telling that the counterexamples always involve potentially dangerous domains like medicine or nuclear engineering. The software in those fields is held to more stringent standards.
The "certain conditions" is wildly different for software engineers since there are virtually no laws or professional guidelines restricting them.
> Most software gets to take this further than many engineering fields, since there isn't physical danger
But there is physical danger. It's just abstracted away from the engineer. The engineer writing a video card driver doesn't see any physical danger, but the display that driver powers may be the one showing someone a warning that they're about to be shot by an assailant. That's one example out of a billion possible ones, because you do not control what your software will eventually be used for. So it's unethical to make decisions based purely on one's personal interests, because what's at stake is much larger.
> It's telling that the counterexamples always involve potentially dangerous domains like medicine or nuclear engineering. The software in those fields is held to more stringent standards.
As someone who's worked in those fields: not really. Submit a form saying you did some black-box testing, and whatever software you want (even software whose inner workings you don't understand) gets approved for a medical device. Nuclear is also scarily vulnerable. The software controlling other critical systems is even less robust: just look at the decades of failures in SCADA, and realize IoT is even worse.