
Comment by Mountain_Skies

1 day ago

I look forward to the day when software engineers have the autonomy that licensed engineers have, so they can tell managers no and if the manager goes around the engineer, the manager and the company end up directly liable for the damage they create.

These are in fact the same thing. It is precisely because an engineer can be held liable that they are willing to say no. In practice they probably won't be prosecuted, and a common reason is that there will be written records of engineers telling management about concerning risks. This is also where the role of the Professional Engineer comes from: a person who legally puts themselves on the line. They get paid very well, and for good reason, because they have a lot on the line themselves.

I suspect a big reason CS is not held to the same standards is that it is more abstract and still new. But we do live in a time where bad code can get people killed (control systems are the easiest examples), just as a faulty bridge can. I just hope we don't need a Tacoma Narrows to force change. Obviously it is harder to evaluate things that are more abstract, like social media (which can do both good and harm).

But I'd say you can always say no. If you're not saying "no" now, that's still a choice you've made. A job is very persuasive, and I'm not saying you're bad for keeping your head down, just that people should consider where they'd draw the line. The line is personal and different for everyone (which is okay!). Having taken traditional engineering courses, I'll note that ethics is frequently discussed, and you're likely to be told to define your line before you're asked to cross it. If you don't, you'll likely cross the line without knowing, because you didn't know what it looked like. You can always redefine the line as you get more resolution (it continuously updates), but it's much harder to say "no" when you haven't given it much thought.

  • The main reason we are where we are is that it is possible to build very complex software systems cheaply: both the tools and the building blocks are abundant and available to everyone.

    If an engineer tried to build a skyscraper or a bridge, they'd face challenges well beyond knowledge or professional certification.

    And to your point, if anyone ever asked an engineer to insert another floor between the 8th and 9th floors of a 15-story building, they'd laugh at them. In software engineering, this is possible, even if hard.

    And finally, because software lives a very different life, it will be hard to define criteria for good software.

    • > And to your point, if anyone ever asked an engineer to insert another floor between the 8th and 9th floors of a 15-story building, they'd laugh at them. In software engineering, this is possible, even if hard.

      Bingo.

      For building engineers this is chuckle-worthy. For software engineers, it's Wednesday.

    • > And to your point, if anyone ever asked an engineer to insert another floor between the 8th and 9th floors of a 15-story building, they'd laugh at them. In software engineering, this is possible, even if hard.

      Ah yes, another cocktail party idea [1] where a software engineer pretends to understand civil engineering.

      [1] https://danluu.com/cocktail-ideas/


    • I think you misinterpreted; I mostly agree. But people do program things like cars, planes, and other systems that can literally cost human lives.

      The judgment isn't made on whether mistakes happen, but on whether those who built the thing should have known better. You don't get sued when you legitimately didn't know, but you can't be infinitely stupid either. It's about whether you should have known. That includes not spending enough time or research determining whether something is safe, because you can't just avoid answering a question a reasonable person would have asked. And it has to lead to harm being done.

      It helps to look at what the lawsuits are actually about, like the Takata airbags case[0], where they were charged over known issues. It's for knowingly doing something bad. But you also can't avoid asking questions: in the Challenger disaster[1], both NASA and Thiokol ignored signs that the O-rings were potentially dangerous and dismissed concerns from engineers. While they didn't know for certain that the O-rings were defective in cold weather, they should have known.

      With more abstract stuff like social media, yeah, we're not looking at clear-cut cases of harm. No one is going to be prosecuted for inventing or even building social media. But you can have knowingly harmful practices, like manipulating users' feeds without consent to test whether you can make them happier or sadder[2]. The issue isn't that the experiment happened, but that it was run on humans who did not knowingly give consent. You couldn't do that sort of thing offline; offline, you need consent before experimenting. And you can't just declare that users are subject to any experimentation, with no warning and indefinitely, because you should be asking whether your experiments might harm people, and there's a reasonable belief that they might.

      And on the other hand, no one is asking that the devs at Wikipedia be sued or lose their programming license just because they shipped a dark mode whose radio button has a "system" option but defaults to "light", nor because they didn't go to the lengths it would take to make sure all images render properly in dark mode. These don't cause harm. They're annoying and easy to fix, but no real harm has been done. Just petty issues.

      It can definitely be fuzzy at points, especially while all of this is new, but it's also okay for things to become more defined over time as we learn more. The point is to keep ethics in mind and to think about the consequences of your work. No one is in trouble as long as no one gets hurt, but you can't walk around never considering the consequences of your actions. It's the work version of not accepting the excuse "I was just following orders" or "I was just delivering them, I didn't know what was in them." This is not some belief that people should be sued just because they wrote shitty code. But they could be, IF someone gets hurt AND you used AI to write the code because it was cheaper than a person AND you knew the code could harm someone.

      [0] https://www.justice.gov/criminal/criminal-vns/case/united-st...

      [1] https://en.wikipedia.org/wiki/Space_Shuttle_Challenger_disas...

      [2] https://techcrunch.com/2014/06/29/ethics-in-a-data-driven-wo...