Comment by efitz
2 months ago
Most people have an intuitive sense that leads them to ask questions like "If I do this, will someone be harmed? Who? How much harm? What kind of harm?", and the answers factor into moral decisions.
Almost everyone, even people without a moral sense, has a self-preservation sense: "How likely is it that I will get caught? If I get caught, will I get punished? How bad will the punishment be?" These factor into a personal risk decision. Laws, among their other purposes, are a convenient way to inform people of the risks ahead of time, in hopes of deterring undesirable behavior.
But most people aren't sociopaths, and while they might make fuzzy moral decisions about low-harm, low-risk activities, they will shy away from high-harm or high-risk activities, out of moral sense, self-preservation sense, or both.
"Stealing from rich companies" is just a cope. In the case of an exploit against a large company, real innocent people can be harmed, even severely. Exposing whistleblowers or dissidents has even resulted in death.
> Most people have an intuitive sense to ask themselves questions like "If I do this, will someone be harmed
How much time do you spend asking yourself whether your paycheck is coming from a source that causes harm? Or whether the code you have written will be used directly or indirectly to cause harm? Pretty much everyone in tech is responsible for great harm by this logic.
I actually think about it a lot:
https://news.ycombinator.com/item?id=42540862#42542151
Great, would you be surprised that most of us don't?
Most will just take the 500k paycheck and work at whatever the next big tech thing is.
There's some chance that thing is autonomous drones or something like that...
That's definitely a factor at least some people consider when choosing their job.
> Pretty much everyone in tech is responsible for great harm by this logic.
We're also responsible for great good. The question of which is greater is tricky, case-by-case, and subjective.
It's a grey zone.
If Mr GRU asks, I'd probably say no.
If the CIA, Mossad, or BND asks, maybe I'd say yes? It's not clear even for a person with a better moral compass than mine.
> ...has even resulted in death
I wish developers (and their companies, tooling, industry, etc.) creating such flaws in the first place would treat the craft with a higher degree of diligence. It bothers me that someone didn't maintain the segregation between display name and global identifier (in the YouTube frontend*), or between global identifier and email address (in the older product), or was in a position to maintain the code without understanding the importance of that intended barrier.
If users knew what a mess most software these days looks like under the hood (especially with regard to privacy) I think they'd be a lot less comfortable using it. I'm encouraged by some of the efforts that are making an impact (e.g. advances in memory safety).
(*Seems like it wouldn't have been as big a deal if the architecture at Google relied more heavily on product-encapsulated account identifiers instead of global ones)
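The product-encapsulated identifiers mentioned above could take many forms; one common pattern is deriving a per-product pseudonymous ID from the global one with a keyed one-way function. This is a minimal illustrative sketch, not Google's actual scheme; the function name, key handling, and HMAC-based derivation are all assumptions for the example.

```python
import hashlib
import hmac

def product_scoped_id(global_account_id: str, product_key: bytes) -> str:
    """Derive a stable, per-product pseudonymous identifier (hypothetical sketch).

    The derivation is one-way: without product_key (held server-side,
    one key per product), the global account ID cannot be recovered,
    and IDs leaked from different products cannot be linked together.
    """
    mac = hmac.new(product_key, global_account_id.encode("utf-8"), hashlib.sha256)
    return mac.hexdigest()

# The same account gets different, unlinkable IDs in each product...
youtube_id = product_scoped_id("account-12345", b"youtube-demo-key")
email_id = product_scoped_id("account-12345", b"email-demo-key")
assert youtube_id != email_id

# ...but the ID is stable within a product, so it still works as a key.
assert youtube_id == product_scoped_id("account-12345", b"youtube-demo-key")
```

With a scheme like this, an exploit that exposes a frontend's identifiers would reveal only product-local pseudonyms, so the blast radius stays contained to that one product.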