Comment by bumby
5 years ago
>Coding at scale is about managing complexity.
I would extend this one level higher to say managing complexity is about managing risk. Risk is usually what we really care about.
From the article:
>any one person's opinions about another person's opinions about "clean code" are necessarily highly subjective.
At some point CS as a profession has to find the right balance of art and science. There's room for both. Codifying certain standards is the domain of professions (in the truest sense of the word) and not art.
Software often likens itself to traditional engineering disciplines. Those traditional engineering disciplines manage risk through codified standards built through industry consensus. Somebody may build a pressure system that doesn't conform to standards. They don't get to say "well your idea of 'good' is just an opinion so it's subjective". By "professional" standards they have built something outside the acceptable risk envelope and, if it's a regulated engineering domain, they can't use it.
This isn't to say a coder has to follow rigid rules constantly, or that the field needs a regulatory body, but that the practice of deviating from standardized best practices should be communicated in terms of risk rather than dismissed as just subjective.
A lot of "best practices" in engineering were established empirically, after root cause analysis of failures and successes. Software is more or less evolving along the same path (structured programming, OOP, higher-than-assembly languages, version control, documented ISAs).
Go back to earlier machines and each one had its own assembly language and instruction set. Nobody would ever go back to that era.
OOP was pitched as a one-size-fits-all solution to all problems, and as a checklist of items that would turn a cheap offshored programmer into a real software engineer thanks to design patterns and abstractions dictated by a "Software Architect". We all know that to be false, and bordering on snake oil, but it still had some good ideas. Having a class encapsulate complexity and defining interfaces is neat. It forces you to think in terms of abstractions and helps readability.
> This isn't to say a coder has to follow rigid rules constantly, or that the field needs a regulatory body, but that the practice of deviating from standardized best practices should be communicated in terms of risk rather than dismissed as just subjective.
As more and more years pass, I'm less and less against a regulatory body. It would help with getting rid of snake oil salesmen in the industry and limit offshoring to barely qualified coders. And it would simplify hiring too, by having a known certification that tells you someone at least meets a certain bar.
Software is to software engineering what alchemy is to chemistry. Software engineering hasn't been invented yet. You need a systematizing scientific revolution (in Kuhn's sense) before you can or should create a regulatory body to enforce it. Otherwise you're just enforcing your particular brand of alchemy.
Well said. In the 1990s, the aerospace software domain was once referred to as being in an era of “cave drawings”.
> OOP was pitched as a one-size-fits-all solution to all problems, and as a checklist of items that would turn a cheap offshored programmer into a real software engineer.
Not initially. Eventually, everything that reaches a certain minimal level of popularity in software development gets pitched by snake-oil salesmen to enterprise management as a solution to that problem, including things developed specifically to deal with the problem of other solutions being cargo-culted and repackaged that way, whether it's a programming paradigm, a development methodology, or a metamethodology.
>having a known certification that tells you someone at least meets a certain bar.
This was tried a few years back by creating a Professional Engineer licensure for software, but it went away due to lack of demand. It could make sense to artificially create demand by the government requiring it for, say, safety-critical software, but I have a feeling companies wouldn't want this of their own accord, because that license gives the employee a bit more bargaining power. It also creates a large risk for the SWEs, due to the lack of codified standards and the inherent difficulty of software testing. It's not like a mechanical engineer who can confidently claim a system is safe because it was built to ASME standards.
> It could make sense to artificially create a demand by the government requiring it for, say, safety critical software but I have a feeling companies wouldn't want this out of their own accord because that license gives the employee a bit more bargaining power.
For any software purchase above a certain amount, the government should be required to have someone with some kind of license sign off on the request. So many projects have doubled or tripled in price after it was discovered the initial spec didn't make any sense.
>the practice of deviating from standardized best practices should be communicated in terms of risk rather than dismissed as just subjective.
The problem I see with this is that programming could be described as a kind of general problem solving. Other engineering disciplines standardize methods that are far more specific, e.g. how to tighten screws.
It's hard to come up with specific rules for general problems though. Algorithms are just solution descriptions in a language the computer and your colleagues can understand.
When we look at specific domains, e.g. finance and accounting software, we see industry standards have already emerged, like dealing with fixed point numbers instead of floating point to make calculation errors predictable.
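To make that concrete, here's a minimal sketch of why finance code tends toward fixed-point or decimal arithmetic (Python's decimal module standing in for whatever fixed-point facility a given stack provides; the amounts are made up):

    from decimal import Decimal

    # Binary floats can't represent most decimal fractions exactly,
    # so tiny errors creep into money math:
    print(0.10 + 0.20)          # 0.30000000000000004
    print(0.10 + 0.20 == 0.30)  # False

    # Decimal (constructed from strings, so no float error is inherited)
    # keeps the arithmetic exact and the rounding behavior predictable:
    print(Decimal("0.10") + Decimal("0.20"))                     # 0.30
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True

    # The other common convention: store amounts as integer cents.
    total_cents = 10 + 20   # exact by construction
    print(total_cents)      # 30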
If we now start codifying general software engineering, I'm worried we will just codify subjective opinions about general problem solving. And that will stop any kind of improvement.
Instead we have to accept that our discipline is different from the others, and more of a design or craft discipline.
>kind of general problem solving
Could you elaborate on this distinction? At the superficial level, "general problem solving" is exactly how I describe engineering in general. The example of tightening screws is just a specific example of a fastening problem. In that context, codified standards are an industry consensus on how to solve a specific problem. Most people wrenching on their cars are not following ASME torque guidelines but somebody building a spacecraft should be. It helps define the distinction of a professional build for a specific system. Fastening is the "general problem"; fastening certain materials for certain components in certain environments is the specific problem that the standards uniquely address.
For software, there are quantifiable measures. As an example, some sorting algorithms are objectively faster than others. For systems where it matters in terms of risk, the choice probably shouldn't be left up to the subjective eye of an individual programmer, just like a spacecraft shouldn't rely on a technician's subjective opinion that a bolt is "meh, tight enough."
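To illustrate "quantifiable" (a toy benchmark with a hand-rolled insertion sort of my own, not a standards document):

    import random
    import timeit

    def insertion_sort(items):
        # O(n^2) on average: fine for tiny inputs, pathological at scale.
        a = list(items)
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    data = [random.random() for _ in range(10_000)]
    t_naive = timeit.timeit(lambda: insertion_sort(data), number=3)
    t_builtin = timeit.timeit(lambda: sorted(data), number=3)
    # The gap is measurable and repeatable -- not a matter of opinion.
    print(f"insertion sort: {t_naive:.3f}s  built-in sort: {t_builtin:.3f}s")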
>I'm worried we will just codify subjective opinions about general problem solving.
Ironically, this is the same attitude in many circles of traditional engineering. People who don't want to adhere to industry standards have their own subjective ideas about how to solve the problem. Standards aren't always right, but they create a starting point to 1) identify a risk and 2) find an acceptable way to mitigate it.
>Instead we have to accept that our discipline is different from the others
I strongly disagree with this and I've seen this sentiment used (along with "it's just software") to justify all kinds of bad design choices.
>For software, there are quantifiable measures. As an example, some sorting algorithms are objectively faster than others. For systems where it matters in terms of risk, the choice probably shouldn't be left up to the subjective eye of an individual programmer, just like a spacecraft shouldn't rely on a technician's subjective opinion that a bolt is "meh, tight enough."
Then you start having discussions about every algorithm being used on collections of 10 or 100 elements, where it doesn't really matter to the problem being solved. Instead, the language's built-in sort will probably do, and it increases readability, because you know what's meant.
Profiling and then replacing the algorithms that matter is much more efficient than scrutinizing each usage.
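Something like this, say (the workload functions here are hypothetical, just to show the shape of the argument): profile first, and let the data tell you which sort is worth replacing.

    import cProfile
    import pstats
    import random

    def many_tiny_sorts():
        # Hundreds of sorts of 10-element lists: looks suspicious in
        # review, costs almost nothing at runtime.
        for _ in range(500):
            sorted(random.sample(range(100), 10))

    def one_big_sort():
        # A single large sort: this is what the profile actually flags.
        sorted(random.random() for _ in range(1_000_000))

    def workload():
        many_tiny_sorts()
        one_big_sort()

    cProfile.run("workload()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)

Only the calls the profiler flags are worth arguing about.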
Which again brings us back to the general-vs-specific issue. In general this won't matter, but if you're in a real-time embedded system, you will need algorithms that don't allocate and that have known worst-case execution times. But here again, at least for the systems that matter, we have specific rules.
> At some point CS as a profession has to find the right balance of art and science.
That seems like such a hard problem. Why not tackle a simpler one?
I didn’t downvote but I’ll weigh in on why I disagree.
The glib answer is “because it’s worth it.” As software interfaces with more and more of our lives, managing the risks becomes increasingly important.
Imagine if I transported you back 150 years to when the industrial revolution and steam power were just starting to take hold. At that time there were no consensus standards about what makes a mechanical system “good”; it was much more art than science. The mishap rates and reliability of the era reflected this. However, as our knowledge grew, we not only learned about what latent risks were posed by, say, a boiler in your home, but we also began to define what is an acceptable design risk. There’s still art involved, but the science we learned (and continue to learn) provides the guardrails. The Wild West of design practice is no longer acceptable due to the risk it incurs.
I imagine that's part of why different programming languages exist -- i.e., you have slightly fewer footguns with Java than with C++.
The problem is, the nature of writing software intrinsically requires a balance of art and science no matter what language it is. That is because solving business problems is a blend of art and science.
It's a noble aim to try to avoid solving unnecessarily hard problems, but when it comes to the customer, a certain amount of the hardness is incompressible. So you can't avoid it.