Comment by epolanski
3 days ago
> and that isn't something I ever encountered in the wild (in any formal sense)
Because in the software engineering world there is very little engineering involved.
That being said, I also think that the industry is unwilling to accept the slowness of a proper engineering process for various reasons, including the non-criticality of most software and the possibility of amending bugs and errors on the fly.
Other engineering fields enjoy no such luxuries: the bridge either holds the train or it doesn't, you either nailed the manufacturing plant or there's little room for fixing, the plane's engine either works or it doesn't.
Different stakes and patching opportunities lead to different practices.
Summoning Hillel Wayne....
https://www.hillelwayne.com/post/are-we-really-engineers/
He's spot on.
There is plenty of it in large-scale enterprise projects, but then that whole area is looked down on by "real developers".
Also, in many countries, for one to call themselves a Software Engineer, they actually have to hold a proper degree from a certified university or professional college, validated by the country's engineering order.
Because naturally a 5-year (or 3, depending on the country) degree in Software Engineering is the same as a six-week bootcamp.
I never finished my degree, but I believe I'm a very good developer (my employers agree). In my time most good programmers were self-taught.
I don't mind (hypothetically) not being allowed to call myself "engineer", but I do mind the false dichotomy of "5-year course" vs "six-week bootcamp". In the IT world it's entirely possible to learn everything yourself and learn it better than a one-size-fits-all course ever could.
CS is sort of unique in that regard. I value my university degree, but when I think about the classes that helped me the most, only one of them was a degree requirement. Not because the degree was useless, but because the information was accessible enough that I already knew most of the required content for an undergraduate degree when I got there.
I took lots of electives outside my major, and I know that I could have easily loved chemistry, mathematics, mechanical engineering, electrical engineering, or any number of fields. But when you're 12 years old with a free period in the school computer lab, you can't download a chemistry set or an oscilloscope or parts for building your next design iteration. You can download a C compiler and a PDF of K&R's "The C Programming Language," though.
CS just had a huge head-start in capturing my interest compared to every other subject because the barrier to entry is so low.
> In the IT world it's entirely possible to learn everything yourself and learn it better than a one-size-fits-all course ever could.
Strong disagree. However, this is closer to truth:
In the IT world, if you have learned everything yourself, it's entirely possible to think you have learned it better than a one-size-fits-all course ever could.
There is lots of theoretical knowledge that comes with the degree that, while mostly useless in day-to-day work, is priceless in those rare moments when it comes in handy. A self-taught developer won't even know they are missing this knowledge. An example of this is knowing how compilers work (which is surprisingly useful): without the theoretical background, one might attempt to parse HTML with regex and expect correct results.
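To make that concrete, here is a small, hypothetical Python sketch (my illustration, not part of the original comment). HTML nests arbitrarily deep, which a regular expression cannot track, while an event-driven parser can count nesting depth:

    import re
    from html.parser import HTMLParser

    html = "<div>outer <div>inner</div> tail</div>"

    # Naive regex: the non-greedy match stops at the FIRST closing </div>,
    # so the outer element's content gets truncated.
    print(re.search(r"<div>(.*?)</div>", html).group(1))
    # -> 'outer <div>inner'  (wrong)

    # A real parser can track nesting depth, so nested divs are handled.
    class DivText(HTMLParser):
        def __init__(self):
            super().__init__()
            self.depth = 0      # how many <div> elements are currently open
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag == "div":
                self.depth += 1

        def handle_endtag(self, tag):
            if tag == "div":
                self.depth -= 1

        def handle_data(self, data):
            if self.depth > 0:
                self.chunks.append(data)

    parser = DivText()
    parser.feed(html)
    print("".join(parser.chunks))
    # -> 'outer inner tail'  (text at every nesting level)

No amount of regex tweaking fixes this in general, because HTML is not a regular language; that is exactly the kind of thing the theory makes obvious.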
Not that all degrees are created equal. But those X years do give you an edge over self-taught developers. You still need to work on other skills too, of course.
In Italy a degree isn't enough, you need to take an exam and be certified.
Like in Portugal; and yes, in many countries, having "Software Engineer" on legally binding contracts usually implies having passed the final examination.
However, just by having gone through the degree, one picks up a whole set of skills that one would not have gotten otherwise.
Assuming they actually did it the right way, and didn't just get through it with minimal effort.
It still is engineering; you are only mistaken about what the design phase is.
Writing code is the design phase.
You don’t need a design phase to do design.
I’ll drop a link to the relevant video later.
I see there has been a “spirited discussion” on this. We can get fairly emotionally invested in our approaches.
In my experience (and I have quite a bit of it, in some fairly significant contexts), “It Depends” is really where it’s at. I’ve learned to take a “heuristic” approach to software development.
I think of what I do as “engineering,” but not because of particular practices or educational credentials. Rather, it has to do with the Discipline and Structure of my approach, and a laser focus on the end result.
I have learned that things don’t have to be “set in stone,” but can be flexed and reshaped, to fit a particular context and development goal, and that goals can shift, as the project progresses.
When I have worked in large, multidisciplinary teams (like supporting hardware platforms), the project often looked a lot more “waterfall” than when I have worked in very small teams (or alone), on pure software products. I’ve also seen small projects killed by overstructure, and large projects killed by too much flexibility. I’ve learned to be very skeptical of “hard and fast” rules that are applied everywhere.
Nowadays, I tend to work alone, or on small teams, achieving modest goals. My work is very flexible, and I often start coding early, with an extremely vague upfront design. Having something on the breadboard can make all the difference.
I’ve learned that everything that I write down, “ossifies” the process (which isn’t always a bad thing), so I avoid writing stuff down, if possible. It still needs to be tracked, though, so the structure of my code becomes the record.
Communication overhead is a big deal. Everything I have to tell someone else, or that they need to tell me, adds rigidity and overhead. In many cases, it can’t be avoided, but we can figure out ways to reduce the burden of this crucial component.
It’s complicated, but then, if it were easy, everyone would be doing it.
Googler, but opinions are my own.
I disagree. The design phase of a substantial change should be done beforehand with the help of a design doc. That forces you to put in writing (and in a way that is understandable by others) what you are envisioning. This exercise is really helpful in forcing you to think about alternatives, pitfalls, pros & cons, ... . This way, once stakeholders (your TL, other team members) have agreed, the reviews related to that change become only code-related (style, use this standard library function that does it, ...), but the core idea is there.
This should only be a first phase of the design and should be high level and not a commitment. Then you quickly move on to iterate on this by writing working code, this is also part of the design.
Having an initial design approved and set in stone, and then a purely implementation phase is very waterfall and very rarely works well. Even just "pitfalls and pros & cons" are hard to get right because what you thought was needed or would be a problem may well turn out differently when you get hands-on and have actual data in the form of working code.
This is the talk I mentioned in my original comment:
https://www.youtube.com/watch?v=RhdlBHHimeM
I also read this series of blog posts recently where the author, Hillel Wayne, talked to several "traditional" engineers who had made the switch to software. He came to a similar conclusion, and while I was previously on the fence about how much of what software developers do could be considered engineering, it convinced me that software engineer is a valid title and that what we do is engineering. First post here: https://www.hillelwayne.com/post/are-we-really-engineers/
Personally I don't need to talk with "traditional" engineers to have an opinion there, as I am a mechanical engineer who currently deals mostly with software, but still in the context of "traditional" engineering (models and simulation, controls design).
Making software can definitely be engineering; most of the time it is not, not because of the nature of software, but because of the characteristics of the industry and culture that surround it. And the argument in this article is not convincing (15 not-very-random engineers is not much to support the argument from "family resemblance").
I was an undergraduate (computer) engineering student, but like many of my friends at that time (dot-com boom) I did not graduate, since it was too tempting to get a job and get well paid instead.
However, many, probably half, of those I work with, and most of those I have worked with overall over the last 25+ years (since I dropped out), have an engineering degree. Especially the younger ones: this century there has been more focus on getting a degree, and fewer seem to drop out early to get a job like many of us did in my day.
So when American employers insist on giving me titles like "software engineer" I cringe. It's embarrassing really, since I am surrounded by so many that have a real engineering degree, and I don't. It's like if I dropped out of medical school and then people started calling me "doctor" even if I wasn't one, legally. It would be amazing if we could find a better word so that non-engineers like me are not confused with the legally real engineers.
This is the talk on real software engineering:
https://www.youtube.com/watch?v=RhdlBHHimeM
> Writing code is the design phase.
Rich Hickey agrees it's a part of it, yes. https://www.youtube.com/watch?v=c5QF2HjHLSE
> Writing code is the design phase.
No, it really isn't. I don't know which amateur operation you've been involved with, but that is really not how things work in the real world.
In companies that are not entirely dysfunctional, each significant change to the system involves a design phase, which often includes reviews from stakeholders and involved parties, such as security reviews and data protection reviews. These tend to happen before any code is even written. This doesn't rule out spikes, but their role is to verify and validate requirements and approaches, and to allow new requirements to emerge and provide feedback to the actual design process.
The only place cowboy coding belongs is in small refactorings, features, and code fixes.
It is, as often, a trade-off.
You need a high level design up-front but it should not be set in stone. Writing code and iterating is how you learn and get to a good, working design.
Heavy design specs up-front are a waste of time. Hence the agile manifesto's "Working software over comprehensive documentation"; unfortunately, the key qualifier "comprehensive" is often lost along the way...
On the whole I agree that writing code is the design phase. Software dev. is design and test.
Operation that uses software developers not as code monkeys but as actual business problem solvers who also have business knowledge.
Operation that delivers features instead of burning budget on discussions.
Operation that uses test/acceptance environments where you deploy and validate the design so people actually see the outcome.
Obviously you have to write down the requirements - but writing down requirements is not the design phase.
Design starts with an idea, is written down in a couple of sentences or paragraphs, then turned into code, and while it is still on test/acceptance it is still the design phase. Once the feature goes to production in a release, the "design phase" is done; implementation and changes are part of design, as is finding out issues and limitations.
This response is rude / insulting and doesn't actually add much because you've just asserted a bunch of fallacious opinions without any meat.
My opinion is that reality is more nuanced. Both "the code is self-documenting" and "the code is the design" are reasonable takes in reasonable situations.
I'll give an example.
I work in a bureaucratic organization where there's a requirement to share data and a design doc that goes through a series of not-really-technical approvals. The entire point of the process is to be consumable to people who don't really know what an API is. It's an entirely reasonable point of view that we should just create the swagger doc and publish that for approval.
I worked in another organization where everything was an RFC. You make a proposal, all the tech leads don't really understand the problem space, and you have no experience doing the thing, so you get the nod to go ahead. You now have a standard that struggles against reality, and is difficult to change because it has broad acceptance.
I'm not saying we should live in a world with zero non-code artifacts, but as someone who hops from org to org, most of the artifacts aren't useful; a CI/CD pipeline that builds, tests, and deploys, plus looking at the output and looking at the code, gives me way more insight than most non-code processes.
> Because in the software engineering world there is very little engineering involved.
I can count on one hand the number of times I've been given the time to do a planning period for something less than a "major" feature in the past few years. Oddly, the only time I was able to push good QA, testing, and development practices was at an engineering firm.