Comment by evidencetamper
1 day ago
This is a mixed bag of advice. While it seems wise on the surface, and certainly works as an initial model, the reality is a bit more complex than aphorisms.
For example, what you know might not provide the cost-benefit ratio your client would want. Or the performance. What if you only know Cloud Spanner, but now there is a need for a small relational table? These maxims have obvious limitations.
I do agree that the client doesn't care about the tech stack. Or that seeking a golden standard is a McGuffin. But it goes much deeper than that. Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
A good engineer balances tradeoffs and solves problems in a satisfying way that meets all requirements. That can be MySQL and Node. But it can also be C++ and Oracle Coherence. Shying away from a tool just because of its reputation is just as silly as using it because of the hype.
> Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.
Your customer does care about how quickly you can iterate new features over time, and about product stability. A stack with a complex mix of technologies is likely to be harder to maintain over the longer term.
That's also an aphorism that may or may not correspond to reality.
Not only are there companies with highly capable teams that can move fast using a complex mix of technologies, there are also customers who have very little interest in new features.
This is the point of my comment: these maxims are not universal truths, and taking them as such is a mistake. They are general models of good ideas, but they are just starter models.
A company needs to attend to its own needs and solve its own problems. The way this goes might be surprisingly different from common sense.
Sure, universal truths are rare - though I think there are many more people using such an argument to justify an overly complex stack than there are cases where it truly is the best solution long term.
Remember that even if you have an unchanging product, change can be forced on you by regulatory compliance, security bugs, hardware and OS changes, etc.
I think the point of the original post is that the most important part of the context is the people (the developers) and what they know how to use well, and I'd agree.
I'd just say that one thing I've learnt is that even if the developer who has to add some feature or fix some bug in the future is the developer who originally wrote it, life is so much easier if the original is as simple as possible - but hey, maybe that's just me.
> these maxims are not universal truths, and taking them as such is a mistake.
Amen.
How big is your team?
One person writing a stack in 6 languages is different from a team of 100 using 6 languages.
The problem emerges if you have some eccentric person who likes using a niche language no one else on the team knows. Three months into development they decide they hate software engineering and move to a farm in North Carolina.
Who else is going to be able to pick up their tasks? Are you going to be able to quickly onboard someone else, or are you going to have to hire someone new with a specialty in this specific language?
This is a part of why NodeJS quickly ate the world. A lot of web studios had a bunch of front end programmers who were already really good with JavaScript. While NodeJS and frontend JS aren't 100% the same, it's not hard to learn both.
Try to get a front end dev to learn Spring in a week...
Excellent comment. You raised two important aspects of the analysis that the article didn't bother to consider:
- how to best leverage the team you currently have
- what is the most likely shape your team will have in the future
Jane Street has enough resources and experts to be able to train developers on OCaml; Nubank and Clojure also come to mind. If one developer leaves, the impact is not devastating. Hiring is not straightforward, but they are able to hire engineers willing to learn and train them.
This is not true for a lot of places that have tighter teams and budgets, whose products are less specialized, and so on.
But this is where the article fails and your comment succeeds: actually setting out parameters to establish a strategy.
> This is a part of why NodeJS quickly ate the world
And the other part is that you can share, say, data validation code between client and server easily - or move logic to either side of the network without having to rewrite it.
i.e. even if you are an expert in both Java and JavaScript, there are still benefits to running the same language on both ends.
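For instance, a minimal sketch of that idea (the schema and rules here are hypothetical, just to illustrate the shape of it):

```typescript
// signup.ts - one module imported by both the browser bundle and the Node server
// (hypothetical field names and rules, purely for illustration)

export interface SignupForm {
  email: string;
  password: string;
}

// Returns a list of human-readable validation errors (empty array = valid).
export function validateSignup(form: SignupForm): string[] {
  const errors: string[] = [];
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(form.email)) {
    errors.push("email: not a valid address");
  }
  if (form.password.length < 12) {
    errors.push("password: must be at least 12 characters");
  }
  return errors;
}
```

The browser calls validateSignup() before submitting for instant feedback, and the server calls the exact same function on the incoming request body, so the two sets of rules can never drift apart.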
Very much this. The concerns of running a six-person team are quite a bit different from the concerns of directing hundreds to thousands of developers across multiple projects. No matter how good the team is and how well they are paid and treated, there will be churn. Hiring and supporting folks until they are productive is very expensive, and it gets more expensive the more complicated your stacks are and the more of them you have to maintain.
If you want efficient portability of developers between teams, you've got to consolidate and simplify your stacks as much as possible. Yeah, your superstar devs already know most of the languages and can pick up one more in stride, no problem. But that's not your average developer. The average dev in very large organizations has worked in one language in one capacity for the last 5-15 years and knows almost nothing else. They aren't reading HN, or really anything technology related that isn't directly assigned via certification requirements. It's just a job. They aren't curious about the craft. How do you get those folks as productive as possible within your environment while still building institutional resiliency and, when possible, improving things?
That's why the transition from a small startup with a couple of pizza teams to a large organization with hundreds of developers is so difficult. Startups are able to actually hire full teams of amazing developers who are curious about the craft. The CTO has likely personally interviewed every single developer. At some point that stops being feasible and HR processes get involved. So inevitably the hiring bar will drop, and you'll start getting in more developers who are better at talking their way through an interview process than at jumping between tech stacks fluidly. At some point, you have to transition to a "serious business" with processes and standards and paperwork and all that junk that startup devs hate. Maybe you can afford to have a skunkworks team that can play like a startup, but that's just not feasible for the rest of a Very Large Organization. They have to be boring and predictable.
> Your customer does care about how quickly you can iterate new features over time
How true this is depends on your particular target market. There is a very large population of customers who are displeased by frequent iteration and feature additions/changes.
The author didn't say to listen to the opinions of others, hype or not. The author said "set aside time to explore new technologies that catch your interest ... valuable for your product and your users. Finding the right balance is key to creating something truly impactful.".
It means we should make our own independent, educated judgement based on the needs of the product/project we are working on.
> Finding the right balance is key to creating something truly impactful
This doesn't mean anything at all. These platitudes are pure vapor; they seem just solid enough to make sense at first glance, but once you try to grasp them, there is nothing there. What is impactful? What is untruly impactful, as opposed to truly impactful? Why is that important? Why is the right balance key to it? Balance of what? How do you measure whether the balance is right?
My expectation for engineering (including its management) is that we deal in requirements, execution, delivery, not vibes. We need measurable outcomes, not vapor clouds.
> ...the reality is a bit more complex than aphorisms.
This is the entire tech blog, social media influencer, devx schtick though. Nuance doesn't sell. Saying "It depends" doesn't get clicks.
> Shying away from a tool just because of its reputation is just as silly as using it because of the hype.
Trying to explain this to a team is one of the most frustrating things ever. Most of the time people pick / reject tools because of "feels".
On a related note, I never understood the hype around GraphQL for example.
I heavily dislike GraphQL for all of the reasons. But I'll say that for a lot of developers, if you are already setting up an API gateway, you might as well batch the calls, and simplify the frontend code.
I don't buy it :) but I can see the reasoning.
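For what it's worth, that reasoning usually looks something like the sketch below (the /graphql endpoint, types and field names are made up for illustration): one request replacing three REST round trips.

```typescript
// Hypothetical dashboard that needs a user, their recent orders, and unread
// notifications. With plain REST that is three round trips; behind a GraphQL
// gateway the client shapes it into a single request.

const query = `
  query Dashboard($id: ID!) {
    user(id: $id) { name email }
    orders(userId: $id, last: 5) { id total }
    notifications(userId: $id, unreadOnly: true) { id message }
  }
`;

async function loadDashboard(id: string) {
  const res = await fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  return res.json(); // one response body carrying all three result sets
}
```

Whether that is worth the extra layer of schema, resolvers and caching complexity is exactly the part I don't buy.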
I'd say nowadays C++ is rarely the best answer, especially for the users.
C++ is often the best answer for users, but that is about how bad the other options are, not about C++ being good. Options like Rust don't have the mature frameworks that C++ does (rust-qt is often used as a hack instead of a pure Rust framework). There is a big difference between modern C++ and the old C++98 as well, and the more you force your code to be modern C++, the less the footguns in C++ will hit you. The C++ committee is also driving forward in eliminating the things people don't like about C++.
Users don't care about your tech stack. They care about things like battery life, how fast your program runs, and how fast it starts - places where C++ does really well (C, Rust, etc. also do very well). Remember, this is about real-world benchmarks: you can find micro-benchmarks where Python is just as fast as well-written C, but if you write a large application in Python it will be 30-60 times slower than the same thing written in C++.
Note, however, that users only care about security after it is too late. C++ can be much better than C, but since it is really easy to write C-style code in C++, you need a lot more care than you would want.
If Rust or Ada does have mature enough frameworks for your application, then I wouldn't write C++, but all too often the long history of C++ means it is the best choice. In some applications managed languages like Java work well, but in others the limits of the runtime (startup time, worse battery life) make them a bad choice. Many things are scripts you won't run very often, and so Python is just fine despite how slow it is. Make the right choice, but don't call C++ a bad choice just because it is bad for you.
For real-time audio synthesis or video game engines, C++ is the industry standard.
It's true, and of course, all models are wrong, especially as you go into deeper detail, so I can't really argue an edge case here. Indeed, C++ is rarely the best answer. But we all know of trading systems and gaming engines that rely heavily on C++ (for now, may Rust keep growing).
...unless you do HFT...