If you're going to run these containers in production [on more than a single host], throw out the volumes and Docker Compose. Mock up your dev SDLC to work like production (e.g. you can't use Docker Compose to start Fargate tasks).
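To make that concrete, here's a minimal sketch of what "like production" means for a Node app (the base image tag and file names are my own placeholders, not the article's): the source gets copied into the image rather than bind-mounted via a compose `volumes:` entry, so the image you run locally is the same artifact a Fargate task definition would pull.

    # One Dockerfile for dev and production; no compose-only bind mounts.
    # node:12 and server.js are illustrative assumptions, not from the article.
    FROM node:12
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci
    COPY . .
    CMD ["node", "server.js"]

Anything that only exists in a dev docker-compose.yml (bind mounts, ad-hoc service links) simply won't exist on Fargate, which is the point.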
In fact, I'm going to make a very heretical suggestion and say: don't even start writing app code until you know exactly how your whole SDLC, deployment workflow, architecture, etc. will work in production. Figure out all that crap right at the start. You'll have a lot of extra considerations you didn't think of before, like container and app security scanning, artifact repository, source of truth for deployment versions, quality gates, different pipelines for dev and prod, orchestration system, deployment strategy, release process, secrets management, backup, access control, network requirements, service accounts, monitoring, etc.
The reason to map all that out up front is to "shift left". If you do these things one at a time, you lose more time later as you slowly implement each piece, refactoring as you go. Whereas if you know everything you're going to have to do, you have much better estimates of work. It's like doing sprint grooming but much farther ahead. Figure out potential problems sooner and it saves your butt down the road. (You can still change everything as you go, but your estimates will be wayyyy closer to reality, and you'll need less rework)
A weird comparison would be trying to build wooden furniture without planning out how you were gonna build it. You can get it done, but you have no idea if it'll take a weekend or two months. Plan it out and you can get more done in one shot, and the quality even improves. This is also the principle behind mise en place.
I don't think you're worrying about the right things here if you're about to start writing app code. Infrastructure can be changed easily - poorly architected code cannot.
What I'm talking about isn't infrastructure, it's the entire system architecture and workflow. Code architecture is a part of that. If you design your code architecture, and then look at system architecture, your code architecture may have to change. I'm suggesting to do them at the same time.
Say you did your code architecture, and you've been writing code for 3 months. The security architect comes by and takes a look at your work, and announces that your design is inherently flawed; you need to fix some token-passing thing that's tied deeply into your app to support some system they have to audit company apps. You end up doing rework for a sprint or 2 to fix it. This in particular may not apply to you, but there are hundreds of examples like this.
This.
And even if you are planning to write desktop-only software or a mobile app, think in advance about how you want to package and release it, sign the code, provide help, customise branding, etc.
Agile is an anti-pattern for the SDLC, because the "improve as you go" lie doesn't apply to release planning.
I will make a heretical suggestion on the other side and say that unless you're pretty certain up front that your app will succeed, you need to get it in front of users ASAP, and if to you that means cutting corners on the SDLC and infra, so be it. If the app falls flat in the market, you'll never get a chance to amortize all that work.
Why do you run the final production container with a node (slim) image and not just nginx? It would be another six times smaller.
They are running JavaScript as server code, not just serving JS files for a client.
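In other words, the container's main process is a Node server, so the final stage has to be a Node runtime rather than nginx. A hedged sketch of that distinction (image tags, the "build" script, and server.js are my assumptions):

    # Build stage: install dependencies and bundle any client assets
    # (image tags, the "build" script, and server.js are assumptions).
    FROM node:12 AS build
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci
    COPY . .
    RUN npm run build

    # Final stage: a smaller Node image, because the container's main
    # process is a Node server, which nginx cannot run for you.
    FROM node:12-slim
    WORKDIR /app
    COPY --from=build /app ./
    CMD ["node", "server.js"]

If the build produced only static files, an nginx final stage would indeed be much smaller; it just doesn't apply when the app is server-side JavaScript.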
In case anyone needs ARM containers for node, I build my own LTS containers on Travis:
https://github.com/insightfulsystems/alpine-node
...and use those as the base for most of my stuff in order to have a bit more control over what goes in the images:
https://github.com/insightfulsystems/node-red
The fact that you need to learn so many lessons while using docker shows how complex it is.
Those lessons are highly subjective. You can use Docker in production with 5% of this info. For example, I'm personally not a fan of using Docker locally for development. I sometimes use it to boot local dependencies, but never for the project I'm currently working on.
What is the point of having docker at all, then, if development and production deployments are so different...?
Not trying to be flippant here - I am genuinely still trying to get my head around Docker's popularity; it's just so awkward in so many cases...
Is there a benefit to running `npm install` in Docker? I would likely have already done that in the checkout and test part of my workflow, and can just copy everything over from that?
> I would likely have already done that in the checkout and test part of my workflow, and can just copy everything over from that?
No you cannot, at least not for anything that ships Node.js extensions that need to be compiled (e.g. by node-gyp). For example, if you're working on OS X and then run that build inside the Docker container, you may hit errors. Likewise, if you compile on Ubuntu 18.04 and then run npm in a Docker container based on Ubuntu 16.04, you may hit library mismatches.
That is done to have a complete and repeatable build. If it is run on a different machine, it will still work the same.
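A minimal sketch of what that buys you (the base image and file names are assumptions on my part): the install happens inside the image, so any node-gyp compilation targets the image's own OS and libc rather than whatever the developer's laptop or CI host runs.

    # Installing dependencies inside the image means node-gyp compiles
    # native addons against this base image's libc and toolchain
    # (node:12 is an assumption; the full image ships the build tools).
    FROM node:12
    WORKDIR /app
    COPY package.json package-lock.json ./
    RUN npm ci
    # ...then COPY the source and set the CMD as usual.

Copying a host-built node_modules into the image would skip that step and reintroduce the OS X / Ubuntu mismatches described above.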
Thanks, @JohnHammersley! Your 2016 writeup of the same name was well-received; given how fast things change in this space, I'd bet this "updated for 2019" version is worth bookmarking.
Thanks Chris, but I should point out that I'm not the author of the blog post (that's my good friend @jdleesmiller) :)
Related from 2016: https://news.ycombinator.com/item?id=11545975
Docker spam, please stop submitting this crap. Docker is dead. Kubernetes is dying. This is just spam.
Stop it. It's off topic.
I'm no fan of Kubernetes in the general case, but by no metric is Docker "dead". And the article isn't spam; it's insightful, though not an approach I'd use.
It seems like your account exists just to scream about Kubernetes and Docker. If it's so bad, why waste the time?
I think the hype might have diminished a little bit, but I'm curious what would make you think any major containerization technology is dying? Both Docker & k8s certainly continue to show up in enough job postings...
(Also, I will outright object to the claim that this is off topic; it's about technology and someone found it interesting, so it belongs.)
What about Docker's performance overhead? What if performance is more important than security?
Who says Kubernetes is dying?
This user. You'll see their short comment history, with only one exception, is strictly on Kubernetes posts.