Comment by ryukoposting
16 hours ago
> That’s taking the structural discipline from the skyscraper and applying it to a space where I had total freedom.
Yeah, nah. When I take my learnings home with me, it fails every time.
Usually, the scale of work necessary to maintain an enterprise-grade system rapidly outgrows the time I can reasonably allocate to it. In other cases, I lose interest because it's boring corporate crap.
I don't know how all of you "homelab" people put up with it. I have enough Linux boxes at work that demand too much care and feeding.
The author has a good point, but it really isn't a two-way street. The hobby stuff can feed into your career, but letting it go the other way is usually either counterproductive or bad for your mental health.
Don't tinker in your shed because you think it'll advance your career. You'll be disappointed. Sorry for the spoiler.
Tinker in your shed because it makes you happy, and brings joy and meaning to your life. You'll be more productive and, in my experience, you'll actually be more likely to learn something useful for work.
The trick is to not overengineer your hobby if you're only doing it to prove a point.
i.e. Yes, you could run a full-on corporate CA, issue SSL certificates for your domains, manually rig up WireGuard, and run your own internal corporate VPN... or you could accept that your grand total of 1 concurrent user on an intranet is probably better served by setting up Tailscale and a wildcard LE certificate so that the browser shuts up. (Which is still not great, but the argument over HTTPS on intranets is not for right now.)
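For the record, the "simple path" really is about two commands. This is a sketch only: the domain is made up, the DNS provider (Cloudflare) and credentials path are assumptions, and certbot needs the matching DNS plugin installed for the wildcard DNS-01 challenge to work:

```shell
# 1. Join the box to your tailnet -- one command, no hand-rolled
#    wireguard configs (no-op here if tailscale isn't installed):
if command -v tailscale >/dev/null 2>&1; then
  sudo tailscale up
fi

# 2. One wildcard cert covers every intranet vhost. certbot's DNS
#    plugins are provider-specific; cloudflare is just an example,
#    and the credentials path below is illustrative:
if command -v certbot >/dev/null 2>&1; then
  sudo certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
    -d '*.home.example.com'
fi
```

Compare that to keeping a CA's root key safe and distributing it to every device you own, and the trade is obvious for one user.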
Same with other deployment tools like Docker: yes, there are a ton of fancy ways to do persistent storage for containerized setups, but get real: you're throwing the source folder in /opt/ and you have exactly one drive on that server. Save yourself the pain and just bind mount it to somewhere on your filesystem. Being able to back the folder up with plain cp/rsync/rclone/scp is a lot easier than fiddling with Docker's ambiguous mess of overlay2 subfolders.
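To make that concrete, here's a minimal sketch of the bind-mount approach. All paths and names are made up for illustration, and the docker step is guarded so the sketch is a no-op on machines without docker:

```shell
# Hypothetical app layout -- the data lives in a plain folder you chose:
mkdir -p /tmp/demo/opt/myapp/data
echo "hello" > /tmp/demo/opt/myapp/data/settings.conf

# Bind mount that folder into the container instead of using a named
# volume; the container sees it, and so do your usual file tools:
if command -v docker >/dev/null 2>&1; then
  docker run -d --rm -v /tmp/demo/opt/myapp/data:/data alpine sleep 5 \
    2>/dev/null || true
fi

# Backup is now an ordinary file copy -- no spelunking through docker's
# internal /var/lib/docker directories to find where the data landed:
mkdir -p /tmp/demo/backup
cp -a /tmp/demo/opt/myapp/data /tmp/demo/backup/
```

Swap the `cp -a` for rsync/rclone/scp as needed; the point is the data is just files at a path you picked.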
Every overengineered decision of today is tomorrow's "goddammit, I need to ssh into the server again for an unexpected edge case".
I have a professional 'homelab' and a personal 'homelab'. You're 100% right, they can be a time sink. The important bit is to make sure the time spent is setup time, not 'maintenance' time.
The trick is twofold: if it isn't 'declare and deploy' don't run it. If it isn't in your backup/restore pipeline don't run it.
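As a sketch of what that twofold rule looks like in practice (every path and service name below is made up; the deploy step assumes a compose file kept under version control):

```shell
# One versioned directory declares everything the box runs;
# one list declares everything that must be backed up:
DEPLOY_DIR=/tmp/lab/deploy
BACKUP_LIST=/tmp/lab/backup-paths.txt
BACKUP_DEST=/tmp/lab/backups

mkdir -p "$DEPLOY_DIR" "$BACKUP_DEST" /tmp/lab/data/grafana
echo "/tmp/lab/data/grafana" > "$BACKUP_LIST"
echo "db: ok" > /tmp/lab/data/grafana/grafana.db

# Rule 1, "declare and deploy": one re-runnable command, no hand
# configuration (no-op here if docker compose isn't available):
if command -v docker >/dev/null 2>&1; then
  (cd "$DEPLOY_DIR" && docker compose up -d 2>/dev/null) || true
fi

# Rule 2, "in the backup pipeline or it doesn't run": every path a
# service writes to is on the list, so restore is just "redeploy,
# then copy these back":
while read -r path; do
  cp -a "$path" "$BACKUP_DEST/"
done < "$BACKUP_LIST"
```

If a service can't be expressed this way, that's the signal it's going to cost you maintenance time later.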
pfSense and Home Assistant are huge pains in the ass. Everything else is easy breezy.
Proxmox/PBS/TrueNAS/Talos/LINSTOR/DRBD are all amazing.
I'm thinking about ditching pfSense for Tailscale/Cloudflare Tunnels, but it's not worth the time atm. I don't have a viable alternative for Home Assistant.
Out of curiosity, what makes Home Assistant a pain?
I'm actually grateful (today) for the lightning strike that nuked my old pile of servers at home. It freed me from the whole thing in one step. I was completely disabused of any notion that I had control over anything at that point.
You might think you're protected with UPSes and whatnot, but nothing will stop the electromagnetic effects if it hits within a few feet. Every piece of copper is going to get lit up. No solution is 100% guaranteed here, but EC2 plus snapshots is a hell of a lot more likely to survive a single event like that.