Usually because of SSL termination. It's generally "easier" to just let DO manage getting the cert installed. Of course, there are tradeoffs.
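For reference, the SSL-termination bit is just a forwarding rule on the LB: HTTPS in, plain HTTP to the back-ends, with DO holding (and renewing) the cert. A rough sketch of the shape, going from memory of the DO v2 API, so the field names and the cert ID are illustrative rather than gospel:

```python
# Sketch of an LB forwarding rule doing SSL termination (DO API v2 shape, from memory).
# HTTPS terminates at the LB with a DO-managed cert; plain HTTP goes on to the droplets.
forwarding_rules = [
    {
        "entry_protocol": "https",
        "entry_port": 443,
        "target_protocol": "http",         # traffic to the back-ends stays unencrypted inside the VPC
        "target_port": 80,
        "certificate_id": "your-cert-id",  # placeholder ID of a cert DO manages/renews for you
        "tls_passthrough": False,
    },
    # Optionally accept plain HTTP at the LB too instead of on each web server.
    {"entry_protocol": "http", "entry_port": 80, "target_protocol": "http", "target_port": 80},
]
```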
I use the LBs for high availability rather than because I need load balancing. The LB + 2 web back-ends + Managed DB means a project is resilient to a single server failing, for relatively low devops effort and around $75/mo.
Are both servers deployed from the exact same repo/scripts? Or are they meaningfully different, and/or balanced across multiple data centers?
Did your high availability system survive this outage?
I have a couple of instances of this same pattern for various things that have been running for 5+ years, and none of them has suffered downtime caused by the infrastructure. I use Ansible scripts for the web servers, and the DO API or dashboard to provision the Load Balancer and Database. You can get it all hooked up in half an hour, and it really doesn't take any maintenance beyond setting up good practices for rotating the web servers out for updates.
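For anyone curious what "hooked up in half an hour" amounts to, here's a rough sketch of the provisioning side against the DO v2 API in Python. The region, droplet sizes, the `web` tag, the `/healthz` path, and the cert ID are all illustrative, and the field names are from memory, so check the API docs before trusting it:

```python
"""Rough sketch: 2 droplets + LB + managed Postgres via the DigitalOcean v2 API.
Field names are from memory of the API docs; IDs, sizes, and tags are illustrative."""
import os
import requests

API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": f"Bearer {os.environ['DO_TOKEN']}"}

def create(path, payload):
    r = requests.post(f"{API}/{path}", json=payload, headers=HEADERS)
    r.raise_for_status()
    return r.json()

# Two identical web back-ends, built from the same image and configured by the same Ansible role.
droplets = [
    create("droplets", {
        "name": f"web-{i}",
        "region": "nyc3",
        "size": "s-1vcpu-2gb",        # illustrative size
        "image": "ubuntu-24-04-x64",
        "tags": ["web"],              # the LB targets this tag
    })["droplet"]
    for i in (1, 2)
]

# Regional LB in front of them, health-checking each droplet.
lb = create("load_balancers", {
    "name": "web-lb",
    "region": "nyc3",
    "tag": "web",                     # or pass explicit droplet_ids
    "forwarding_rules": [
        {"entry_protocol": "https", "entry_port": 443,
         "target_protocol": "http", "target_port": 80,
         "certificate_id": "your-cert-id"},   # placeholder cert ID
    ],
    "health_check": {"protocol": "http", "port": 80, "path": "/healthz"},
})

# Managed Postgres cluster; DO handles backups and failover.
db = create("databases", {
    "name": "app-db",
    "engine": "pg",
    "version": "16",
    "region": "nyc3",
    "size": "db-s-1vcpu-1gb",
    "num_nodes": 1,
})
```

Rotating a box out for an update is then just removing it from the LB (drop the tag or the droplet ID), patching it, and adding it back; the Ansible side is the same playbook run against whichever host is out of rotation.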
They wouldn't survive DO losing a DC, but they're not so mission-critical that it's worth the extra complexity to handle that, and I don't recall DO losing a DC in the past 10 years or so.
They did stay up during this outage, which was apparently concentrated mostly on a different product, the 'global load balancer', which ironically is exactly the extra complexity I mentioned for, in theory, surviving a DC outage.
Keep in mind these are "important" in the sense that they justify $100/mo on infra and monitoring, but not "life critical" in the sense that an outage is gonna kill somebody or cost millions of bucks an hour. Once your traffic gets past a certain threshold, DO's costs don't scale that well and you're better off with a large distributed self-managed setup on Hetzner, or buying into a stack like AWS.
To me their LB and DB products hit a real sweet spot -- better reliability than one box, and meaningfully less work than setting up a cluster with a floating IP and heartbeats and all that, for a very minimal price difference.