Comment by AdamN
9 days ago
Jersey City was still fine, and 50 miles can be problematic for certain types of backup (failover) protocols. Regular tape backups would be fine, but secondary databases couldn't be that far away (at least not at the time). I remember my boss at WFC saying that the most traffic over the data lines was in the middle of the night, due to backups, not when everybody was in the office.
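For a sense of the physics behind that failover constraint, here is a minimal back-of-envelope sketch in Python. Light in fibre travels at roughly two-thirds of c; everything other than the 50-mile distance is an illustrative assumption, not a measured figure:

```python
# Back-of-envelope: why a synchronous secondary 50 miles away hurt.
# All figures besides the 50-mile distance are illustrative assumptions.

C_VACUUM = 299_792_458            # speed of light in vacuum, m/s
C_FIBRE = C_VACUUM * 2 / 3        # roughly 2e8 m/s in single-mode fibre

distance_m = 50 * 1609.34         # 50 miles in metres
one_way_s = distance_m / C_FIBRE
rtt_s = 2 * one_way_s

# A strictly serial workload that waits for the remote acknowledgement
# can commit at most once per round trip (equipment and protocol
# overhead only lower this ceiling).
max_serial_commits_per_s = 1 / rtt_s

print(f"one-way propagation: {one_way_s * 1e3:.3f} ms")            # ~0.403 ms
print(f"round trip:          {rtt_s * 1e3:.3f} ms")                # ~0.805 ms
print(f"serial commit ceiling: ~{max_serial_commits_per_s:,.0f}/s")  # ~1,243/s
```

Sub-millisecond round trips sound harmless, but a round trip per commit, stacked on top of early-2000s equipment and protocol overhead, is the constraint being described here.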
Companies big enough will lay the fibre. 50-100 miles of fibre isn't much if you are a billion-dollar business. Even companies like BlackRock, which had their own datacenters, have since taken up Azure. 50 miles of latency is negligible, even for databases.
The WTC attacks were in the 90s and early 00s and back then, 50 miles of latency was anything but negligible and Azure didn’t exist.
I know this because I was working on online systems back then.
I also vividly remember 9/11 and the days that followed. We had a satellite dish with multiple receivers (which wasn't common back then), so we had to run a 3rd-party Linux box to descramble the signal. We watched 24/7 global news on a crappy 5:4 CRT running Windows ME during the attacks. Even in the UK, it was a somber and sobering experience.
For backups, latency is far less of an issue than bandwidth.
Latency is defined by physics (speed of light, through specific conductors or fibres).
Bandwidth is determined by technology, which has advanced markedly in the past 25 years.
Even a quarter century ago, the bandwidth of a station wagon full of tapes was pretty good, even if the latency was high. Physical media transfer to multiple distant points remains a viable backup strategy should you happen to be bandwidth-constrained on realtime links. The media themselves can be rotated / reused multiple times.
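To put rough numbers on the station-wagon point, a minimal sketch; tape capacity, tape count, and trip time are all illustrative assumptions rather than vendor specs (a circa-2000 DLT IV cartridge held about 40 GB native):

```python
# Back-of-envelope: effective bandwidth of driving tapes off-site.
# All figures are illustrative assumptions.

tape_capacity_gb = 40        # roughly a circa-2000 DLT IV cartridge, native
tapes_in_car = 500           # what one vehicle might plausibly carry
trip_time_h = 1.5            # 50 miles of driving plus load/unload time

payload_tb = tape_capacity_gb * tapes_in_car / 1000               # 20 TB
throughput_gbit_s = (payload_tb * 1000 * 8) / (trip_time_h * 3600)

print(f"payload: {payload_tb:.0f} TB")
print(f"effective bandwidth: ~{throughput_gbit_s:.0f} Gbit/s")    # ~30 Gbit/s
print("latency: hours, not milliseconds")
```

Even with these conservative numbers, the car comfortably outruns a circa-2000 OC-48 long-haul link (2.5 Gbit/s); the price is latency measured in hours, which is exactly the trade-off described above.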
Various cloud service providers have offered such services, effectively a datacentre-in-a-truck, which loads up current data and delivers it, physically, to an off-site or cloud location. A similar current offering from AWS is data transfer terminals: <https://www.datacenterdynamics.com/en/news/aws-retires-snowm...>.
The laws of physics haven't changed since the early 00s though; we could build very low-latency point-to-point links back then too.
In the US, dark fiber will run you around $100k per mile. That's expensive for anyone, even if they can afford it. I worked in HFT for 15 years and we had tons of it.
DWDM per-wavelength costs are way, way lower than that, and, with the optional addition of encryption, perfectly secure and fast enough for disk replication for most storage farms. I've been there and done it.
So that's 5 million bucks for 50 miles? If there are other costs not being accounted for, like paying for the right-of-way, that's one thing, but I would think big companies, or in this case a national government, could afford that bill.