Comment by flumpcakes

9 days ago

Companies big enough will lay the fibre. 50-100 miles of fibre isn't much if you are a billion-dollar business. Even companies like BlackRock, who had their own datacenters, have since taken up Azure. 50 miles of latency is negligible, even for databases.

The WTC attacks were in the '90s and early '00s. Back then, 50 miles of latency was anything but negligible, and Azure didn't exist.

I know this because I was working on online systems back then.

I also vividly remember 9/11 and the days that followed. We had a satellite dish with multiple receivers (which wasn't common back then), so we had to run a 3rd-party Linux box to descramble the signal. We watched 24/7 global news on a crappy 5:4 CRT running Windows ME during the attack. Even in the UK, it was a somber and sobering experience.

  • For backups, latency is far less an issue than bandwidth.

    Latency is defined by physics (the speed of light through specific conductors or fibres).

    Bandwidth is determined by technology, which has advanced markedly in the past 25 years.

    Even a quarter century ago, the bandwidth of a station wagon full of tapes was pretty good, even if the latency was high (back-of-envelope numbers at the end of this comment). Physical media transfer to multiple distant points remains a viable backup strategy should you happen to be bandwidth-constrained on real-time links. The media themselves can be rotated / reused multiple times.

    Various cloud service providers have offered such services, effectively a datacentre-in-a-truck, which loads up current data and delivers it, physically, to an off-site or cloud location. A similar current offering from AWS is data transfer terminals: <https://www.datacenterdynamics.com/en/news/aws-retires-snowm...>.
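
    A rough sketch in Python of both points. The figures are assumptions, not measurements: a ~0.68 velocity factor for silica fibre, and a station wagon carrying 200 LTO-1 tapes (100 GB native, early-00s vintage) over 50 miles in about an hour.

    ```python
    C = 299_792_458                  # speed of light in vacuum, m/s
    FIBRE_VELOCITY_FACTOR = 0.68     # assumed typical for silica fibre

    def one_way_latency_ms(distance_km: float) -> float:
        """Pure propagation delay over fibre; ignores switching and serialisation."""
        return distance_km * 1000 / (C * FIBRE_VELOCITY_FACTOR) * 1000

    km_50_miles = 50 * 1.609
    print(f"50 miles one-way:    {one_way_latency_ms(km_50_miles):.2f} ms")      # ~0.39 ms
    print(f"50 miles round trip: {2 * one_way_latency_ms(km_50_miles):.2f} ms")  # ~0.79 ms

    # "Station wagon full of tapes": 200 x 100 GB tapes driven 50 miles in ~1 hour.
    payload_bits = 200 * 100e9 * 8
    drive_seconds = 60 * 60
    print(f"Sneakernet bandwidth: {payload_bits / drive_seconds / 1e9:.0f} Gbit/s")  # ~44 Gbit/s
    ```

    The propagation delay is tiny either way; what changed between then and now is the switching gear and the available link bandwidth.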

    • I’ve covered those points already in other responses. It’s probably worth reading them before assuming I don’t know the differences between the most basic of networking terms.

      I was also specifically responding to the GP's point about latency for DB replication. For backups, one wouldn't have used live replication back then (nor even now, outside of a few enterprise edge cases).

      Snowmobile and its ilk were hugely expensive services, by the way. I've spent a fair amount of time migrating broadcasters and movie studios to AWS, and it was always cheaper and less risky to upload petabytes from the data centre than it was to ship HDDs to AWS. So after conversations with our AWS account manager and running the numbers, we always ended up just uploading the stuff ourselves.

      I’m sure there was a customer who benefited from such a service, but we had petabytes and it wasn’t us. And anyone I worked with who had larger storage requirements didn’t use vanilla S3, so I can’t see how Snowmobile would have worked for them either.

  • The laws of physics haven't changed since the early '00s, though; we could build very low-latency point-to-point links back then too.

    • Switching gear was slower and laying new fibre wasn't an option for your average company. Particularly not point-to-point between your DB server and your replica.

      So if real-time synchronization isn't practical, you're left doing out-of-hours backups, and there you start running into the bandwidth limits of the time.
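
      To put rough numbers on the bandwidth problem (the 500 GB database size, the link menu, and the ~80% usable-throughput figure are all illustrative assumptions, not from any specific deployment):

      ```python
      # Early-00s WAN link rates in Mbit/s (illustrative menu).
      LINKS_MBPS = {"T1": 1.544, "T3": 44.736, "OC-3": 155.52}

      def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
          """Time to move data_gb over a link, assuming ~80% usable throughput."""
          bits = data_gb * 8e9
          return bits / (link_mbps * 1e6 * efficiency) / 3600

      db_gb = 500  # a modest enterprise database of the era, purely illustrative
      for name, mbps in LINKS_MBPS.items():
          print(f"{name:4s}: {transfer_hours(db_gb, mbps):6.1f} h to move {db_gb} GB")
      # T1  :  899.5 h  -- hopeless
      # T3  :   31.0 h  -- longer than any overnight window
      # OC-3:    8.9 h  -- workable, but OC-3 circuits were far from cheap
      ```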

      1 reply →

    • Plus, long-distance links were mostly fibre already. And even regular electrical wires aren't really much slower than fibre in terms of latency. Parent probably meant bandwidth.
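
      A quick comparison of typical velocity factors backs that up (the figures below are assumed typical values; real cables vary). Fibre is actually the slow medium per mile, which is why HFT firms later built microwave routes:

      ```python
      C = 299_792_458  # m/s

      # Assumed typical velocity factors (fraction of c); real cables vary.
      VELOCITY_FACTORS = {
          "silica fibre":    0.68,
          "coaxial cable":   0.66,   # solid-PE coax; foam dielectric is ~0.8+
          "twisted pair":    0.65,
          "microwave (air)": 0.997,
      }

      distance_m = 50 * 1609.34  # 50 miles
      for medium, vf in VELOCITY_FACTORS.items():
          ms = distance_m / (C * vf) * 1000
          print(f"{medium:16s}: {ms:.3f} ms one-way")
      ```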

      5 replies →

    • Yes but good luck trying to get funding approval. There is a funny saying that wealthy people don't become wealthy by giving their wealth away. I think it applies to companies even more.

In the US, dark fiber will run you around $100k/mile. That's expensive for anyone, even if they can afford it. I worked in HFT for 15 years and we had tons of it.

  • DWDM per-wavelength costs are way, way lower than that, and, with the optional addition of encryption, perfectly secure and fast enough for disk replication for most storage farms. I've been there and done it.

    • Assuming that dark fiber is actually dark (without amplifiers/repeaters), I'd wonder how they'd justify the four orders of magnitude (99.99%!) profit margin on said fiber. That already includes the one order of magnitude between a single fibre in a 12-fibre ribbon and a speed pipe with 144-core cable buried opportunistically (when someone is already digging up the ground).

      1 reply →

  • So that's 5 million bucks for 50 miles? If there are other costs not being accounted for, like paying for the right-of-way, that's one thing, but I would think big companies, or in this case a national government, could afford that bill.

    • Yeah, most large electronic finance companies do this. Look up “Sniper in Mahwah” for some dated but really interesting reading on this game.