Comment by jonatron

7 hours ago

Why would you call colocation "building your own data center"? You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?

I have to second this. While it takes much effort and in-depth knowledge to build up from an “empty” cage, it’s still far from dealing with everything from building permits to planning and building a data center to code, including redundant power lines, AC, and fibre.

Still, kudos for going this path in the cloud-centric times we live in.

  • While it is more complex to actually build out the center, a lot of that is specific to the region you are doing it in.

    They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus, say, one in Ohio or Utah is a very different endeavor with different design considerations.

  • Do I have stories.

    One of the better ones was the dead possum in the drain during a thunderstorm.

    > So do we throw the main switch before we get electrocuted? Or do we try to poke enough holes in it that it gets flushed out? And what about the half million in servers that are going to get ruined?

    Sign up to my patreon to find out how the story ended.

Not saying I don't agree with you, but most tech businesses that have their own "data center" usually have a private cage in a colo.

  • They usually don’t say they are building their own datacenter, though. It is different to say something like “our website runs in our datacenter” than to say you built a datacenter. You would still say “at our office buildings”, even if you are only renting a few offices in an office park.

Dealing with power at that scale, arranging your own ISPs, seems a bit beyond your normal colocation project, but I haven’t been in the data center space in a very long time.

  • I worked for a colo provider for a long time. Many tenants arranged for their own ISPs, especially the ones large enough to use a cage.

  • One of the many reasons we went with Switch for our DC is that they have a service to handle all of that for you. Having stumbled through doing this ourselves before, we know it can be pretty tricky to negotiate everything.

    We had one provider give us a great price and then bait-and-switch us at the last moment, telling us there was some other massive installation charge they hadn't realized we had to pay.

    Switch Connect/Core is based off the old Enron business that Rob (CEO) bought...

    https://www.switch.com/switch-connect/ https://www.switch.com/the-core-cooperative/

It seems a bit disingenuous but it’s common practice. Even the hyperscalers, who do have their own datacenters, include their colocation servers in the term “datacenter.” Good luck finding the actual, physical location of a server in GCP europe-west2-a (“London”). Maybe it’s in a real Google datacenter in London! Or it could be in an Equinix datacenter in Slough, one room away from AWS eu-west-1.

Cloudflare has also historically used “datacenter” to refer to their rack deployments.

All that said, for the purpose of the blog post, “building your own datacenter” is misleading.

  • You're correct, there are multiple flavors of Google Cloud Locations. The "Google concrete" ones are listed at google.com/datacenters and London isn't on that list, today.

    cloud.google.com/about/locations lists all the locations where GCE offers service, which is a superset of the large facilities that someone would call a "Google Datacenter". I mostly liked to refer to the distinction as Google concrete (we built the building) or not. Ultimately, even in locations that are shared colo spaces, or rented, it's still Google putting custom racks there, integrating into the network and services, etc. So from a customer perspective, you should pick the right location for you. If that happens to be in a facility where Google poured the concrete, great! If not, it's not the end of the world.

    P.S., I swear the certification PDFs used to include this information (e.g., https://cloud.google.com/security/compliance/iso-27018?hl=en) but now these are all behind "Contact Sales" and some new Certification Manager page in the console.

    Edit: Yes! https://cloud.google.com/docs/geography-and-regions still says:

    > These data centers might be owned by Google and listed on the Google Cloud locations page, or they might be leased from third-party data center providers. For the full list of data center locations for Google Cloud, see our ISO/IEC 27001 certificate. Regardless of whether the data center is owned or leased, Google Cloud selects data centers and designs its infrastructure to provide a uniform level of performance, security, and reliability.

    So someone can probably use web.archive.org to get the ISO-27001 certificate PDF from whenever the last time it was still up.

  • The hyperscalers are absolutely not colo-ing their general purpose compute at Equinix! A cage for routers and direct connect, maybe some limited Edge CDN/compute at most.

    Even where they do lease wholesale space, you'd be hard pushed to find examples of more than one in a single building. If you count them as Microsoft, Google, AWS then I'm not sure I can think of a single example off the top of my head. Only really possible if you start including players like IBM or Oracle in that list.

    • Maybe leasing wholesale space shouldn’t be considered colocation, but GCP absolutely does this and the Slough datacenter was a real example.

      I can’t dig up the source atm but IIRC some Equinix website was bragging about it (and it wasn’t just about direct connect to GCP).

    • The best part about adamantly making such a claim is that anybody who knows better also knows better than to break NDA and pull a Warthunder to prove that the CSPs do use colo facilities, so you're not going to get anyone who knows better to disagree with you and say AWS S3 or GCP compute is colo-ed at a specific colo provider.

  • Indeed, I've seen "data center" maps and was surprised they were just a tenant in some other guy's data center.

> Why would you call colocation "building your own data center"?

The cynic in me says this was written by sales/marketing people targeted specifically at a whole new generation of people who've never laid hands on the bare metal or racked a piece of equipment or done low voltage cabling, fiber cabling, and "plug this into A and B AC power" cabling.

By this, I mean people who've never done anything that isn't GCP, Azure, AWS, etc. Many terminologies related to bare metal infrastructure are misused by people who haven't been around in the industry long enough to have been required to DIY all their own infrastructure on their own bare metal.

I really don't mean any insult to people reading this who've only ever touched the software side, but if a document is describing the general concept of hot aisles and cold aisles to an audience in such a way that it assumes they don't know what those are, it's at a very introductory/beginner level of understanding the OSI layer 1 infrastructure.

  • I think that's my fault BTW (Railway Founder here). I asked Charith to cut down a bit on the details to make sure it was approachable to a wider audience (and most people have only done Cloud).

    I wanted to start off with the 101 content to see if people found it approachable/interesting. He's got like reams and reams of 201, 301, 401

    Next time I'll stay out of the writing room!

  • I mean the more people realize that the cloud is now a bad deal, the better.

    When the original AWS instances came out, it would take you about two years of on-demand pricing to pay for the same hardware on-prem. Now it's between two weeks for ML-heavy instances and six months for medium CPU instances (rough break-even arithmetic sketched below).

    It just doesn't make sense to use the cloud for anything past prototyping unless you want Bezos to have a bigger yacht.
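
    As a rough illustration of that break-even arithmetic, here is a minimal sketch; the hardware costs and hourly rates are hypothetical placeholders, not real AWS or vendor quotes:

    ```python
    # Back-of-envelope break-even: months of 24/7 on-demand usage until the
    # cumulative cloud bill matches the up-front cost of comparable hardware.
    # All prices below are hypothetical placeholders, not real quotes.

    def breakeven_months(hardware_cost_usd: float, on_demand_usd_per_hour: float) -> float:
        hours_per_month = 730  # average hours in a month
        return hardware_cost_usd / (on_demand_usd_per_hour * hours_per_month)

    # e.g. a GPU box vs. an ML-heavy instance, and a mid-range server vs. a medium CPU instance
    print(f"{breakeven_months(250_000, 98.00):.1f} months")  # ~3.5 months
    print(f"{breakeven_months(4_000, 0.80):.1f} months")     # ~6.8 months
    ```

    This ignores colo space, power, and ops time on the on-prem side, and reserved/committed-use discounts on the cloud side, so it's only a starting point for a real comparison.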

> You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?

TFA explains what they're doing; they literally write this:

"In general you have three main choices: Greenfield buildout (...), Cage Colocation (getting a private space inside a provider's datacenter enclosed by mesh walls), or Rack colocation...

We chose the second option"

I don't know how much clearer they can be.