Comment by jonatron

3 months ago

Why would you call colocation "building your own data center"? You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?

I have to second this. While it takes much effort and in-depth knowledge to build up from an “empty” cage, it’s still far from dealing with everything involved in planning and building a data center to code, from building permits to redundant power lines, AC, and fibre.

Still kudos going this path in the cloud-centric time we live in.

  • Yes, the second is much more work, orders of magnitude at least.

    • > Yes, the second is much more work, orders of magnitude at least.

      I feel it's important to stress that the difficulty of colocating something, let alone actually building a data center, is exactly what makes cloud computing so enticing and popular.

      Everyone focuses on trivia like OpEx vs CapEx and dynamic scaling, but actually plugging in the hardware in a secure setting and getting it to work reliably is a massive undertaking.

      1 reply →

  • While it is more complex to actually build out the center, a lot of that is specific to the region you are doing it in.

    They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus one in Ohio or Utah is a very different endeavor with different design considerations.

    • > They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus one in Ohio or Utah is a very different endeavor with different design considerations.

      What point are you trying to make? It does not matter where you are in the world, or what local laws exist or permits are required, racking up servers in a cage is much less difficult than physically building a data center (of which racking up servers is a part).

      2 replies →

    • > They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus one in Ohio or Utah is a very different endeavor with different design considerations.

      Regarding data centers that cost 9 figures and up:

      For the largest players, there’s not a ton of variation. A combination of evaporative cooling towers and chillers is used to reject heat. This is a consequence of evaporative open-loop cooling being 2-3x more efficient than a closed-loop system.

      There will be multiple medium-voltage electrical services, usually from different utilities or substations, with backup generators and UPSes and paralleling switchgear to handle failover between normal, emergency, and critical power sources.

      There’s not a lot of variation since the two main needs of a data center are reliable electricity and the ability to remove heat from the space, and those are well-solved problems in mature engineering disciplines (ME and EE). The huge players are plopping these all across the country and repeatability/reliability is more important than tailoring the build to the local climate.

      FWIW my employer has done billions of dollars of data center construction work for some of the largest tech companies (members of Mag7) and I’ve reviewed construction plans for multiple data centers.

      1 reply →

    • Issues in building your own physical data center (based on a 15MW location some people I know built):

      1. Thermal. To get your PUE down below, say, 1.2 you need to do things like hot aisle containment or, better yet, water cooling - the hotter your heat, the cheaper it is to get rid of.[*]

      2. Power distribution. How much power do you waste getting it to your machines? Can you run them on 220V, so their power supplies are more efficient?

      3. Power. You don't just call your utility company and ask them to run 10+MW from the street to your building.

      4. Networking. You'll probably need redundant dark fiber running somewhere.

      1 and 2 are independent of regulatory domain. 3 involves utilities, not governments, and is probably a clusterf*ck anywhere; 4 isn't as bad (anywhere in the US; not sure elsewhere) because it's not a monopoly, and you can probably find someone to say "yes" for a high enough price.

      There are people everywhere who are experts in site acquisition, permits, etc. Not so many who know how to build the thermals and power, and who aren't employed by hyperscalers who don't let them moonlight. And depending on your geographic location, getting those megawatts from your utility may be flat out impossible.

      This assumes a new build. Retrofitting an existing building probably ranges from difficult to impossible, unless you're really lucky in your choice of building.

      [*] hmm, the one geographic issue I can think of is water availability. If you can't get enough water to run evaporative coolers, that might be a problem - e.g. dumping 10MW into the air requires boiling off I think somewhere around 100K gallons of water a day.
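The footnote's estimate checks out as a back-of-the-envelope calculation. Assuming all 10MW of heat is rejected by evaporation alone (latent heat of vaporization of water ≈ 2.26 MJ/kg, and 1 kg ≈ 1 L), you get roughly 100K gallons a day:

```python
# Sanity check on the "~100K gallons/day to reject 10 MW" figure.
HEAT_LOAD_W = 10e6             # 10 MW of heat to reject
LATENT_HEAT_J_PER_KG = 2.26e6  # latent heat of vaporization of water (approx.)
L_PER_GALLON = 3.785
SECONDS_PER_DAY = 86400

kg_per_s = HEAT_LOAD_W / LATENT_HEAT_J_PER_KG  # ~4.4 kg of water boiled off per second
gallons_per_day = kg_per_s * SECONDS_PER_DAY / L_PER_GALLON  # 1 kg of water ≈ 1 L
print(round(gallons_per_day))  # ~101,000 gallons/day, consistent with the ~100K claim
```

Real cooling towers evaporate a bit less per watt (some heat leaves as sensible heat, and blowdown adds water use), so treat this as an order-of-magnitude estimate, not a design number.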

  • Do I have stories.

    One of the better ones was the dead possum in the drain during a thunderstorm.

    >So do we throw the main switch before we get electrocuted? Or do we try to poke enough holes in it that it gets flushed out? And what about the half million in servers that are going to get ruined?

    Sign up to my patreon to find out how the story ended.

Dealing with power at that scale, and arranging your own ISPs, seems a bit beyond your normal colocation project, but I haven’t been in the data center space in a very long time.

  • I worked for a colo provider for a long time. Many tenants arranged for their own ISPs, especially the ones large enough to use a cage.

  • One of the many reasons we went with Switch for our DC is because they have a service to handle all of that for you. Having stumbled on doing this ourselves before, it can be pretty tricky to negotiate everything.

    We had one provider give us a great price and then pull a bait-and-switch at the last moment, telling us there was some other massive installation charge that they hadn't realized we had to pay.

    Switch Connect/Core is based off the old Enron business that Rob (CEO) bought...

    https://www.switch.com/switch-connect/ https://www.switch.com/the-core-cooperative/

> Why would you call colocation "building your own data center"?

The cynic in me says this was written by sales/marketing people targeted specifically at a whole new generation of people who've never laid hands on the bare metal or racked a piece of equipment or done low voltage cabling, fiber cabling, and "plug this into A and B power AC power" cabling.

By this, I mean people who've never done anything that isn't GCP, Azure, AWS, etc. Many terminologies related to bare metal infrastructure are misused by people who haven't been around in the industry long enough to have been required to DIY all their own infrastructure on their own bare metal.

I really don't mean any insult to people reading this who've only ever touched the software side, but if a document is describing the general concept of hot aisles and cold aisles to an audience in such a way that it assumes they don't know what those are, it's at a very introductory/beginner level of understanding the OSI layer 1 infrastructure.

  • I think that's my fault BTW (Railway Founder here). I asked Charith to cut down a bit on the details to make sure it was approachable to a wider audience (And most people have only done Cloud)

    I wanted to start off with the 101 content to see if people found it approachable/interesting. He's got like reams and reams of 201, 301, 401

    Next time I'll stay out of the writing room!

    • Sitting on the front page of HN with a good read, and what is ultimately company promo and a careers link seems like a job well done. It made me read/click.

      Yes, building a physical DC is much wider scope than colo. This is one part of that, which is also still interesting. The world is built on many, many layers of abstraction which can all take lifetimes to explore. There are non-devs who enjoy learning about software, web-devs who dabble in compilers, systems programmers curious about silicon, EE's that are aspiring physicists, who in turn peek into the universe of pure math (cue yes, that xkcd you're thinking of).

      A 'full stack' overview of a standalone DC build still has to set a bound somewhere. This was an approachable intro, and I look forward to reading more from the layers you operate.

  • I mean, the more people realize that the cloud is now a bad deal, the better.

    When the original AWS instances came out, it would take you about two years of on-demand usage to pay for the same hardware on-prem. Now it's between two weeks for ML-heavy instances and six months for medium CPU instances.

    It just doesn't make sense to use the cloud for anything past prototyping, unless you want Bezos to have a bigger yacht.
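As a sketch of that break-even claim, with made-up illustrative prices (these are not actual AWS or hardware quotes), the payback period is just the hardware cost divided by the monthly cloud bill:

```python
# Hypothetical numbers for illustration only - not real AWS or vendor pricing.
hardware_cost = 20_000.0  # assumed one-time cost of a comparable on-prem server (USD)
cloud_monthly = 3_500.0   # assumed monthly on-demand cost of the equivalent instance
breakeven_months = hardware_cost / cloud_monthly
print(f"cloud spend overtakes the hardware price after ~{breakeven_months:.1f} months")
```

A fair comparison would also fold in colo fees, power, and ops time on the on-prem side, which stretches the break-even point out somewhat but rarely back to years for GPU-heavy workloads.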

Not saying I don't agree with you, but most tech businesses that have their own "data center" usually have a private cage in a colo.

  • They usually don’t say they are building their own datacenter, though. It is different to say something like, “our website runs in our datacenter” than saying you built a datacenter. You would still say, “at our office buildings”, even if you are only renting a few offices in an office park.

    • Don't the hyperscalers outsource datacenter construction and operation? Maybe it's not clear where to draw the line because the datacenters are owned or operated by disposable shell companies for various reasons.

  • When you rent an apartment, you can still invite people to your apartment for drinks. But you don't claim to have built an apartment.

Come to my office and tell me how it’s not actually my office because it’s leased by my company from the investment vehicle for institutional investors that owns the building that stands on land owned by someone else again that was stolen by the British anyway and therefore calling it “my office” makes me a fool and a liar and I should just “say what I mean”.

  • I think the word GP is objecting to isn't "your own" but rather "build".

    For people who have taken empty lots and constructed new data centers (i.e., the whole building) on them from scratch, the phrase "building a datacenter" involves a nonzero amount of concrete.

    OP seems to have built out a data hall - which is still a cool thing in its own right! - but for someone like me who's interested in "baking an apple pie from scratch", the mismatch between the title and the content was slightly disappointing.

    • It doesn't matter which word. Which I should confess makes my remark above appear, in retrospect, to be something of a trap; because when parsing ambiguity, it's a matter of simple courtesy and wisdom to choose the interpretation that best illustrates the point rather than complaining about the ones that don't.

      I say this not merely to be a pompous smartass but also because it illustrates and echoes the very same problem the top-level comment embodies, viz. that some folks struggle with vernacular, nonliteral, imprecise, and nonlinear language constructs. Yet grasping this thistle to glark one's grok remains parcel-and-part of comprehension and complaining about it won't remeaningify the barb'd disapprehensible.

      Your disappointment, nevertheless, seems reasonable, because the outcome was, after all, a bait-and-route.

  • When you invite a girl/guy over, do you say "let's meet at my place" or "let's meet at the place I'm renting"? The possessive pronoun does not necessarily express ownership, it can just as well express occupancy.

    • I wouldn't oppose telling a client "we can meet at your data centre". I would not tell my wife "we need to discuss building our apartment complex" when we are planning interior decorations in our flat.

      2 replies →

  • > Come to my office and tell me how it’s not actually my office (...)

    I think you're failing to understand the meaning and the point of "building your own datacenter".

    Yes, you can talk about your office all you'd like. Much like OP can talk about their server farm and their backend infrastructure.

    What you cannot talk about is your own office center. You do not own it. You rent office space. You do only a small fraction of the work required to operate an office, because you effectively offloaded the hard part to your landlord.

It seems a bit disingenuous but it’s common practice. Even the hyperscalers, who do have their own datacenters, include their colocation servers in the term “datacenter.” Good luck finding the actual, physical location of a server in GCP europe-west2-a (“London”). Maybe it’s in a real Google datacenter in London! Or it could be in an Equinix datacenter in Slough, one room away from AWS eu-west-1.

Cloudflare has also historically used “datacenter” to refer to their rack deployments.

All that said, for the purpose of the blog post, “building your own datacenter” is misleading.

  • You're correct, there are multiple flavors of Google Cloud Locations. The "Google concrete" ones are listed at google.com/datacenters and London isn't on that list, today.

    cloud.google.com/about/locations lists all the locations that GCE offers service, which is a superset of the large facilities that someone would call a "Google Datacenter". I liked to mostly refer to the distinction as Google concrete (we built the building) or not. Ultimately, even in locations that are shared colo spaces, or rented, it's still Google putting custom racks there, integrating into the network and services, etc. So from a customer perspective, you should pick the right location for you. If that happens to be in a facility where Google poured the concrete, great! If not, it's not the end of the world.

    P.S., I swear the certification PDFs used to include this information (e.g., https://cloud.google.com/security/compliance/iso-27018?hl=en) but now these are all behind "Contact Sales" and some new Certification Manager page in the console.

    Edit: Yes! https://cloud.google.com/docs/geography-and-regions still says:

    > These data centers might be owned by Google and listed on the Google Cloud locations page, or they might be leased from third-party data center providers. For the full list of data center locations for Google Cloud, see our ISO/IEC 27001 certificate. Regardless of whether the data center is owned or leased, Google Cloud selects data centers and designs its infrastructure to provide a uniform level of performance, security, and reliability.

    So someone can probably use web.archive.org to get the ISO-27001 certificate PDF from whenever the last time it was still up.

  • The hyperscalers are absolutely not colo-ing their general purpose compute at Equinix! A cage for routers and direct connect, maybe some limited Edge CDN/compute at most.

    Even where they do lease wholesale space, you'd be hard pushed to find examples of more than one in a single building. If you count them as Microsoft, Google, AWS then I'm not sure I can think of a single example off the top of my head. Only really possible if you start including players like IBM or Oracle in that list.

    • Maybe leasing wholesale space shouldn’t be considered colocation, but GCP absolutely does this and the Slough datacenter was a real example.

      I can’t dig up the source atm but IIRC some Equinix website was bragging about it (and it wasn’t just about direct connect to GCP).

      7 replies →

    • The best part about adamantly making such a claim is that anybody who knows better also knows better than to break NDA and pull a Warthunder to prove that the CSPs do use colo facilities, so you're not going to get anyone who knows better to disagree with you and say AWS S3 or GCP compute is colo-ed at a specific colo provider.

      2 replies →

  • > It seems a bit disingenuous but it’s common practice. Even the hyperscalers, who do have their own datacenters, include their colocation servers in the term “datacenter.”

    I think you're conflating things.

    Those hypothetical hyperscalers can advertise their availability zones and deployment regions, but they do not claim they built the data centers. They provide a service, but they do not make broad claims on how they built infrastructure.

> You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?

TFA explains what they're doing; they literally write this:

"In general you have three main choices: Greenfield buildout (...), Cage Colocation (getting a private space inside a provider's datacenter enclosed by mesh walls), or Rack colocation...

We chose the second option"

I don't know how much clearer they can be.

  • The title is "So you want to build your own data center" and the article is about something else. It's nice that they say that up front, but it's valid to criticize the title.

  • Only one of those options is ‘building your own data center’, and I’ll give you three guesses as to which one it is. I’ll even give you a hint: ‘greenfield’ is in the correct answer.