Comment by Sebb767

3 days ago

I dislike those black and white takes a lot. It's absolutely true that most startups that just run an EC2 instance will save a lot of cash going to Hetzner, Linode, Digital Ocean or whatever. I do host at Hetzner myself and so do a lot of my clients.

That being said, the cloud does have a lot of advantages:

- You're getting a lot of services readily available. Need offsite backups? A few clicks. Managed database? A few clicks. Multiple AZs? Available in seconds.

- You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware) and everything is available right now [0]

- Peak-heavy loads can be a lot cheaper. Mostly irrelevant for your average compute load, but things are quite different if you need to train an LLM

- Many services are already certified according to all kinds of standards, which can be very useful depending on your customers

Also, engineering time and time in general can be expensive. If you are a solo entrepreneur or a slow-growth company, you have a lot of engineering time basically for free. But in a quick-growth or prototyping phase, not to speak of venture funding, things can be quite different. Buying engineering time for >150€/hour can quickly offset a lot of savings [1].

Does this apply to most companies? No. Obviously not. But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.

[0] Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.

[1] Just to be fair, debugging cloud errors can be time consuming, too, and experienced AWS engineers will not be cheaper. But a self-hosted equivalent of an RDS instance with solid backups will usually not amortize quickly if you need to pay someone to set it up.

You don't actually need any of those things until you no longer have a "project", but a business which will allow you to pay for the things you require.

You'd be amazed by how far you can get with a home linux box and cloudflare tunnels.
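As a sketch of that setup (the tunnel name and hostname below are placeholders, not anything from the post), exposing a service on a home box through a Cloudflare tunnel is roughly:

```shell
# One-time setup (names are placeholders)
cloudflared tunnel login                    # authorize against your Cloudflare account
cloudflared tunnel create homebox           # creates the tunnel and a credentials file
cloudflared tunnel route dns homebox app.example.com

# Run it, forwarding the public hostname to a local port
cloudflared tunnel run --url http://localhost:8080 homebox
```

No port forwarding or public IP needed, which is what makes the residential-connection setup viable.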

  • On this site, I've seen these kinds of takes repeatedly over the past years, so I went ahead and built a little forum that consists of a single Rust binary and SQLite. The binary runs on a Mac Mini in my bedroom with Cloudflare tunnels. I get continuous backups with Litestream, and testing backups is as trivial as running `litestream restore` on my development machine and then running the binary.

    Despite some pages issuing up to 8 database queries, I haven't seen responses take more than about 4 - 5 ms to generate. Since I have 16 GB of RAM to spare, I just let SQLite mmap the whole database and store temp tables in RAM. I can further optimize the backend by e.g. replacing Tera with Askama and optimizing the SQL queries, but the easiest win for latency is to just run the binary in a VPS close to my users. However, the current setup works so well that I just see no point in changing what little "infrastructure" I've built. The other cool thing is the fact that the backend + litestream uses at most ~64 MB of RAM. Plenty of compute and RAM to spare.

    It's also neat being able to allocate a few cores on the same machine to run self-hosted GitHub actions, so you can have the same machine doing CI checks, rebuilding the binary, and restarting the service. Turns out the base model M4 is really fast at compiling code compared to just about every single cloud computer I've ever used at previous jobs.
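    For anyone curious what that backup loop looks like in practice, a minimal Litestream setup is roughly the following (the config path, database path, and bucket are placeholders, not the commenter's actual setup):

    ```shell
    # /etc/litestream.yml would contain something like:
    # dbs:
    #   - path: /var/lib/forum/forum.db
    #     replicas:
    #       - url: s3://my-backup-bucket/forum

    # Run alongside the binary so every write is continuously replicated
    litestream replicate -config /etc/litestream.yml

    # Test the backups by restoring to a scratch copy on a dev machine
    litestream restore -config /etc/litestream.yml -o /tmp/forum.db /var/lib/forum/forum.db
    ```

    The restore step doubles as the backup test the parent comment describes: if the restored copy boots the binary, the backup is good.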

  • Exactly! I've been self hosting for about two years now, on a NAS with Cloudflare in front of it. I need the NAS anyway, and Cloudflare is free, so the marginal cost is zero. (And even if the CDN weren't free it probably wouldn't cost much.)

    I had two projects reach the front page of HN last year, everything worked like a charm.

    It's unlikely I'll ever go back to professional hosting, "cloud" or not.

    • If you have explosive growth, sure cloud.

      The vast majority of us who are actually technically capable are better served self-hosting.

      Especially with tools like cloudflare tunnels and Tailscale.

My pet peeves are:

1. For small stuff, AWS et al aren't that much more expensive than Hetzner, mostly in the same ballpark, maybe 2x in my experience.

2. What's easy to underestimate for _developers_ is that your self hosted setup is most likely harder to get third party support for. If you run software on AWS, you can hire someone familiar with AWS and as long as you're not doing anything too weird, they'll figure it out and modify it in no time.

I absolutely prefer self hosting on root servers; it has always been my go-to approach for my own companies, big and small stuff. But for people that can't or don't want to mess with their infrastructure themselves, I do recommend the cloud route even with all the current anti-hype.

  • > 2. What's easy to underestimate for _developers_ is that your self hosted setup is most likely harder to get third party support for. If you run software on AWS, you can hire someone familiar with AWS and as long as you're not doing anything too weird, they'll figure it out and modify it in no time.

    If you're at an early/smaller stage, you're not doing anything too fancy either way. Even self hosted, it will probably be easy enough to understand that you're just deploying a Rails instance, for example.

    It only becomes trickier if you're handling a ton of traffic or apply a ton of optimizations and end up in a state where a team of sysadmins should be needed while you're doing it alone and ad-hoc. IMHO the important part is to properly realize when things will get complicated and move to a proper org or stack before you're stuck.

    • You'd think that, but from what I've seen, some people come up with pretty nasty self hosting setups. All the way from "just manually set it all up via SSH last year" to Kubernetes. Of course, people can and also definitely do create a mess on AWS. It's just that I've seen that _far_ less.

  • One way of solving for this is to just use K3s or even just plain Docker. It is then just Kubernetes/containers and you can hire a lot of people who understand that.

    • Absolutely recommend k3s. Start with a single node and keep on scaling as customer base increases.
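      The single-node-then-scale path is short; per the standard k3s install instructions (server address and token below are placeholders you'd fill in):

      ```shell
      # First node: installs k3s as a systemd service, acting as server + agent
      curl -sfL https://get.k3s.io | sh -

      # Later, when you need a second node, grab the join token from the server...
      sudo cat /var/lib/rancher/k3s/server/node-token

      # ...and join a new machine as an agent
      curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
      ```

      Starting single-node means the day you need a second machine, it's one command rather than a migration.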

  • > mostly in the same ballpark, maybe 2x in my experience.

    2x is the same ballpark???

>most startups that just run an EC2 instance will save a lot of cash going to Hetzner, Linode, Digital Ocean or whatever. I do host at Hetzner myself and so do a lot of my clients. That being said, the cloud does have a lot of advantages:

When did Linode and DO get dropped from being part of the cloud?

What used to separate VPS and cloud was resources at per-second billing, which DO and Linode, along with a lot of second-tier hosts, also offer. They are part of the cloud.

Scaling used to be an issue, because buying and installing your hardware, or sending it to a DC to be installed and made ready, took too much time. Dedicated server offerings weren't big enough at the time, and the highest core count available in 2010 was an 8-core Xeon CPU. Today we have 256-core EPYC Zen 6c parts with likely double the IPC. Scaling issues that used to require a rack of servers can now be handled by a single server, with everything fitting in RAM.

Managed database? PlanetScale or Neon.

A lot of issues for medium-to-large projects that the "cloud" managed to solve are no longer an issue in 2025, unless you are in the top 5-10% of projects that require that sort of flexibility.

  • For a lot of people (not me), if it's not from AWS, Azure, GCP or Oracle then it's not cloud, it's just a sparkling hosting provider.

    I had someone on this site arguing that Cloudflare isn't a cloud provider...

> But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.

Agreed. These sort of takedowns usually point to a gap in the author's experience. Which is totally fine! Missing knowledge is an opportunity. But it's not a good look when the opportunity is used for ragebait, hustlr.

> A few clicks.

Getting through AWS documentation can be fairly time consuming.

  • Figuring out how to do db backups _can_ also be fairly time consuming.

    There's a question of whether you want to spend time learning AWS or spend time learning your DB's hand-rolled backup options (on top of the question of whether learning AWS's thing even absolves you of understanding your DB's internals anyways!)

    I do think there's value in "just" doing a thing instead of relying on the wrapper. Whether that's easier or not is super context and experience dependent, though.

    • > Figuring out how to do db backups _can_ also be fairly time consuming.

      apt install automysqlbackup autopostgresqlbackup

      Though if you have proper filesystem snapshots then they should always see your database as consistent, right? So you can even skip database tools and just learn to make and download snapshots.
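      For the hand-rolled route, a dated dump with simple rotation is only a few lines of cron-able shell (database name and paths here are placeholders):

      ```shell
      #!/bin/sh
      # Nightly logical backup with 7-day rotation (sketch; names are placeholders)
      BACKUP_DIR=/var/backups/postgres
      mkdir -p "$BACKUP_DIR"
      pg_dump mydb | gzip > "$BACKUP_DIR/mydb-$(date +%F).sql.gz"
      # drop dumps older than 7 days
      find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -delete
      ```

      The apt packages above do essentially this for you, with config in /etc instead of a script.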


    • Hmmm, I think you have to figure out how to do your database backups anyway as trying to get a restorable backup out of RDS to use on another provider seems to be a difficult task.

      Backups that are stored with the same provider are good, providing the provider is reliable as a whole.

      (Currently going through the disaster recovery exercise of, "What if AWS decided they didn't like us and nuked our account from orbit.")


    • most definitely do not want to spend time learning aws… would rather learn about typewriter maintenance

  • And making sure you're not making a security configuration mistake that will accidentally leak private data to the open internet because of a detail of AWS you were unaware of.

  • And learning TypeScript and CDK, if we're comparing scripted, repeatable setups, which you should be doing from the start.

    • > repeatable setups which you should be doing from the start

      Yes, but not with

      > TypeScript and CDK

      Unless your business includes managing infrastructure with your product, for whatever reason (like you provision EC2 instances for your customers and that's all you do), there is no reason to shoot yourself in the foot with a fully fledged programming language for something that needs to be as stable as infrastructure. The saying is Infrastructure as Code, not with code. Even if you need to learn Terraform from scratch but already know TypeScript, you would still save time compared to learning CDK, figuring out what is possible with it, and debugging issues down the line.


> You're getting a lot of services readily available. Need offsite backups? A few clicks

I think it is a lot safer for backups to be with an entirely different provider. It protects you in case of account compromise, account closure, disputes.

If using cloud and you want to be safe, you should be multi-cloud. People have been saved from disaster by multi-cloud setups.
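A second-provider copy doesn't have to be elaborate; something like rclone mirroring your primary backup bucket to an unrelated provider covers the account-compromise and account-closure cases (the remote names below are placeholders you'd set up via `rclone config`):

```shell
# Mirror the primary provider's backup bucket to a second, unrelated provider.
# "aws-backups" and "b2-backups" are placeholder remotes configured beforehand.
rclone sync aws-backups:my-app-backups b2-backups:my-app-backups --checksum
```

Run on a schedule, that gives you an offsite copy that survives losing the first account entirely.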

> You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware)

Not true for VPSes or rented dedicated servers either.

> Peak-heavy loads can be a lot cheaper.

They have to be very spiky indeed, though. LLMs might fit, but a lot of compute-heavy spiky loads do not. I saved a client money on video transcoding that only happened once per upload, and only over a month or two a year, by renting a dedi all year round rather than using the AWS transcoding service.

> Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.

You have to do work to ensure things run across multiple availability zones (and preferably regions) anyway.

> But an RDS instance with solid backups-equivalent will usually not amortize quickly, if you need to pay someone to set it up.

You have more forced upgrades.

An unmanaged database will only need a lot of work if operating at large scale. If you are, then it's probably well worth employing a DBA anyway, as an AWS or similar managed DB is not going to do all the optimising and tuning a DBA will do.

Any serious business will (might?) have hundreds of TBs of data. I store that in our DC, with a second DC for backup, for about 1/10 the price of what it would cost in S3.

When does the cloud start making sense?

  • In my case we have a B2B SaaS where access patterns are occasional, revenue per customer is high, general server load is low. Cloud bills just don’t spike much. Labor is 100x the cost of our servers so saving a piddly amount of money on server costs while taking on even just a fraction of one technical employee’s worth of labor costs makes no sense.

I think compliance is one of the key advantages of cloud. When you go through SOC2 or ISO27001, you can just tick off entire categories of questions by saying 'we host on AWS/GCP/Azure'.

It's really shitty that we all need to pay this tax, but I've just been asked about whether our company has armed guards and redundant HVAC systems in our DC, and I wouldn't know how to answer that apart from saying that 'our cloud provider has all of those'.

  • In my experience you still have to provide an awful lot of "evidence". I guess the advantage of AWS/GCP/Cloud is that they are so ubiquitous you could literally ask an LLM to generate fake evidence to speed up the process.

> That being said, the cloud does have a lot of advantages:

Another advantage is that if you aim to provide a global service consumed throughout the world then cloud providers allow you to deploy your services in a multitude of locations in separate continents. This alone greatly improves performance. And you can do that with a couple of clicks.

linode was better and had cheaper pricing before being bought by akamai

  • I don’t feel like anything really changed? Fairly certain the prices haven’t changed. It’s honestly been pleasantly stable. I figured I’d have to move after a few months, but we’re a few years into the acquisition and everything still works.

  • No longer getting DDOSed multiple years in a row on Christmas Eve is worth whatever premium Akamai wants to charge over old Linode.

  • Akamai has some really good infrastructure, and an extremely competent global cdn and interconnects. I was skeptical when linode was acquired, but I value their top-tier peering and decent DDoS mitigation which is rolled into the cost.

  • Whoa, an acquisition made things worse for everyone but the people who cashed out? Crazy, who could have seen that coming

    • Guess you came for the hot take without actually using the service or participating in any intelligent conversation. All the sibling comments observe that nothing you are talking about happened.

      Snarky ignorant comments like yours ruin Hacker News and the internet as a whole. Please reconsider your mindset for the good of us all.

To me DO is a cloud. It is pricey (for performance) and convenient. It is possibly a wiser bet than AWS for a startup that wants to spend less developer (read expensive!) time on infra.

I started out with linode, a decade ago.

It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous.

AWS has a bunch of startup credits you can use, if you're smart.

But if you want free hosting, nothing beats just CloudFlare. They are literally free and even let you sign up anonymously with any email. They don't even require a credit card, unlike the other ones. You can use cloudflare workers and have a blazing fast site, web services, and they'll even take care of shooing away bots for you. If you prefer to host something on your own computer, well then use their cache and set up a cloudflare tunnel. I've done this for Telegram bots for example.

Anything else - just use APIs. Need inference? Get a bunch of Google credits, and load your stuff into Vertex or whatever. Want to take payments anonymously from around the world? Deploy a dapp. Pay nothing. Literally nothing!

LEVEL 2:

And if you want to get extra fancy, have people open their browser tabs and run your javascript software in there, earning your cryptocurrency. Now you've got access to tons of people willing to store chunks of files for you, run GPU inference, whatever.

Oh do you want to do distributed inference? Wasmcloud: https://wasmcloud.com/blog/2025-01-15-running-distributed-ml... ... but I'd recommend just paying Google for AI workloads

Want livestreaming that's peer to peer? We've got that too: https://github.com/Qbix/Media/blob/main/web/js/WebRTC.js

PS: For webrtc livestreaming, you can't get around having to pay for TURN servers, though.

LEVEL 3:

Want to have unstoppable decentralized apps that can even run servers? Then use pears (previously dat / hypercore). If you change your mindset, from server-based to peer to peer apps, then you can run hypercore in the browser, and optionally have people download it and run servers.

https://pears.com/news/building-apocalypse-proof-application...

  • >It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous.

    You can easily scale hard drive space independently of RAM by buying block storage separately and then mounting it on your Linode.
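    The mechanics are the usual ones for any attached volume; per Linode's docs the device shows up under /dev/disk/by-id (the volume label below is illustrative):

    ```shell
    # Format the new volume once (device path is illustrative)
    mkfs.ext4 /dev/disk/by-id/scsi-0Linode_Volume_mydata

    # Mount it and persist the mount across reboots
    mkdir -p /mnt/mydata
    mount /dev/disk/by-id/scsi-0Linode_Volume_mydata /mnt/mydata
    echo '/dev/disk/by-id/scsi-0Linode_Volume_mydata /mnt/mydata ext4 defaults 0 2' >> /etc/fstab
    ```

    Resizing the volume later doesn't touch the instance's RAM or CPU plan, which is the decoupling the parent comment is after.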

    • I think every VPS provider I have looked at any time recently (and I have been moving things in the last few weeks) offers some option for block storage separate from compute. Most offer an object storage option too.


I want more examples of people running the admin interface on prem and the user visible parts on the cloud.

I mean there are many places that sell multi AZ, hourly billed VPS/Bare Metal/GPU at a fraction of the cost of AWS.

I would personally have an account at one of those places and back up there, with everything ready to spin up instances and fail over if you lose your rack, and use them for any bursty loads.