GitHub Is Having Issues

(githubstatus.com)

A directory over SSH can be your git server. If your CI isn't too complex, a post-receive hook that kicks off a Docker build can be enough. I wrote about self-hosting git and builds a few weeks ago[1].
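
A minimal sketch of that kind of hook, assuming a bare repo at /home/git/app.git deploying into /srv/app (paths and compose layout are illustrative, not from the linked post):

    #!/bin/sh
    # post-receive: deploy pushes to main by updating a work tree
    # and rebuilding with Docker Compose.
    set -e
    WORK_TREE=/srv/app
    GIT_DIR=/home/git/app.git
    while read oldrev newrev refname; do
      if [ "$refname" = "refs/heads/main" ]; then
        git --work-tree="$WORK_TREE" --git-dir="$GIT_DIR" checkout -f main
        # Output streams back to the pusher over SSH.
        docker compose -f "$WORK_TREE/docker-compose.yml" up -d --build
      fi
    done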

There are heavier solutions, but even setting something like this up as a backstop might be useful. If your blog is being hammered by ChatGPT traffic, spare a thought for Github. I can only imagine their traffic has ballooned phenomenally.

1: https://duggan.ie/posts/self-hosting-git-and-builds-without-...

  • Doesn't post-receive block the push operation and get cancelled when you cancel the push?

    • It does; you're just running a command over SSH, so if you have a particularly long build, something more involved may make more sense.

Insert the standard comment about how git doesn't even need a hub. The whole point of it is that it's distributed and doesn't need to be "hosted" anywhere. You can push or pull from any repo on anyone's machine. Shouldn't everyone just treat GitHub as an online backup? There's zero reason its being down should block development.
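
For example, any reachable clone can serve as a remote (host and path here are made up):

    # Pull a colleague's work directly over SSH, no hub involved:
    git remote add alice ssh://alice@alice-desktop.local/home/alice/project
    git fetch alice
    git merge alice/main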

  • The problem is that any kind of automated code-change process (CI, PRs, code review, deployments, and so on) is based on having a central git server. Even security may be based on SSO roles synced to GH that allow access to certain repos.

    A self-hosted git server is trivial. Making sure everything built on top of it can fall back to that server is not, especially when GH provides so many integrations out of the box.

In moments like this, it's useful to have a "break glass" mode in your CI tooling: a way to run a production CI pipeline from scratch, when your production CI infrastructure is down. Otherwise, if your CI downtime coincides with other production downtime, you might find yourself with a "bricked" platform. I've seen it happen and it is not fun.

It can be a pain to set up a break-glass path, especially if you have a lot of legacy CI cruft to deal with. But it pays off in spades during outages.

I'm biased because we (dagger.io) provide tooling that makes this break-glass setup easier, by decoupling the CI logic from CI infrastructure. But it doesn't matter what tools you use: just make sure you can run a bootstrap CI pipeline from your local machine. You'll thank me later.

  • It’s a hard sell. I always get blank looks when I suggest it, and often have to work off the books to get us there.

    I generally recommend that the break-glass solution always be pair programmed.

  • This is a must when your systems deal with critical workloads. At Fastly, we process a good chunk of the internet's traffic and can't afford to be "down" while waiting for the CI system to recover in the event of a production outage.

    We built a CI platform using dagger.io on top of GH Actions, and the "break glass" pattern was not an afterthought; it was a requirement (and one of the main reasons we chose Dagger as the underlying foundation of the platform in the first place).

  • 100%. We used to design the pipeline in a way that is easily reproducible locally, e.g. one that doesn’t rely on plugins of the CI runtime. Think a build.sh shell script, normally invoked by the CI runner but just as easy to run locally.
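
    Something like this, with illustrative names:

      #!/bin/sh
      # build.sh: single entrypoint for CI and laptops alike.
      # No CI-runtime plugins; just plain commands.
      set -e
      docker build -t app:ci .
      docker run --rm app:ci make test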

    • My automation is always an escalation of a runbook that has gotten very precise and handles corner cases.

      Even if I get the idea for an automation before there’s a runbook for it.

  • A while back I think I heard you on a podcast describing these pain points. I've experienced them myself; it sounded like a compelling solution. I remember the Dagger docs being all about AI a year or two ago, and frankly it put me off, but that seems to be gone now. Is your focus back on CI?

    • Yes, we are re-focused on CI. We heard loud and clear that we should pick a lane: either a runtime for AI agents or deterministic CI. We picked CI.

      Ironically, this makes Dagger even more relevant in the age of coding agents: the bottleneck is increasingly not the ability to generate code but the ability to reliably test it end-to-end. So the more we all rely on coding agents to produce code, the more we will need a deterministic testing layer we can trust. That's what Dagger aspires to be.

      For reference, a few other HN threads where we discussed this:

      - https://news.ycombinator.com/item?id=46268265


What’s interesting about outages like this is how many things depend on GitHub now beyond just git hosting. CI pipelines, package registries, release automation, deployment triggers, webhooks — a lot of infrastructure quietly assumes GitHub is always available. When GitHub degrades, the blast radius is surprisingly large because it breaks entire build and release chains, not just repo browsing.

  • > a lot of infrastructure quietly assumes GitHub is always available

    Which is really baffling when talking about a service that has at least weekly hiccups even when it's not a complete outage.

    There are almost 20 outages listed on HN from the past two months (https://news.ycombinator.com/from?site=githubstatus.com). So much for “always available”.

    • Part of it is probably historical momentum. GitHub started as “just git hosting,” so a lot of tooling gradually grew around it over the years — Actions, package registries, webhooks, release automation, etc. Once teams start wiring all those pieces together, replacing or decoupling them becomes surprisingly hard, even if everyone knows it’s a single point of failure.

I swear this is my fault. I can go weeks without doing infra work. Github does fine, I don't see any hiccups, status page is all green.

But the day comes that I need to tweak a deploy flow or update our testing infra, and about halfway through the task I take the whole thing down. It's gotten to the point where, when there's an outage, I'm the first person people ask what I'm doing... and it's pretty dang consistent.

I would so very much love to see GitHub switch gears from building stuff like Copilot and focus on availability instead.

  • This is an absurd state for them to be in! Weekly outages in 2025 and 2026. The slide from developer-beloved and very solid to Microslop happened faster than I expected.

    • They may have been beloved before MS bought them. It takes a while for technical debt to catch up.

  • I think GitHub shipping Copilot while suffering availability issues is a rational choice: they get more measurable business upside from a flashy AI product than from another uptime graph. In my experience, the only things that force engineering orgs to prioritize uptime are public SLOs with enforced error budgets that can halt rollouts, plus solid observability (Prometheus metrics, OpenTelemetry tracing), canary rollouts behind feature flags, multi-region active-active deployments, and regular chaos experiments to surface regressions. If you want them to change, push for public SLOs or pay for an enterprise SLA; otherwise, accept that meaningful uptime improvements cost money and will slow down the flashy stuff.

Codeberg might be a little slower on the git CLI, but at least it's not becoming a weekly 'URL returned error: 500' situation...

  • These days it feels like people have simply forgotten that you can just have a bare repository on a VPS and use it over SSH.

    • I've found that a bare repo over SSH is the simplest way to keep control and reduce attack surface, especially when you don't need fancy PR workflows. I've run many projects with git init --bare on a Debian VPS, controlled access with authorized_keys and git-shell, and written a post-receive hook that runs docker-compose pull and systemctl restart so pushes actually deploy. The tradeoff is that you lose built-in PRs, issue tracking, and easy third-party CI, so either add gitolite or Gitea for access control and a simple web UI, or accept writing hooks, backups, receive.denyNonFastForwards, and scheduled git gc yourself to avoid surprises at 2 AM.
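
      The one-time server setup is short; a sketch with made-up names (note git-shell must be listed in /etc/shells for chsh to accept it):

        # On the Debian VPS, as root:
        adduser --disabled-password git
        su - git -c 'mkdir -p repos && git init --bare repos/project.git'
        chsh -s "$(command -v git-shell)" git   # lock the account to git commands
        # On a client, after adding your key to ~git/.ssh/authorized_keys:
        git remote add origin git@vps.example.com:repos/project.git
        git push -u origin main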

  • I rarely successfully get Codeberg URLs to load. Which is sad because I actually would very much like to recommend it but I find it unreliable as a source.

    That being said, GitHub is Microsoft now, known for that Microsoft 360 uptime.

    • I have never had this issue. IIRC Codeberg has a Matrix community; they are a non-profit and would absolutely love to hear your feedback. I hope you can find their Matrix community, join it, and talk with them.

      Actually, here you go, here's the Matrix link to their community; hope it helps: https://matrix.to/#/#codeberg-space:matrix.org

    • > Microsoft 360 uptime

      I mean... It's right in the name! It's up for 360 days a year.

> This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

Does anyone know where these "detailed root cause analysis" reports are shared? Is there maybe an archive?

Maybe we should turn these weekly posts into something actionable we can use to move organizations away from this critical infrastructure that is failing in real time.

I really wish Graphite had gone down the path of better git hosting and reviewing instead of trying to charge me $40 a month for an AI reviewer. It would be nice to have a real first-class alternative to GitHub.

I've taken to hosting everything critical like this myself, on a single system with Docker Compose, with regular off-premises backups and a restore process that I know works because I test it every six months. I can swap from local hosting to a VPS in 30 minutes if I need to.

It seems like the majority of large services like GitHub have had increasingly annoying downtime while I try to get work done. If you know what you're doing, it's a false premise that you'll just have more issues with self-hosting; if you don't know what you're doing, it's becoming an increasingly good time to learn. I've had 4 years of continuous uptime on my services at this point. I still push to third parties like GitHub as yet another backup, see the occasional 500, and my workflow keeps chugging along. I've gotten old and grumpy and would rather just do it myself.
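
For what it's worth, the restore drill itself can be a small script; a sketch with made-up paths and hosts:

    #!/bin/sh
    # Restore drill: fetch the newest off-site backup and bring the
    # stack up in a throwaway compose project to verify it works.
    set -e
    latest=$(ssh backup@offsite.example.com 'ls -t backups | head -1')
    scp backup@offsite.example.com:"backups/$latest" /tmp/restore.tgz
    mkdir -p /tmp/restore && tar xzf /tmp/restore.tgz -C /tmp/restore
    docker compose -p restore-test -f /tmp/restore/docker-compose.yml up -d
    docker compose -p restore-test ps   # check services come up healthy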

How reliable is githubstatus.com? I know that status pages are generally not updated until Leadership and/or PR has a chance to approve the changes; is that the case here?

Our health check queries githubstatus.com to determine why there may be a GHA failure and reports it, e.g.

Cannot run: repo clone failed — GitHub is reporting issues (Partial System Outage: 'Incident with Copilot and Actions'). No cached manifests available.

But if the status page isn't updated, we get more generic responses. Are there better approaches that you all employ (other than not using GHA, you silly haters :-))?
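
For reference, githubstatus.com is hosted on Statuspage, which exposes a JSON API; a minimal check (jq required) looks something like this:

    # Overall status plus any active incidents:
    curl -s https://www.githubstatus.com/api/v2/summary.json |
      jq -r '.status.description, (.incidents[] | "\(.impact): \(.name)")'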

  • Right now the page says Copilot and Actions are affected but I can't even push anything to a repo from the CLI.

    • Yep, I'm getting intermittent 500 errors on fetch and checkout operations in my CI right now. Like 1 in 2 attempts.

    • Agreed. I believe that's tracked under "Git Operations," and it's all green. I just started being able to push again a minute ago.

I am getting really tired of GitHub. Outages happen, that's a given, but on so much stuff they don't even care or try. GitHub is becoming the bottleneck in my agentic coding workflows. Unless I make Claude do it intelligently, I hit rate limits checking on CI jobs (5,000 API requests in an hour). Depot makes their CI so much better, but it is still tied to GitHub in a couple of annoying places.
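
If it helps anyone hitting the same wall, the remaining quota is easy to check with the gh CLI:

    # Remaining REST API quota for the authenticated user:
    gh api rate_limit --jq '.resources.core | "\(.remaining)/\(.limit) requests left"'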

PRs are a de facto communication and coordination bus between different code review tools; it's all a mess.

LLMs make it worse because I'm pushing more code to GitHub than ever before, and it just isn't set up to deal with this type of workload even when it's working well.

In many companies I worked for, there were a bunch of infrastructure astronauts who made everything very complicated in the name of zero downtime and sold it to management as "downtime would kill our credibility and our business." And then you have billion-dollar companies everyone relies on (GitHub, Cloudflare) who have repeated downtime, yet it doesn't seem to affect their business in any way.

  • It's a multitude of factors, but basically they can act like that because they are dominant in the market.

    The classic "nobody ever gets fired for buying IBM".

    If you pick something else and there's an issue, people will complain about your choice being wrong: you should have gone with the biggest player.

    Even if you provide metrics showing your solution's downtime is 1% of the big player's.

    Something like Cloudflare is so big and ubiquitous that when there's downtime, even your grandma is aware of it, because they talk about it in the news. So nobody will put the blame on the person who chose Cloudflare.

    Even if people decide to go back (I had a few customers asking us to migrate to other solutions or to build some kind of failover after the last Cloudflare incidents), it costs so much to find replacements with the same service level and to do the migration that, in the end, they prefer to eat the cost of the downtime.

    Meanwhile, if you're a regular player in a very competitive market, yes, every downtime will result in lost income and customers leaving, which can hurt quite a lot when you don't have hundreds of thousands of customers.

  • Businesses are incommensurate.

    GitHub is a distributed version control storage hub with additional add-on features. If peeps can’t work around a git server/hub being down, don’t know to have independent reproducible builds or integrations, and aren’t using project software wildly better than GitHub’s, there are issues. And for how much money? A few hundred per dev per year? Forget total revenue, the billions; the entire thing is a pile of ‘suck it up, buttercup’ with ToS to match.

    In contrast, I’ve been working for a private company selling patient-touching healthcare solutions and we all would have committed seppuku with outages like this. Yeah, zero downtime or as close to it as possible even if it means fixing MS bugs before they do. Fines, deaths, and public embarrassment were potential results of downtime.

    All investments become smart or dumb depending on context. If management agrees that downtime would be lethal, my prejudice would be to believe them, since they know the contracts and the sales perspective. If ‘they crashed that one time’ stops all sales, the 0% revenue makes being 30% faster than those astronauts irrelevant.

  • To be fair - it SUPER does. Being down frequently makes your competition look better.

    Of course, once you have the momentum it doesn't matter nearly as much, at least for a while. If it happens too much though, people will start looking for alternatives.

    The key thing to remember is that momentum is hard to redirect, but with enough force (reasons), it will be.

I have a bug bash in an hour and fixes that need to go in beforehand. So of course GitHub is down.

How many 9s is GitHub at now? 2?

You know that it's bad when the status page doesn't have the availability stats anymore.

Lowendtalk providers offering $7-per-year deals can provide more reliability than GitHub at this moment, and I am not kidding.

If anyone is using GitHub professionally and pays for GitHub Actions or any GitHub product, respectfully: why?

You can switch to a VPS provider and self-host Gitea/Forgejo in less time than you might think, and pay a fraction of a fraction of what you pay now.

The point is almost moot anyway, because GitHub is used by developers, and devs are so, so much more likely to be able to spin up a VPS, run Forgejo, and use a terminal. I don't quite understand the objection.

IIRC there are ways to run GitHub Actions workflows in Forgejo as well, even self-hosted, which uses https://github.com/nektos/act under the hood.
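
If you want to try act directly, the basic invocation is simple (it runs workflow jobs in local Docker containers):

    # Run the workflows that would trigger on a push, locally:
    act push
    # Or a single job by name ("build" is illustrative):
    act -j build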

People, the time when you spent hundreds of thousands of dollars and could expect basic service with no outages is over.

What you are going to get is outages and lock-in. Also, your open source project is getting trained on by the parent company of said git provider.

PS: If you do end up using Gitea/Forgejo, please donate to Codeberg/Forgejo/Gitea (Gitea is a company, though, whereas Codeberg is a non-profit). I think donating $1k to Codeberg would be infinitely better than paying $10k or $100k to GitHub.

I spent hours trying to figure out what was wrong with GHCR. &^$% GitHub.

I'm on the lookout for an alternative, this really is not acceptable.

So Tay.ai and Zoe are still wrecking GitHub infrastructure.

Should have self hosted.

The day ends in Y, water is wet. I really hate that GitHub doesn't have any real competition. Yes, I know about GitLab, but it isn't real competition.

GitHub has been shit lately. What the fuck is going on?