GitHub pull requests were down

8 days ago (githubstatus.com)

I've got this feeling that GitHub's endless feature creep has begun to rot its core features. Until very recently, the PR review tab performed so poorly that it was practically useless for large PRs.

  • GitHub isn't focused on building a good Git platform anymore; they're an AI company now

    • Bets on where everything/everyone goes next? Will it be like the transition from SourceForge to GitHub, where the center of gravity moves from one big place to another big place? Or more like Twitter, where factions split off to several smaller places?

      4 replies →

  • > I've got this feeling that GitHub's endless feature creep has begun to rot its core features.

    Tangential, but... I was so excited by their frontend, which was slowly adopting web components, until, after the acquisition by Microsoft, they started rewriting it in React.

    (Design is still very solid though!)

  • I often miss entire files in the review process because the review page collapses them by default and makes them hard to spot. If they're going to be collapsed by default, at least make that very visible. This is critical for security too: you don't want people sneaking in code.

  • In essence, GitHub is still pretty much the same. There are products that suffer from feature creep, but I wouldn't say GitHub is one of them.

    I can't say that I'm having issues with the performance either. I work with large PRs too (especially when there are vendored dependencies), but I've never run into a show-stopping performance issue that would make it "useless".

    • > In essence, GitHub is still pretty much the same. There are products that suffer from feature creep, but I wouldn't say GitHub is one of them.

      I think we're using two different products. Off the top of my head, I can think of GitHub Projects (the Trello-like feature), GitHub Marketplace, GitHub Discussions, the complete revamp of the file viewer/editor, and all the new AI/LLM-based stuff baked into yet another feature known as Codespaces.

      > I can't say that I'm having issues with the performance either. I work with large PRs too

      Good for you. I suffered from this for maybe four years, and so have many others: https://github.com/orgs/community/discussions/39341

    • > There are products that suffer from feature creep, but I wouldn't say GitHub is one of them.

      I remember GitHub from years ago. I still find myself looking for things that were there years ago but have since moved.

      Also, GitHub search is (still) comically useless. I just clone and use grep instead.

      2 replies →

  • Yeah, I've switched to doing PR reviews in GoLand because their UI is dog-slow if there are more than ~10 files to diff.

HN sure has changed. A few years ago there would be at least a dozen comments about installing Gitlab, including one major subthread started by someone from Gitlab.

  • We recommend Codeberg/Forgejo now since it is better in every way, and GitLab went corpo.

    • GitLab was always for-profit.

      And Forgejo doesn't have anywhere near feature parity with GitLab. Neither does GitHub, for that matter.

      Just take a look at how you push container images from a CI/CD pipeline in GitLab vs. Forgejo.

      3 replies →
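
      For reference, the GitLab side of that comparison is roughly a few lines of YAML: the built-in registry and the predefined CI_REGISTRY* variables do most of the work. A hedged sketch (the job name and docker:24 image tags are illustrative choices, not from the thread):

      ```yaml
      # .gitlab-ci.yml — build and push an image to the project's built-in registry.
      # CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD, CI_REGISTRY_IMAGE,
      # and CI_COMMIT_SHORT_SHA are all predefined by GitLab CI.
      build-image:
        image: docker:24
        services:
          - docker:24-dind
        script:
          - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
          - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
          - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
      ```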

    • > We recommend Codeberg/Forgejo now since it is better in every way...

      Lol.

      > ...and GitLab went corpo.

      How else will they sustain/maintain such a product and compete with the likes of GitHub? With donations? Good luck.

  • I've used self-hosted GitLab a bunch at work; it's still pretty good there. In my opinion, GitLab CI is also a solid offering, especially for folks coming from something like Jenkins, doubly so when combined with Docker executors and mostly working with containers.

    I used to run a GitLab instance for my own needs; however, keeping up with the updates (especially across major versions) proved to be a bit too much, and it was quite resource-hungry.

    My personal stack right now is Gitea + Drone CI + Nexus, though I might move over to Woodpecker CI in the future and also maybe look for alternatives to Nexus (it's also quite heavyweight and annoying to admin).

  • Having tried GitLab, it's a very poor product, almost unmaintainable as a self-hosted option. It reminds me of the Eclipse IDE: crammed with every unnecessary feature/plugin, while the basic features are either very slow or buggy.

    At this point GitLab is just there because even a small X% of a huge million/billion-dollar market is good enough for a company, even if the product is almost unusable.

  • I wouldn't touch GitLab at this point. I didn't change. They did.

    • Which is probably good, as otherwise they would be dead. Building products for self-hosting HN users isn't really a big money maker.

Did all of the devs leave?

https://www.businessinsider.com/github-ceo-developers-embrac...

  • > Instead of selling products based on helpful features and letting users decide, executives often deploy scare tactics that essentially warn people they will become obsolete if they don't get on the AI bandwagon. For instance, Julia Liuson, another executive at Microsoft, which owns GitHub, recently warned employees that "using AI is no longer optional."

    So many clowns. It's like everyone's reading from the same script/playbook. Nothing says "this tool is useful" quite like forcing people to use it.

    • > It's like everyone's reading from the same script/playbook.

      I'd assume that many CEOs are driven by the same urge to please the board. And depending on your board, there might be people on it who spend many hours per week on LinkedIn, see all the success stories around AI, and have maybe experienced something first-hand.

      Good news: by my estimate, it's only a phase, like when blockchain hit and everyone wanted to be involved. This time, though - and that worries me - the resources involved are more expensive. There might be a stronger incentive for people to "get their money back". I haven't thought about the implications yet.

      9 replies →

    • People are biased toward the tools they're familiar with. The idea that if a tool were useful, people would use it is simply false. To avoid being disrupted, extra effort needs to be made to get people to learn new tools.

      2 replies →

  • From the CEO's article referenced in that post [1]:

    > the rise of AI in software development signals the need for computer science education to be reinvented as well.

    > Teaching in a way that evaluates rote syntax or memorization of APIs is becoming obsolete

    He thinks computer science is about memorizing syntax and APIs. No wonder he's telling developers to embrace AI or quit their careers if he believes the entire field is that shallow. Not the best person to take advice from.

    It's also hilarious how he downplays fundamental flaws of LLMs as something AI zealots, the truly smart people, can overcome by producing so much AI slop that they turn from skeptics into ...drumroll... AI strategists. lol

    [1]: https://ashtom.github.io/developers-reinvented

I use Gitea on a server in my basement because I don't trust these hosted solutions not to use my code for LLM training, or who knows what else.

  • Me too. I have it mirroring stuff from GitHub too, for occasions just like this.
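
    For what it's worth, the plain-git equivalent of such a mirror is a `--mirror` clone refreshed periodically (e.g. from cron). A sketch, with a local repo standing in for the GitHub upstream:

    ```shell
    # A pull mirror: clone with --mirror once, then refresh with `remote update`.
    # Local paths stand in for the real GitHub/Gitea URLs.
    set -e
    tmp=$(mktemp -d)
    git init -q "$tmp/upstream"
    git -C "$tmp/upstream" -c user.email=a@b -c user.name=a commit -q --allow-empty -m "first"

    # One-time setup (in real life, point this at the GitHub URL):
    git clone --mirror -q "$tmp/upstream" "$tmp/mirror.git"

    # Later, a cron job just runs `remote update` to pick up new upstream commits:
    git -C "$tmp/upstream" -c user.email=a@b -c user.name=a commit -q --allow-empty -m "second"
    git -C "$tmp/mirror.git" remote update >/dev/null 2>&1
    ```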

Given GitHub's critical role in software engineering delivery, their SLA commitments are really quite poor, perhaps unacceptable.

  • Luckily, Git itself works pretty well when there's an outage.

    It sucks for people who use Issues/PRs for coordination and had a planning meeting scheduled, though.
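
    One way to stay unblocked is to keep a second remote around, so pushes have somewhere to go while the main forge is down. A sketch, with local bare repos standing in for the hosted remotes:

    ```shell
    # Two remotes for one working repo; push to whichever is reachable.
    set -e
    tmp=$(mktemp -d)
    git init --bare -q "$tmp/primary.git"   # stand-in for e.g. GitHub
    git init --bare -q "$tmp/backup.git"    # stand-in for a self-hosted mirror

    git init -q "$tmp/work"
    cd "$tmp/work"
    git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "work continues"
    git remote add origin "$tmp/primary.git"
    git remote add backup "$tmp/backup.git"

    # origin is "down"? Push the current branch to the backup instead:
    git push -q backup HEAD
    ```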

  • It is critical for those who choose to use it.

    If you deliberately decide to use a system that introduces a single point of failure into a decentralised system, you have to live with the consequences.

    From their point of view, unless they start losing paying users over this, they have no incentive to improve. I assume customers are happy with the SLA; otherwise, why use GitHub?

I miss the days when downtime would last half a day or more and you could use it as an excuse to go home or do something else.

Weirdly, people were less angry about it back then than we seem to be today.

  • That's because people can't handle speed. With a natural delay, they could cool down, or at least become more detached. Society needs natural points where people are forced to detach from what they do. That's one reason why AI and high-speed communications are so dangerous: they accelerate what we do too quickly for us to remain balanced. (And I'm speaking in general here; of course there will be a minority who can handle it.)

This doesn't impact me, because my team and I self-host Forgejo for all our work.

People seem to forget Git was meant to be decentralized.

  • Yes, but you may work with other people, other organizations, or at least depend on open source code that's hosted on GitHub.

    I agree with the sentiment though.

    • I found this hilariously confusing when I first heard about DVCSs.

      I'm like, OK... so they're "distributed"... how do I share the code with other people? Oh, I push to a central repository? So it's almost exactly like SVN? Cool cool.

    • Work in, and rely on, self-hosted forks so you are not blocked, and upstream your changes when upstream code submissions become possible again.

Props to GitHub for having an accurate status page. AWS and Google should take note.

  • The status page says "Incident with Pull Requests", yet Pull Requests status is listed as "Normal", and the status text mentions degraded performance for Webhooks and Issues without mentioning Pull Requests at all.

    I'd give that 5/10 for accuracy, at best!

  • As someone who is partially responsible for supporting GitHub at a very large organization: no, it isn't. At least not until the incident is at least 30 minutes old, if ever.

Does GitHub development happen on GitHub? And if the fix for broken pull requests requires a pull request, would they have a way to review it...

  • I worked there for 3 years, and yes, GitHub development happens on github.com. Of course there are ways to deploy and roll back changes while the site is down, but that's very unusual. The typical flow happens on github.com and uses the regular primitives everybody uses: PRs, CI checks, etc.

    The pipeline for deploying the monolith doesn't happen in GitHub Actions, though, but in a Jenkins-based service.

    Fun fact: playbooks for incidents used to be hosted on GitHub too, but we moved them after an incident that made it impossible to access them while it lasted.

      > that made it impossible to access them

      Couldn't they just be checked out by cron on any number of local machines hosting Apache?

      1 reply →

  • If GitHub Enterprise Server is anything to go by, they build (almost) everything as containers, and the entire site runs in containers managed by Nomad. So there are probably lots of older images around that they can fall back on if the latest image of any container causes problems.

    How they would deploy the older container, I don't know.

    A lot of this is guesswork; I don't work for them or anything. And I know that GHES, as my employer manages it, is very unlike the way GitHub hosts github.com, so everything I've assumed could be wrong.

Give your best estimate of how much dollar value of creation is wasted for every hour GitHub PRs are down.

  • I estimate that on some days an outage like this could ultimately save some businesses money.

    There's a lot of cowboy development going on out there. Why not take this opportunity to talk to your customers for a bit? Make sure you're still building the right things.

This is why I recommend decentralized protocols like Radicle, or I guess I hope that tangled.sh could fix this stuff too.

I'm not sure about tangled.sh; I might ask them in their Discord about this now, y'know.

It used to take a whole team of developers to take down production; now one programmer with a fleet of agents can do it in 1/10th the time!

I'll be waiting expectantly for the post-mortem on this. How ironic would it be if this issue were caused by a pull request itself?

At first I thought this meant that the absolute count of pull requests was trending down and could be a new BLS data point.

An email-based workflow does have a few benefits, like being unaffected by this kind of outage.
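
The core of that workflow is easy to sketch with plain git: commits travel as mailable patch files, so no central forge has to be up. Paths here are illustrative:

```shell
# Export a commit as a patch file (the thing you'd mail), to be applied
# elsewhere with `git am`. No hosting service involved.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "base"
echo hello > greeting.txt
git add greeting.txt
git -c user.email=a@b -c user.name=a commit -q -m "add greeting"

# Turn the latest commit into an emailable patch file:
git format-patch -1 -o "$tmp/out"
# The receiver runs: git am "$tmp/out"/*.patch
```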