I've got this feeling that the endless feature creep of GitHub has begun to cause rot of core essential features. Up until only recently, the PR review tab performed so poorly it was practically useless for large PRs.
GitHub isn't focusing on creating a good Git platform anymore, they are an AI company now
Bets on where everything/everyone goes next? Will it be like the transition from SourceForge to GitHub, where the center of gravity moves from one big place to another big place? Or more like Twitter, where factions split off to several smaller places?
> I've got this feeling that the endless feature creep of GitHub has begun to cause rot of core essential features.
Tangential, but... I was so excited by their frontend, which was slowly adopting web components, until after acquisition by Microsoft they started rewriting it in React.
(Design is still very solid though!)
I often miss entire files in the review process because the review page collapses them by default and makes them hard to spot. If they’re going to be collapsed by default at least make it very visible. This is critical for security too, you don’t want people sneaking in code.
GitHub in essence is still pretty much the same, there's products that have feature creep but I wouldn't say GitHub does that.
I can't say that I'm having issues with the performance either. I work with large PRs too (especially when there are vendored dependencies), but I never ran into a show-stopping performance issue that would make it "useless".
> GitHub in essence is still pretty much the same, there's products that have feature creep but I wouldn't say GitHub does that.
I think we're using two different products. Off the top of my head, I can think of GitHub Projects (the Trello-like feature), GitHub Marketplace, GitHub Discussions, the complete revamp of the file viewer/editor, and all the new AI/LLM-based stuff baked into yet another feature known as Codespaces.
> I can't say that I'm having issues with the performance either. I work with large PRs too
Good for you. I suffered for maybe 4 years from this, and so have many others: https://github.com/orgs/community/discussions/39341
> there's products that have feature creep but I wouldn't say GitHub does that.
I remember GitHub from years ago. I still find myself looking for things that were there years ago but have since moved.
Also, GitHub search is (still) comically useless. I just clone and use grep instead.
I noticed this recently too when using Firefox.
Really?
https://github.com/features
The same since when?
Still doesn't read email, but it's close to that.
https://news.ycombinator.com/item?id=20165602
You can interact with a lot of GitHub via email
Yeah, I've switched to doing PR reviews in GoLand because their UI is dogshit slow if there are more than like 10 files to diff.
HN sure has changed. A few years ago there would be at least a dozen comments about installing Gitlab, including one major subthread started by someone from Gitlab.
We recommend Codeberg/Forgejo now since it is better in every way, and Gitlab went corpo.
Gitlab was always for profit.
And Forgejo doesn't have feature parity with GitLab at all. Neither does GitHub, for that matter.
Just take a look at how to push container images from a CI/CD pipeline in GitLab vs. Forgejo.
> We recommend Codeberg/Forgejo now since it is better in every way...
Lol.
> ...and Gitlab went corpo.
How else will they sustain/maintain such a product and compete with the likes of GitHub? With donations? Good luck.
Are those any better than self-hosted gitlab, or do you only mean central-hosted usage?
I've used self-hosted GitLab a bunch at work, it's pretty good there still. In my opinion GitLab CI is also a solid offering, especially for the folks coming from something like Jenkins, doubly so when combined with Docker executors and mostly working with containers.
I used to run a GitLab instance for my own needs; however, keeping up with the updates (especially across major versions) proved to be a bit too much, and it was quite resource-hungry.
My personal stack right now is Gitea + Drone CI + Nexus, though I might move over to Woodpecker CI in the future and also maybe look for alternatives to Nexus (it's also quite heavyweight and annoying to admin).
Having tried GitLab, it's a very poor product, almost unmaintainable as a self-hosted option. Reminds me of the Eclipse IDE: crammed with every unnecessary feature/plugin while the basic features are either very slow or buggy.
At this point GitLab is just there because capturing even a small X% of a huge million/billion-dollar market is good enough as a company, even if the product is almost unusable.
Not just HN, Gitlab has perhaps changed as well.
I wouldn't touch Gitlab at this point. I didn't change. They did.
Which is probably good, as otherwise they would be dead. Building products for self-hosting HN users isn't really a big money maker.
I guess they let copilot review their code
Well, the CEO did say to embrace AI or get out of code, 2 days ago... And MS previously said AI is not optional for their devs...
Maybe they are trying vibeops now.
At Microsoft vibeops is an age old tradition.
After writing it :)
They do actually
did all of the devs leave?
https://www.businessinsider.com/github-ceo-developers-embrac...
> Instead of selling products based on helpful features and letting users decide, executives often deploy scare tactics that essentially warn people they will become obsolete if they don't get on the AI bandwagon. For instance, Julia Liuson, another executive at Microsoft, which owns GitHub, recently warned employees that "using AI is no longer optional."
So many clowns. It's like everyone's reading from the same script/playbook. Nothing says "this tool is useful" quite like forcing people to use it.
It definitely feels like the imbecility of the corporate class has reached new levels.
> It's like everyone's reading from the same script/playbook.
I'd assume that many CEOs are driven by the same urge to please the board. And depending on your board, there might be people on it who spend many hours per week on LinkedIn, see all the success stories around AI, and maybe experienced something firsthand.
Good news: from my estimate, it's only a phase, like when blockchain hit and everyone wanted to be involved. This time - and that worries me - the resources involved are more expensive, though. There might be a stronger incentive for people to "get their money back". I haven't thought about the implications yet.
People are biased toward using tools they are familiar with. The idea that if a tool were useful, people would use it is simply false. To avoid being disrupted, extra effort needs to be made to get people to learn new tools.
From the CEO's article referenced in that post [1]:
> the rise of AI in software development signals the need for computer science education to be reinvented as well.
> Teaching in a way that evaluates rote syntax or memorization of APIs is becoming obsolete
He thinks computer science is about memorizing syntax and APIs. No wonder he's telling developers to embrace AI or quit their careers if he believes the entire field is that shallow. Not the best person to take advice from.
It's also hilarious how he downplays fundamental flaws of LLMs as something AI zealots, the truly smart people, can overcome by producing so much AI slop that they turn from skeptics into ...drumroll... AI strategists. lol
[1]: https://ashtom.github.io/developers-reinvented
Reminder that GitHub _still_ does not support IPv6: https://github.com/orgs/community/discussions/10539
I contacted GitHub support about this and they assured me they understand it's a priority and are working on it. Three years ago.
Surely their LLM can work this out
cheaper than layoffs.
Devs leaving can often be a stability boost :)
But if that's what they want, they may be driving out the exact wrong subset of their devs.
Right up until it isn't.
I use gitea on a server in my basement because I don't trust these hosted solutions to not use my code for LLM training or who knows what else.
Me too. I have it mirroring stuff from github too for occasions just like this.
Given Github's critical role in software engineering delivery, their SLA commitments are really quite poor, perhaps unacceptable.
luckily, git itself works pretty well when there's an outage
sucks for people that use issues/PRs for coordination and had a planning meeting scheduled, though
It is critical for those who choose to use it.
If you deliberately decide to use a system that introduces a single point of failure into a decentralised system, you have to live with the consequences.
From their point of view, unless they start losing paying users over this, they have no incentive to improve. I assume customers are happy with the SLA, otherwise why use Github?
Network effects are quite strong
I miss the days where downtime would be like half a day or more and you could use it as an excuse to go home or do something else.
Weirdly people were less angry about it back then than we seem to be today.
That's because people can't handle speed. With a natural delay, they could cool down or at least become more detached. Society needs natural points where people are forced to detach from what they do. That's one reason why AI and high-speed communications are so dangerous: they accelerate what we do too quickly to remain balanced. (And I am speaking in general here, of course there will be a minority who can handle it.)
It was like a snow day! So fun.
Guess it's time to embrace AI.
Context: https://www.finalroundai.com/blog/github-ceo-thomas-dohmke-w...
... or get out.
Good thing I always commit directly to the main branch.
this is broken too!
Good thing we're using a shared Samba drive and editing files directly without locks!
Good thing we just SSH into production and make the changes live.
They must mean their local main branch.
(Presumably?) related ongoing thread:
Why is GitHub UI getting slower? - https://news.ycombinator.com/item?id=44799861 - Aug 2025 (76 comments)
I didn’t think the code I just merged was that bad
Why is this linking to a merged PR, or a PR at all, and not a status page?
It must be back up!
Does not impact me, because my team and I self-host Forgejo for all our work.
People seem to forget Git was meant to be decentralized.
Yes, but you may work with other people, other organizations, or at least depend on open source code that's hosted on GitHub.
I agree with the sentiment though.
I found this hilariously confusing when I first heard about DVCSs.
I'm like ok... So they're "distributed".. how do I share the code with other people? Oh..I push to a central repository? So it's almost exactly like SVN? Cool cool.
Do work in, and rely on, self-hosted forks so you are not blocked, and upstream your changes when code submissions become possible again.
Props to Github for having an accurate status page. AWS and Google should take note.
Status page says "Incident with Pull Requests". Pull requests status is listed as "Normal". Status text says issue with degraded performance for Webhooks and Issues, does not mention Pull Requests.
I would give that a 5/10 accuracy at best!
The status page has been updated. PR and webhook statuses are red and now listed as "Incident".
(Disclosure: GitHub employee)
they've updated the page since then. Take a look
As someone who is partially responsible for supporting GitHub at a very large organization: no, it isn't. At least not until the incident is at least 30 minutes old, if ever.
Wait, wasn't GitHub a company run by the guy who just two days ago said that devs should either embrace AI or leave the field?
https://www.developer-tech.com/news/embrace-ai-or-leave-care...
Maybe his developers embraced AI a bit too much? Or maybe they left the field?
Does GitHub development happen on GitHub? And if the fix for broken pull requests requires a pull request would they have a way to review it...
I worked there for 3 years, and yes, GitHub development happens on github.com. Of course there are ways to deploy and roll back changes while the site is down, but that's very unusual. The typical flow happens on github.com and uses the regular primitives everybody uses: PRs, CI checks, etc.
The pipeline for deploying the monolith doesn't run in GitHub Actions, though, but in a service based on Jenkins.
Fun fact: playbooks for incidents used to be hosted on GitHub too, but we moved them after an incident that made it impossible to access them while it lasted.
> that made it impossible to access them
Couldn't they just be checked out by cron on any number of local machines hosting Apache?
If GitHub Enterprise Server is anything to go by, they build (almost) everything for containers, and the entire site is hosted in containers managed by Nomad. So there are probably lots of older images around that they can fall back on if the latest image of any container causes problems.
How they would deploy the older container, I don't know.
A lot of this is guesswork; I don't work for them or anything. And I know that GHES, in the way that my employer manages it, is very unlike the way that GitHub hosts github.com, so everything I've assumed could be wrong.
Give your best estimate of how much dollar value of creation is wasted for every hour GitHub PRs are down.
I estimate that on some days an outage like this could ultimately save some businesses money.
There's a lot of cowboy development going on out there. Why not take this opportunity to talk to your customers for a bit? Make sure you're still building the right things.
At a startup, sure.
At any decently-sized org, the developers are not allowed to talk to customers of their own accord.
> There's a lot of cowboy development going on out there
This has been the case before VCSes existed.
$2000
One MILLION dollars *puts pinky to corner of mouth*
https://www.githubstatus.com/ "git operations: degraded", my git operations are degraded by default
This is why I recommend decentralized protocols like Radicle, or I guess I hope that tangled.sh could fix this stuff too.
I am not sure about tangled.sh; I might ask them in their Discord about this now, y'know.
Git is a decentralized protocol, it's just incomplete IMO
There is git format-patch [1] to create a diff, git send-email [2] to mail it to another developer, and git am [3] to apply the patches from a mailbox.
The Linux kernel developers have used that workflow for a long time, and maybe still do.
[1] https://git-scm.com/docs/git-format-patch
[2] https://git-scm.com/docs/git-send-email
[3] https://git-scm.com/docs/git-am
Communication layer agnostic text files is a killer feature of git. What MS is doing with Github is typical EEE.
Git and GitHub are not the same thing. git repos can live independently of GitHub
What features do you feel like git is missing?
Git has a protocol called email.
Radicle.xyz fixes this with COBs (Collaborative Objects). They're stored inside your git repo as normal objects, and benefit from its p2p mechanism as well. It's the true sovereign forge.
> it's just incomplete
Why?
Set up a second remote on Bitbucket or another host and synchronize through that. Pipelines, etc. might be missing, but at least development can proceed.
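A sketch of that setup, using a local bare repo to stand in for the second host (the remote name and all paths here are invented):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A bare repo standing in for the second host (e.g. Bitbucket).
git init -q --bare backup.git

# The working repo gets a second remote and mirrors branches and tags to it.
git init -q work
cd work
git -c user.email=d@example.com -c user.name=D \
    commit -q --allow-empty -m "wip"
git remote add backup ../backup.git
git push -q backup --all
git push -q backup --tags
```

One common trick to keep the spare warm is `git remote set-url --add --push` on the primary remote, so a single `git push` updates both URLs.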
Not a good look when they're heavily pushing AI agents.
It used to take a whole team of developers to take down production, now, one programmer with a fleet of agents can do it in 1/10th the time!
I'll be waiting expectantly for the post-mortem of this. How ironic would it be if this issue were caused by a pull request itself?
At first I thought this meant that the absolute count of pull requests was trending down and this could be a new BLS data point.
https://radicle.xyz is the future!
weird... this is redirecting me to `Privacy Statement Updates September 2022 #582`
https://github.com/github/site-policy/pull/582
It was probably just an example.
Dupe: https://github.com/github/site-policy/pull/582
Thread on HN: https://news.ycombinator.com/item?id=44799435
Since the current submission has the clearer URL, we'll merge the comments hither. Thanks!
don't wanna be spreading fake news, but I wonder if this is related to a Cloudflare issue? I've been unable to log in to Cloudflare for the past ~30 minutes. And: https://www.cloudflarestatus.com/
https://www.githubstatus.com
(+WebHooks) (+Issues)
This is strange: I was just having issues with pull requests on Bitbucket too. Coincidence, actually?
It's all a central svn in AWS
An email-based workflow does have a few benefits, like mitigating this kind of issue.
No excuse. git-send-email out and stop slacking :)
Seems total downtime was from 15:51 to 16:14 UTC
Right in the middle of a huge rebase. Great!
How does an outage of a remote repo affect your local rebase?
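It doesn't: a rebase only rewrites local refs and objects, which a throwaway repo with no remote at all demonstrates (branch names below are made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A purely local repo: no remote configured anywhere.
git init -q -b main demo
cd demo
g() { git -c user.email=x@example.com -c user.name=X "$@"; }
g commit -q --allow-empty -m "base"

# Diverge: new work on a feature branch and on main.
g checkout -q -b feature
echo a > a.txt && git add a.txt && g commit -q -m "feature work"
g checkout -q main
echo b > b.txt && git add b.txt && g commit -q -m "main moved on"

# The rebase replays feature onto main using only the local object store.
g checkout -q feature
g rebase -q main
```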
It’s never a good time for GitHub to be down!
GitHub gives everyone an extra long lunch.
Early EOD for me!
how many more years of this before people realize it's actually not good at all?
It's up now.
Uh.. pub?
Already in it. Was a premonition.