I dunno how they compare, but we have been using Barman for a long time, very happily. We test our backups every night by restoring from Barman into a _nightly DB, which we then give out to users as a training/testing spot, so we know when it breaks. It hasn't broken in many years now. <3
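A nightly restore test like this can be wired up as a small cron script. A sketch, assuming a Barman server named `pg` and hypothetical hosts/paths (none of these names come from the comment above):

```shell
#!/bin/sh
# Nightly restore-test sketch (hypothetical server/paths): restore the latest
# Barman backup into a scratch "_nightly" instance so a broken backup fails loudly.
set -eu
barman check pg                                   # fail fast if archiving is broken
barman recover pg latest /var/lib/postgresql/_nightly \
  --remote-ssh-command 'ssh postgres@nightly-host'
ssh postgres@nightly-host \
  'pg_ctl -D /var/lib/postgresql/_nightly -w start'
# If any step fails, cron mails the error, so the backup is tested before it's needed.
```

Run from cron, a failure anywhere in the pipeline surfaces the same night instead of during an outage.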
I'm one of many wal-g maintainers; it's comparable. I've been inactive for the past few years, but I'm back in the managed Postgres game. I'm hoping to get support for pg17 incremental backups alongside wal-g's existing delta backups, where wal-g compares blocks itself. Be sure to use daemon mode.
Sad to see a competitor go. I think there's lots of room for improvement here, and C over Golang is particularly nice when Postgres wants to run on a system without overcommit.
Are you using WAL archiving? As far as I understand, pgBackRest and Barman can also use direct streaming from the DB (the same mechanism as replication); I didn't find any mention of this in the WAL-G documentation.
With WAL archiving you need to wait for a WAL segment to finish before it's backed up. With streaming backups the dead time is minimized. At least that's how I understand it; I haven't gotten to try it out in practice yet.
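To make the difference concrete, a sketch of both mechanisms (stanza, host, and directory names are made up):

```shell
# Segment-based archiving: postgresql.conf hands each *finished* 16MB WAL
# segment to the backup tool, so up to one segment of writes can be lost.
#   archive_mode = on
#   archive_command = 'pgbackrest --stanza=main archive-push %p'

# Streaming: pg_receivewal follows the WAL over the replication protocol as it
# is written, so the potential loss window is much smaller than a full segment.
pg_receivewal --host=db1 --username=replicator \
  --directory=/backups/wal --synchronous
```

With `--synchronous`, pg_receivewal flushes and reports WAL as it arrives rather than per segment.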
I have a moderately sized 2TB production database I have enjoyed using pgBackRest on, and was—this week—going to set it up on another 8TB database we have.
What's the next-closest thing? wal-g? barman? databasus? I only get to cosplay as a DBA.
We recently moved from Barman to pgBackRest. Our main complaint with Barman was that incremental backups used hardlinks. Locally that was great: we could back up our 7TB database and store only 20GB of changes the next day. But cloud storage has no concept of hardlinks, so replicating that data meant pushing 14TB. Also, at least when we last looked a while back, file compression applied only to the WAL files, unless you used the newer barman-cloud-backup tool, which we did not.
Also, pgBackRest lets you do the majority of the backup from a physical standby, which is VERY nice for taking the load off production.
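For reference, standby-first backups come down to a couple of lines in pgbackrest.conf. A minimal sketch with made-up host names and paths:

```ini
[global]
backup-standby=y                      # copy data files from the standby
repo1-path=/var/lib/pgbackrest

[main]
pg1-host=primary.internal             # primary (WAL still comes from here)
pg1-path=/var/lib/postgresql/16/main
pg2-host=standby.internal             # standby used for the file copy
pg2-path=/var/lib/postgresql/16/main
```

With `backup-standby=y`, pgBackRest copies the bulk of the data from the standby while coordinating start/stop and WAL with the primary.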
None of these seemed like issues until we looked at pgBackRest and suddenly realized how nice that would be.
I can beat you on the timing: I'd never used pgBackRest before, but I started setting it up on a project about two hours ago; by the time I'd finished, the README had been updated.
Not for PostgreSQL, but for MariaDB we run replicas in FreeBSD jails on a server with lots of ZFS space. The jailed Maria instances just stop every hour (so the DB flushes everything to disk), the host snapshots all of their data volumes, and then starts the jails back up. Within a minute or so they're fully caught up to the primaries again. Gives us months and months of recovery checkpoints.
It's great because it's a completely clean save from a shutdown state, so when we need a scratch copy of a database it only takes as long as cloning whatever snapshot we want (depending on how far back we need to go), then starting a scratch jail that runs from those cloned filesystems. When finished, just shut down the scratch jail and delete the clones; it's like it never happened.
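The hourly cycle described above is only a few commands. A sketch with hypothetical jail and dataset names:

```shell
#!/bin/sh
# Hourly snapshot of a jailed MariaDB replica (hypothetical names).
set -eu
service jail stop mariadb1                         # clean shutdown flushes everything to disk
zfs snapshot "tank/jails/mariadb1/data@$(date +%Y-%m-%d_%H%M)"
service jail start mariadb1                        # replica catches back up from the primary

# Scratch copy later: clone any snapshot and boot a throwaway jail from it.
#   zfs clone tank/jails/mariadb1/data@2025-06-01_0300 tank/scratch/maria
```

Snapshots are nearly free in space until data diverges, which is what makes months of checkpoints practical.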
A previous company I was at did this on the primary. It always seemed to work, but no one was really comfortable with it, largely because there wasn't much ZFS experience at the time and also because the process did not quiesce the database first. I think it's still a valid strategy, but not one I have had time to verify thoroughly.
**Backup types**
- **Logical** — Native dump of the database in its engine-specific binary format. Compressed and streamed directly to storage with no intermediate files
- **Physical** — File-level copy of the entire database cluster. Faster backup and restore for large datasets compared to logical dumps
- **Incremental** — Physical base backup combined with continuous WAL segment archiving. **Enables Point-in-time recovery (PITR)** — restore to any second between backups. Designed for disaster recovery and near-zero data loss requirements
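On the PostgreSQL side, the incremental/PITR combination above reduces to two archive settings plus a restore target. A sketch using pgBackRest (stanza name and timestamp are made up):

```shell
# postgresql.conf: hand every finished WAL segment to the backup tool
#   archive_mode = on
#   archive_command = 'pgbackrest --stanza=main archive-push %p'

# Take a full backup, then an incremental one
pgbackrest --stanza=main --type=full backup
pgbackrest --stanza=main --type=incr backup

# PITR: restore the base backup, then replay WAL up to a specific moment
pgbackrest --stanza=main --type=time \
  --target='2025-06-01 12:00:00+00' restore
```

Anything written after the chosen target time is discarded during recovery, which is what makes "restore to any second" possible.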
EDIT: It seems PITR was added this March (for PostgreSQL)
I was about to set up Postgres backups with pgbackrest very soon. It looked like the most mature solution for my use case. What I was aiming for was continuous backups to an object storage provider, without a central DB server but the backup tool directly installed on the Postgres server.
I'll have to look at the alternatives again, I think that was mostly WAL-G and Barman. It looks like Barman doesn't support direct backup to object storage, unfortunately. And I find the WAL-G documentation very confusing. What I'm looking for is WAL streaming and object storage support, to minimize the amount of data that can be lost and so I don't have to run my own backup server.
This is exactly what I was setting it up to do this morning. My research came down to this and WAL-G for the same reasons, and I picked pgBackRest over WAL-G because the documentation was clearer.
I think many people might be overlooking pgmoneta, it’s a powerful alternative and one of the most active backup projects in Google Summer of Code each year.
The number of maintainers is always smaller than the number of users for any successful project. GitHub displays the number of contributors as 57, I don't know if that's small or not.
How often are the consumers and users of tools like this also in positions to contribute financially? It's silly, but I can spin up $10,000 worth of Azure resources and nobody would mind (as long as they actually had a purpose, etc.). In contrast, I doubt I'd ever get a decision-maker to sign off on supporting an OSS project with even $50, even if we have tech that depends on it.
I’d think the lesson here is obvious, but maybe not.
If you thought this project had value, you could’ve contributed to it. You probably still could.
Or, if you think its value is worth $0 (to you), maybe it’s not really that sad (to you).
People are expressing sadness as if there was nothing to be done about it, but, of course, there’s a really straight-forward thing that could’ve been done about it (possibly still could).
It's such a strawman to claim that you cannot be sad when something disappears that you haven't contributed to financially or with your work. Someone can say they are sad that Notre-Dame burned down even if they never personally contributed to Notre-Dame.
Something burning down is a tragedy, beyond anyone's control. It's also possible to love something for its beauty, and be sad that a globally historic monument suffered such an act of god that the irreplaceable art and craftsmanship is gone forever.
Something closing down, perhaps because there was not enough money to sustain its continued operation, when tens of thousands or hundreds of thousands of people were using it? That's a perfectly appropriate time to remind folks, "if you like free software, consider donating to help sustain the almost full-time effort it takes to keep packages like this alive."
Op said, "this is sad [because] I've been using this," and the implication is, "I want to keep using this but now I can't because it's gone" and making the connection that "one way to prevent this from happening to other packages you like is to contribute financially."
I can use pgBackRest in my side project, which does not generate any money. Maybe my side project is another open-source project no one gives me money for, but I'm still contributing to the open-source ecosystem; maybe I reported bugs that helped everyone.
There are so many details and possible reasons to use open-source software without giving money, and your negative and naive comment totally misses them.
I wish it was easier to know which projects are in desperate need of funding because I love pgbackrest and totally would have donated here, and I suspect many others would have too :/
It's funny how developer time is considered free, but tokens are not.
In other words, when it comes to FOSS contribution, developer time can be donated but tokens can't; so as we move into the agentic-code era, all FOSS development carries a cost unless it is done purely by hand (which, more and more often, it isn't).
Not saying this is what is going on here but it's presumably a factor if the author was looking for an employer to sponsor development with his labor (and tokens).
I'm also using this project. Easy to configure and operate.
I am feeling a slight unease using such a recent project for things as important as the database. But the polished interface combined with the easy docker deployment made me use it anyway. Restores need some permission tuning on PostgreSQL but otherwise happy.
They are very proud of their GitHub star acquisition curve [0] and the "blessing" by Anthropic [1].
This is scary as a solo dev who builds on PostgreSQL. You pick a tool, trust it, build around it, and one day it stops. OSS sustainability is a real problem.
Really sad to see this. I had only recently learnt about this project, and was really impressed by it. I was planning to set it up this weekend (via autobase). I've also been under the impression that it's likely to be what powers the backups in RDS, Cloud SQL, etc., but I may have misunderstood.
pgbackrest is awesome, truly. Thank you so much for the work you've put into this project over the years, and I'm sad the crunchy data acquisition couldn't keep the project alive.
I won't say he should be working on it no matter what, but I believe it's a very good project, and I think, as always, community forks will be the only option when it stops working in the future.
> Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.
So this was the problem. I thought Snowflake would pick up sponsorship of this project, but since it is a competing database it doesn't really make much sense.
I really wish many critical OSS projects get the sponsorship they need to continue.
Otherwise the software industry is in real trouble.
Forking it just passes the buck onto another maintainer with the same problem, this time without the original creator maintaining it.
Very simple. Rename it to pgbackrest-AI and add the line:
"AI driven backups with smartest world class models optimizing every byte stored via deep AI analysis."
With that added, a million dollars is just chump change. YC alone would be adding them to all the seasons multiple times: summer, winter, monsoon, etc.
Postgres doesn't compete with Snowflake. Snowflake recently announced a Postgres DBaaS offering that integrates with Snowflake (and actually has pricing competitive with AWS RDS Postgres).
They're two non-competing verticals. It's a shame Snowflake decided to shrink Crunchy Data's community presence.
From what I can find, Postgres 17 [1] introduced incremental backups in pg_basebackup, refined in 18, but nowhere near the full feature set of pgBackRest. Is that what you meant? Having built-in incremental replication to S3-compatible storage would be great.
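For the curious, the PG 17 building blocks look like this. A sketch (paths made up; `summarize_wal = on` must be set in postgresql.conf first):

```shell
# Full base backup, then an incremental relative to its manifest (PG 17+).
pg_basebackup -D /backups/full --checkpoint=fast
pg_basebackup -D /backups/incr \
  --incremental=/backups/full/backup_manifest

# Reconstruct a restorable full backup from the chain.
pg_combinebackup -o /restore /backups/full /backups/incr
```

Note there's no built-in object-storage target; shipping these directories to S3 is still your problem, which is exactly the gap tools like pgBackRest filled.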
All of these various third-party backup tools build on those primitives. Mostly it's QOL stuff that you get from a third-party tool. We use Barman, very happily: https://pgbarman.org/
Postgres is very "unix-y" in that everything is a separate tool. It has backup interfaces and commands but doesn't ship with a comprehensive backup management solution.
I find it shocking (not really) that among the many BILLION dollar companies built on the back of Postgres there isn't enough sense to pay the salary of one dude to keep a project like this going forever.
thirteen years of blood, sweat, and late nights shipped into the void - respect to David Steele for keeping it real and pulling the plug clean rather than letting it rot in maintenance burden
This is the message the author posted on LinkedIn:
After a lot of thought, I have decided to stop working on pgBackRest. I did not come to this decision lightly. pgBackRest has been my passion project for the last thirteen years, and I was fortunate to have corporate sponsorship for much of this time, but there were also many late nights and weekends as I worked to make pgBackRest the project it is today, aided by numerous contributors. Every open-source developer knows exactly what I mean and how much of your life gets devoted to a special project.
Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.
Like everyone else, I need to make a living, and the range of pgBackRest-related roles is very limited. I can now consider a wider variety of opportunities, but those will not leave me time to work on pgBackRest, which requires a fair amount of time for maintenance, bug fixes, PR reviews, answering issues, etc. That does not even include time to write new features, which is what I really love to do. Rather than do the work poorly and/or sporadically, I think it makes more sense to have a hard stop.
I will post a notice of obsolescence and archive the repository. I imagine at some point pgBackRest will be forked, but that will be a new project with new maintainers, and they will need to build trust the same way we did.
Again, many thanks to all the pgBackRest contributors over the years. It was a pleasure working with you!
Damn, this sucks. Snowflake bought Crunchy Data, right? I know they largely did that because they wanted to push Crunchy Data's data lake extension past some proverbial finish line, as they've been competing with Databricks on features, but Snowflake's pressers about commitment to open source and Postgres in general (which of course nobody should take that seriously) feel even sadder when it blows out the floor underneath projects like this, which are undoubtedly part of the same Postgres extension ecosystem. Snowflake went after Crunchy Data for that _one_ extension while neglecting the broader world Crunchy Data was keeping alive. They can champion support for OSS and Postgres all they want, but they hurt the ecosystem here; it's kind of a slap in the face to the Postgres world.
many people here don't read the articles, and that's not going to change. (on today's internet, jumping from the site you want to be on to a site with unknown UX patterns is fraught)
but people here do read the comments, so having important details from the articles in comments here improves the quality of comments here, at least if you value staying on topic.
I wish the guy could have made a paid version so he could have continued it. Unfortunately, most people do not want to contribute financially to open source, especially when that open-source project becomes a paid product.
Do not yell at me, but... this is where genAI may be useful.
What if, bear with me, what if, after a certain amount of time and a certain number of "requests", a code library could be given to a genAI to maintain: no improvements, no extra features, just bug fixes? This could continue until either someone picks it up or the open-source solution becomes irrelevant (not enough "requests").
Metrics would help others who may want to rescue the project consider their options. E.g., the size of the user base would make it clear whether there's an immediate opportunity to work with the author to launch a paid backup service around the project, funding continued work on it.
Why not try to find a successor instead of archiving the repo and forbidding the use of the name? I'm sure with a 3.8k stars repo you'll find competent people willing to continue the work.
Sometimes you want to hang things on your wall and be done with it.
I'd personally do the same. I wouldn't want to be bothered by the future maintainers' choices and get feedback/flak for them. It's a well-known and well-respected practice to cycle the name with an "-ng" or "-ngx" suffix to signal that this is a newer project with a different set of maintainers.
MIT, while not my favorite license, doesn't give you free license to grab everything and run with it.
Honestly, in my eyes, 3.8K or 38K stars mean nothing, because Open Source is not about you [0], to begin with.
It is reasonable to ask a follow-up project/fork to take a different name. Naming your project, e.g., pgbackrest-ng, does not sound like too onerous a requirement, and it clearly communicates to users that the maintainers have changed (see paperless-ng/ngx as good examples of such a change).
Finding a successor is also neither easy nor cheap (in terms of time).
You'll also find plenty of would-be malware injectors, and who would want the responsibility of vetting a successor and telling the difference?
Because you will attract people who will want to take advantage of the trust these 3.8k stars signal to some people, for example, by means of supply chain attacks.
The Apache Foundation used to help with this sort of governance problem, didn't it? Though maybe pgbackrest isn't quite big and official enough to be the kind of software Apache takes on, and one certainly hears (increasing?) grumbles about Apache's stewardship.
There's no way to know if a new maintainer will live up to whatever standards they've kept to date. Archiving should be the default decision, unless there's formal and elaborate handover.
A maintainer that is mainly motivated by the 3.8k stars aspect is probably not the person you want. Working on critical OSS software is fun until it's not, especially when you are not paid for that work.
Those people can just as easily fork it and make a new name then. Otherwise you end up with situations where it's actually an entirely new thing under new developers under the same name. Even riskier in the age of the "AI clean rewrite"
> TL;DR: pgBackRest is no longer being maintained. If you fork pgBackRest, please select a new name for your project.
> I imagine at some point pgBackRest will be forked, but that will be a new project with new maintainers, and they will need to build trust the same way we did.
I completely understand having to back out of maintenance on an OSS project, but why also slam the door closed on someone taking over? There may be someone very qualified willing to step up, and that could give your existing users continuity.
This feels analogous to deciding to stop maintaining a community garden, but rather than let your neighbor step up, you salt the ground so nothing can ever grow there again, telling your neighbors, "you can pull up my plants and move them, but you can't use all the ground and roots that are already there." It just feels bitter.
To me it reads as being worried that someone malicious could step in and use the project's name to do harm. If you don't have someone within the project with trust built ready-to-go, establishing that trust enough to hand over the project is a big task.
I totally agree, that is a huge risk. But what if someone from the Postgres team decided to step up and maintain it? I'm not saying that's likely, but it is possible for a very popular tool like this. With the way the project exited, that would not be an option at all. Obviously, if Postgres themselves decided to do it, they wouldn't need the previous credibility, so this isn't the best example.
From the story told in the README it is clear this is a project run by a single person. There is no wider maintenance team that can be trusted with continuing the project. So anyone who offers to take up the maintenance will be unknown to the current maintainer and cannot automatically be trusted.
The alternative to this seemingly bitter approach is handing over the trust they built to some unknown person who can do whatever they want with the data in a lot of PostgreSQL databases around the world. I think I prefer the bitterness here over blind trust.
Sure, but what if someone from the postgres team decided they wanted to step up? The door is completely shut for that now. And if we can't trust someone from the postgres team to do it, then who can we trust?
I think this is overly harsh. After the guy has been working on the project for such a long period a handover would inevitably be a long process, not least to ensure whoever took over didn't abuse the existing user-base. Completely fair if the existing maintainer doesn't want to take on this work, and arguably a fork forces consumers to properly consider that someone else is in charge now.
Is it really that much effort to maintain something? I'll admit I haven't the foggiest; my most maintained thing has like 200 stars or something, but if I leave it alone for half a year it doesn't suddenly combust into flames.
So sad to see this happening..
I had just last year prepared a detailed guide to reliable Postgres backups, to a local volume as well as cloud storage, using pgBackRest for my own projects. pgBackRest has worked so well for me.
https://github.com/freakynit/postgre-backup-and-restore-guid...
Thanks to the author for all the time and effort he put into this project..
One thing people are not taking into account is that many developers now have less time and are working a lot more, because AI makes it seem like it should be possible to hit those deadlines, etc.
Also, many programmers have spent their entire funds on tokens, so they're left with neither extra money nor time.
Acquisitions change priorities and layoffs put the squeeze on people. AI is for sure in the mix there, but open source decay is a result of no room in budgets for anything but maximizing revenue.
I really wish projects like this didn't fall through the cracks and continued to be funded. The struggles of OSS are too real.
True.. I truly wish we had a better open-source license and that more open-source projects adopted it..
A tiered-pricing license, with tiers based on annual company revenue: it should start super low for small companies (free for individuals) and jump to thousands of dollars per year for companies with $10+ million in revenue.
I understand that this might not fully be in the spirit of open source, but what's happening currently is way worse: giant companies rip off the hard work of open-source maintainers without compensating them adequately.
The project is being abandoned because the maintainer is tired of working for free. They said that they hoped someone would fork it, change the name, and pick up where it was left off.
Why would anyone do that? If the person who was most passionate about it for over a dozen years has given up because it was never worth the trouble, what fool would think things will be different going forward?
This is the curse of OSS.
The struggles of living in an economic system while completely rejecting that system and pretending it isn't there.
Open source has worked fine here. The author didn't find financial support for the work, so they just want to change course, and that's a perfectly fine path forward.
If this is really much more than a personal project "for fun, in my leisure time", and it became an actually serious, product-level project that provides real value in commercial environments, there's clearly an opportunity for a for-profit company to step in and cover that niche. But that'd require that users become customers and actually part with their money to pay for it :)
I guess most will switch instead to asking who's the next project maintainer to work on it, to whom the new bug reports and complaints can continue to be sent for free. But if there's money to be made by using a tool, there should be money paid for using it too. We "just" need to find the new generation of FOSS Financial Sustainability solutions that actually work! Donations don't make the cut.
Something I learned about being a part of an ecosystem: if you want it, you need to support it and help it stay alive.
That applies to local shops as it does open source projects.
The project has never even had a donation button on its page, only a link with a few sponsors.
I wonder whether the author has considered taking the product to a paid level and what would be necessary for it.
Obviously, all contributors hold some form of copyright, which may or may not have been waived depending on whether there was a CLA in place and on the jurisdiction. So he would need to get permission from the copyright holders, maybe in exchange for a percentage of the profit.
Changing the license of already existing code? You might not be able to do that without permission from other contributors, I agree.
But it's MIT license. We can open a company tomorrow, take that code, and start selling it. Further development and improvements of the code could be trivially done openly or behind closed doors. FWIW the author themselves could do that if they wanted.
And that tends to get looked down on here as the authors being deceitful, not really open source, and doing a bait and switch.
The Crunchy Data part is what people should pay more attention to here. He had corporate sponsorship and it was working. Company got acquired, new owners didn't prioritize the same things, and now 3.8k-star critical infrastructure goes dark. Your backup tool's funding depended on someone else's M&A strategy and you had no idea.
I've been gradually moving my own stuff to SQLite and git-tracked files partly because of this. Every managed Postgres setup has a dependency tree of tools maintained by people whose funding situation you know nothing about.
It didn't go dark, and doesn't seem that critical in general.
General idea still stands, but it is not like this just disappeared and backups will stop working.
The favourite model I've seen: the main branch is free, licensed MIT or whatever, but if you want release artifacts that are tested, then you pay for them. You can always compile your own.
Why does sqlite not suffer from the same risk?
SQLite doesn’t depend on donations. They have a consortium, sell licenses (it is open source but some companies like the explicit CYA), sell support contracts, sell an aviation-grade test harness, and sell extensions.
Of course there is always the risk it goes out of business like any other company, but it’s not funded like your typical small open source project and doesn’t even allow open contributions (not necessarily a bad thing IMO but it’s just a totally different type of project).
9 replies →
It's an LLM comment; don't search too deeply for logical consistency.
They have more sponsors/clients so a single company changing direction wouldn't kill them. They also sell directly if you want to buy from them. But ultimately the risk still exists.
Because it's a single file you can back up like any other?
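A plain file copy can tear a live SQLite database mid-write, though; the online backup API (exposed in Python's `sqlite3` module) takes a consistent copy without stopping writers. A minimal sketch (file names are made up):

```python
import sqlite3

# SQLite's online backup API: copy a live database consistently, even
# while it is being written to (a plain `cp` can catch it mid-transaction).
src = sqlite3.connect(":memory:")          # stand-in for the live app.db
src.execute("CREATE TABLE t (x INTEGER)")
src.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
src.commit()

dst = sqlite3.connect("app-backup.db")     # the backup target file
src.backup(dst)                            # consistent snapshot, no downtime
dst.close()
src.close()
```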
2 replies →
"so sad to see this"
The source is still available. Maintaining your own copy and/or paying someone to do it is an option.
While you're at it, look at all the projects you depend on that you would similarly be sad about losing, and set up those donations today.
This is the right attitude. All the "this is sad" comments make me want to ask, "How sad are you? Sad enough to donate?"
For me the sad part about the story is that someone who clearly knows what they are doing wasn’t able to find a job that would have permitted him to continue working on the project and that there were insufficient sponsors from companies.
Not the fact that he made the decision he made.
Database backup tools are used primarily in an enterprise context. (In)ability to donate is not a function of personal spending preferences.
A fair number of people here work at orgs that could absolutely swing a couple of hundred bucks per month in sponsorship, licensing, or donations for a critical tool in their infra toolkit without a lot of effort.
Particularly so, with the rising frequency of "AI deleted my prod" posts.
Wow! pgbackrest was definitely the premier backup solution for postgres when I last looked at the ecosystem properly.
It was the only solution that seemed to take restoring and validating as seriously as "taking a backup", which led to an unfortunate situation with my employer. (details here: https://blog.dijit.sh/that-time-my-manager-spend-1m-on-a-bac...)
This is really a major loss. :(
Wow, this is pretty surprising, I was under the impression that this is the leading PG backup/recovery tool.
Anybody know how WAL-G and Barman compare?
https://github.com/wal-g/wal-g
https://github.com/EnterpriseDB/barman
I dunno how they compare, but we have been using barman for a long time very happily. We test our backups every night by restoring from barman into a _nightly DB, which we then give out to users as a training/testing spot, so that we know when it breaks. It hasn't broken in many years now. <3
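A nightly restore test like that can be a small cron script. A hedged sketch using Barman's `recover` command (instance names, hosts, and paths are all made up; `latest` is Barman's alias for the most recent backup):

```shell
#!/bin/sh
# Rebuild the _nightly instance from the latest Barman backup every night.
set -e
systemctl stop postgresql@nightly
rm -rf /srv/pg_nightly/data
barman recover pg_primary latest /srv/pg_nightly/data \
    --remote-ssh-command "ssh postgres@nightly-host"
systemctl start postgresql@nightly    # if this fails, the backup was bad
```

The point is that the restore path gets exercised daily, so a broken backup is noticed within a day rather than during a disaster.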
I'm one of many wal-g maintainers; it's comparable. I've been inactive for the past few years, but I'm back in the managed Postgres game. Hoping to get support for pg17 incremental backups alongside wal-g's existing delta backups, where wal-g compares blocks itself. Be sure to use daemon mode.
Sad to see a competitor go. I think there's lots of room for improvement here, and C over Golang is particularly nice when Postgres wants to run on a system without overcommit.
We've been happy with WAL-E and now WAL-G (successor). The streaming PITR nature of these won over pgbackrest when we did the analysis ~9 years ago.
Are you using WAL archiving? As far as I understand, pgbackrest and Barman can also use direct streaming from the DB (same mechanism as replication), I didn't find any mention of this in the WAL-G documentation.
With WAL archiving you need to wait for a WAL segment to finish before it's backed up. With streaming backups the deadtime is minimized. At least that's as far as I understand this, I didn't get to try this out in practice yet.
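The two approaches sketched out (settings and host names are illustrative; check your tool's docs):

```shell
# postgresql.conf -- segment-based archiving: WAL leaves the server only
# when a 16 MB segment completes (the "dead time" mentioned above):
#   archive_mode = on
#   archive_command = 'pgbackrest --stanza=main archive-push %p'

# Streaming alternative: pull WAL continuously over the replication
# protocol instead, so at most the unflushed tail is at risk:
pg_receivewal -h primary.internal -U replicator \
    -D /backup/wal --synchronous
```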
2 replies →
>Wow, this is pretty surprising, I was under the impression that this is the leading PG backup/recovery tool.
https://xkcd.com/2347/
I have a moderately sized 2TB production database I have enjoyed using pgBackRest on, and was—this week—going to set it up on another 8TB database we have.
What's the next-closest thing? wal-g? barman? databasus? I only get to cosplay as a DBA.
I've used barman on somewhat large-ish DBs (30+ TB), and had no complaints with it. I am a DBRE, if that holds any weight.
We recently moved from Barman to pgBackRest. Our main complaint with Barman was that incremental backups used hardlinks. Which was great: we could have our 7TB database backed up, and the next day, only 20GB in changes. But when replicating that data to cloud storage, there is no concept of hardlinks, so now we had to push 14TB to cloud storage. Also, at least last time we looked a while back, compression applied only to the WAL files, unless you used the newer barman-cloud-backup tool, which we did not.
Also, pgBackrest lets you do the majority of the backup from a physical standby, which is VERY nice for removing the load off production.
None of these seemed like issues until we looked at pgBackRest, and suddenly realized how nice that would be.
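For reference, the standby-offload behavior mentioned above is essentially a one-line option in pgBackRest. The option names below are real, but the hosts and paths are placeholders:

```shell
# /etc/pgbackrest/pgbackrest.conf (sketch)
# [global]
# backup-standby=y              # copy files from the standby, not the primary
#
# [main]
# pg1-host=primary.internal     # primary is still contacted for bookkeeping
# pg1-path=/var/lib/postgresql/16/main
# pg2-host=standby.internal     # standby does the bulk of the file copy
# pg2-path=/var/lib/postgresql/16/main
```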
1 reply →
barman seems to cover "Natural disaster" in their docs. Seems good.
I'll take a look. Thanks!
Backing up multi-terabyte production Postgres databases is not merely cosplaying, ha ha.
The "closest" would be using Barman with hook scripts (https://docs.pgbarman.org/release/3.18.0/user_guide/hook_scr...) if you rely on cloud storage for storing backups.
https://github.com/aiven-open/pghoard seems like a good option too, but I haven’t tested it yet to have a solid opinion.
I can beat you on the timing - I'd never used pgBackRest before, but started setting it up on a project about 2 hours ago, by the time I'd finished the README had been updated.
Anyone put the standby on ZFS or other filesystems that can take snapshots for backup?
Not for PostgreSQL, but for MariaDB we run replicas in FreeBSD jails on a server with lots of ZFS space. The jailed Maria instances just stop every hour (so the DB flushes everything to disk), the host snapshots all of their data volumes, and then starts the jails back up. Within a minute or so they're fully caught up to the primaries again. Gives us months and months of recovery checkpoints.
It's great because it's a completely clean save from a shutdown state, so when we need a scratch copy of a database it only takes as long as cloning whatever snapshot we want (depending on how far back we need to go), then starting a scratch jail that runs from those clone filesystems. When finished, just shut down the scratch jail and delete the clones; it's like it never happened.
A previous company I was at did this on the primary. It always seemed to work, but no one was really comfortable with it, largely because there wasn't much ZFS experience at the time and also because the process did not quiesce the database before doing it. I think it's still a valid strategy, but not one I have had time to verify thoroughly.
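The hourly cycle described above boils down to a few lines of shell; this is only a sketch, and the jail, service, and dataset names are all made up:

```shell
#!/bin/sh
# Hourly: clean-stop the jailed replica, snapshot its dataset, restart it.
set -e
jexec maria1 service mysql-server stop        # DB flushes everything to disk
zfs snapshot zroot/jails/maria1/db@"$(date +%Y-%m-%d_%H)"
jexec maria1 service mysql-server start       # replica catches up from primary
# Later, for a scratch copy:
#   zfs clone zroot/jails/maria1/db@<snap> zroot/scratch/db
```

Snapshotting a cleanly stopped instance sidesteps the quiescing concern, at the cost of a minute of replica downtime per hour.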
databasus does not do PITR.
Is that info up-to-date? Their readme states:
EDIT: It seems PITR was added this March (for PostgreSQL)
https://github.com/databasus/databasus/issues/411
pg_probackup seems to be another one.
pgbackrest is the most versatile piece of backup technology for PostgreSQL and in my experience the other products do not come close.
I am therefore quite sad to see this happen. It won't be easy to get feature parity with this great product.
I sincerely hope this is a reversible decision, or perhaps the postgres project could even absorb it into contrib.
It still works, you can just keep using it.
I think that’s what the author would want. People to keep using it until it doesn’t work anymore.
And hopefully someone wants to stand up then. Not sure whether it needs to be a fork or whether they can join as a contributor on the repo.
I was about to set up Postgres backups with pgbackrest very soon. It looked like the most mature solution for my use case. What I was aiming for was continuous backups to an object storage provider, without a central DB server but the backup tool directly installed on the Postgres server.
I'll have to look at the alternatives again, I think that was mostly WAL-G and Barman. It looks like Barman doesn't support direct backup to object storage, unfortunately. And I find the WAL-G documentation very confusing. What I'm looking for is WAL streaming and object storage support, to minimize the amount of data that can be lost and so I don't have to run my own backup server.
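For what it's worth, the setup described above (backups pushed straight from the Postgres server to object storage, no central backup host) is a few lines of pgBackRest config. The option names are real; the bucket, endpoint, and paths are placeholders:

```shell
# /etc/pgbackrest/pgbackrest.conf on the Postgres host itself (sketch)
# [global]
# repo1-type=s3
# repo1-s3-bucket=my-pg-backups
# repo1-s3-endpoint=s3.eu-central-1.amazonaws.com
# repo1-s3-region=eu-central-1
# repo1-path=/repo
# repo1-retention-full=2
#
# [main]
# pg1-path=/var/lib/postgresql/16/main

# Then, with archive_command = 'pgbackrest --stanza=main archive-push %p'
# in postgresql.conf, WAL is shipped continuously:
pgbackrest --stanza=main stanza-create
pgbackrest --stanza=main backup
```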
This is exactly what I was setting it up to do this morning. My research came down to this and WAL-G for the same reasons, and I picked pgBackRest over WAL-G because the documentation was clearer.
I think many people might be overlooking pgmoneta, it’s a powerful alternative and one of the most active backup projects in Google Summer of Code each year.
Highly recommended. Definitely worth taking a look: https://pgmoneta.github.io/
Plenty of comments of "So sad I have been using this".
How many actually contributed back to keep it going?
The number of maintainers is always smaller than the number of users for any successful project. GitHub displays the number of contributors as 57, I don't know if that's small or not.
How often are the consumers and users of tools like this also in positions to contribute financially? It's silly, but I can spin up $10,000 worth of Azure resources and nobody would mind (as long as they actually had a purpose, etc.). In contrast, I doubt I'd ever get a decision-maker to sign off on supporting an OSS project with even $50, even if we have tech that depends on it.
> How many actually contributed back to keep it going?
Or why not hire the guy?!
Seriously. Is nobody using this at a level where hiring the primary maintainer is a good idea?
If I didn't use Pgbackrest and never contributed to it, am I entitled to feel sadness?
I am not sure why you are gatekeeping this. People can't comment now that they are sad about what happened?
I’d think the lesson here is obvious, but maybe not.
If you thought this project had value, you could’ve contributed to it. You probably still could.
Or, if you think its value is worth $0 (to you), maybe it’s not really that sad (to you).
People are expressing sadness as if there was nothing to be done about it, but, of course, there’s a really straight-forward thing that could’ve been done about it (possibly still could).
Gatekeeping?!?
Those that paid, or made any kind of contribution upstream, are entitled to be sad.
Others should consider that this is what happens to that lego piece in Nebraska when no one contributes and everyone uses it.
5 replies →
It's such a strawman to claim that you cannot be sad when something disappears that you haven't contributed to financially or with your work. Someone can say they are sad that Notre Dame burned down even if they haven't personally contributed to Notre Dame.
That comparison is fallacious too, I think.
Something burning down is a tragedy, beyond anyone's control. It's also possible to love something for its beauty, and be sad that a globally historic monument suffered such an act of god that the irreplaceable art and craftsmanship is gone forever.
Something closing down, perhaps because there was not enough money to sustain its continued operation, when tens of thousands or hundreds of thousands of people were using it? That's a perfectly appropriate time to remind folks, "if you like free software, consider donating to help sustain the almost full-time effort it takes to keep packages like this alive."
Op said, "this is sad [because] I've been using this," and the implication is, "I want to keep using this but now I can't because it's gone" and making the connection that "one way to prevent this from happening to other packages you like is to contribute financially."
8 replies →
People can't be sad now?
This is such a Hacker News comment.
Not everything is about money.
I can use pgBackRest in my side project which does not generate any money. Maybe my side project is another open source project where no one gives me money, but I'm still contributing to the open source ecosystem; maybe I reported bugs which helped everyone.
There are so many details and possible reasons to use open source software without giving money, and your negative and naive comment totally misses them.
I wish it was easier to know which projects are in desperate need of funding because I love pgbackrest and totally would have donated here, and I suspect many others would have too :/
> know which projects are in desperate need of funding
keyword: desperate... until the metric becomes the target, and stops being a good metric.
It's funny how developer time is considered free, but tokens are not.
In other words, when it comes to FOSS contribution, developer time can be donated but tokens can't. So as we move into the agentic-code era, all FOSS development carries a cost unless it is purely done by hand (which, more often, it isn't).
Not saying this is what is going on here but it's presumably a factor if the author was looking for an employer to sponsor development with his labor (and tokens).
https://claude.com/contact-sales/claude-for-oss open source maintainers can get this.
ChatGPT also has something similar: https://developers.openai.com/community/codex-for-oss
I don’t understand this? I donate plenty of tokens towards OSS.
I guess it’s anthropic donating the tokens because they give me about $5k of API tokens for the $200 I pay them.
You could also donate money to allow the devs to spend it on tokens, right?
Been using databasus (https://github.com/databasus/databasus), works pretty well so far.
I'm also using this project. Easy to configure and operate.
I am feeling a slight unease using such a recent project for things as important as the database. But the polished interface combined with the easy docker deployment made me use it anyway. Restores need some permission tuning on PostgreSQL but otherwise happy.
They are very proud of their GitHub star acquisition curve [0] and the "blessing" by Anthropic [1].
But I have yet to verify the Anthropic claim.
[0] https://www.reddit.com/r/selfhosted/comments/1q94uu9/selfhos... [1] https://www.reddit.com/r/ClaudeAI/comments/1rklvr7/anthropic...
This project looks nice, albeit a bit young for a backup tool.
Did you encounter any issues or limitations?
Same, was really easy to set up.
This is scary as a solo dev who builds on PostgreSQL. You pick a tool, trust it, build around it, and one day it stops. OSS sustainability is a real problem.
I would say "unpaid tool use" is a real problem. Or perhaps "contractless" is actually touching the core issue?
True, but as a solo dev you can't afford to pay for every tool you depend on. That's the trap.
props to the author for such fine work.
hopefully some of the big co's step up & pay a retainer to keep the author going.
Really sad to see this. I had only recently learnt about this project, and was really impressed by it. I was planning to set it up this weekend (via autobase). I've also been under the impression that it's likely to be what powers the backups in RDS, Cloud SQL, etc., but I may have misunderstood.
Anyone looking for an alternative can try UFO Backup aka pgbackweb https://github.com/eduardolat/pgbackweb
pgbackrest is awesome, truly. Thank you so much for the work you've put into this project over the years, and I'm sad the crunchy data acquisition couldn't keep the project alive.
I won't say he should be working on it no matter what, but I believe it's a very good project, and I think, as always, community forks will be the only option when it stops working in the future.
> Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.
So this was the problem. I thought Snowflake would pick up sponsorship of this project, but since it is a competing database it doesn't really make much sense.
I really wish many critical OSS projects get the sponsorship they need to continue.
Otherwise the software industry is in real trouble.
Forking it just passes the buck onto another maintainer with the same problem, this time without the original creator maintaining it.
Very simple. Name it to pgbackrest-AI and add the line:
"AI driven backups with smartest world class models optimizing every byte stored via deep AI analysis."
With that added, a million dollars is just chump change. YC alone would be adding them to all the seasons multiple times over: summer, winter, monsoon, etc.
Even with sponsorship, it's not always appreciated such as Vercel backing Svelte, Vue, etc. https://www.reddit.com/r/reactjs/comments/1g4lu5p/am_i_seein...
The responses in there are dumb and childish.
I doubt that they have sponsored an OSS project or made it sustainable.
Postgres doesn't compete with Snowflake. Snowflake recently announced a Postgres DBaaS offering that integrates with Snowflake (and actually has competitive pricing with AWS RDS Postgres).
They're two non-competing verticals. It's a shame Snowflake decided to shrink Crunchy Data's community presence.
Ah, sad to read this. Does anyone know of good alternatives?
Postgres has built-in backups starting with version 18.
From what I can find, Postgres 17 [1] introduced incremental backups to pg_basebackup, refined in 18, but nowhere near the full feature set of pgBackRest. Is that what you meant? Having built-in incremental replication to S3-compatible storage would be great.
[1]: https://www.postgresql.org/docs/release/17.0/#:~:text=pg%5Fb...
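The PG17+ mechanics look roughly like this (requires `summarize_wal = on` in postgresql.conf; the paths are placeholders):

```shell
pg_basebackup -D /backups/full                      # full base backup
pg_basebackup -D /backups/incr1 \
    --incremental=/backups/full/backup_manifest     # changed blocks only
# Combine the chain into a restorable data directory:
pg_combinebackup -o /restore/data /backups/full /backups/incr1
```

Note there's no built-in object-storage target; shipping the result to S3 is still on you, which is much of what tools like pgBackRest handled.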
doesn't it still work?
Yes! But I'm assuming it will prevent me from upgrading to Postgres 19 in the future.
1 reply →
Does Postgres not have online backup built in? All of the other major DBMSes do.
See the documentation: https://www.postgresql.org/docs/current/backup.html
All of these various 3rd-party backup tools use these things; mostly it's QOL stuff that you get from a 3rd-party tool. We use Barman, very happily: https://pgbarman.org/
Hopefully barman has some longevity being under EDB assuming some hyperscaler doesn't gobble them up
1 reply →
Postgres is very "unix-y" in that everything is a separate tool. It has backup interfaces and commands but doesn't ship with a comprehensive backup management solution.
I find it shocking (not really) that among the many BILLION dollar companies built on the back of Postgres there isn't enough sense to pay the salary of one dude to keep a project like this going forever.
wild isn't it
We're going to see a lot of this over the next 1-2 years.
Software Engineers suddenly feel like they're fighting for their lives for employment, and time won't be "wasted" maintaining OSS for free.
We all need to eat.
I use pgbackrest for some databases in production, and it has been VERY good.
Sorry to hear this. Well done for maintaining a successful project for so long.
So sad. We have been using this amazing project extensively
Another one bites the dust...
Is it just me, or am I seeing more and more projects going unmaintained due to financial and/or mental fatigue?
[1] https://blogs.gnome.org/chergert/author/chergert/
[2] https://github.com/nvim-treesitter/nvim-treesitter/discussio...
[3] https://discourse.gnome.org/t/stepping-down-as-libxml2-maint...
Waiting for all the C-level execs saying that "anyway this is not needed, we're going to vibe-code a solution to our production database backups" lol
The backups will then be hyper-optimized from three hours down to 5 minutes using devnull compression technologies. It's super effective!
Why even waste all this time and money on backups in the first place? Just don't make mistakes.
Only for their AI to delete the production database and all the backups, and be forced to write an apology.
https://news.ycombinator.com/item?id=47911524
The A.I will probably steal the code and make it an unmaintainable mess that deletes backups when someone tries to restore
Mentioned this on X but CockroachDB should sponsor this - their audience is Postgres people and open source contributions can be great marketing.
thirteen years of blood, sweat, and late nights shipped into the void - respect to David Steele for keeping it real and pulling the plug clean rather than letting it rot in maintenance burden
This is the message the author posted on LinkedIn:
After a lot of thought, I have decided to stop working on pgBackRest. I did not come to this decision lightly. pgBackRest has been my passion project for the last thirteen years, and I was fortunate to have corporate sponsorship for much of this time, but there were also many late nights and weekends as I worked to make pgBackRest the project it is today, aided by numerous contributors. Every open-source developer knows exactly what I mean and how much of your life gets devoted to a special project.
Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.
Like everyone else, I need to make a living, and the range of pgBackRest-related roles is very limited. I can now consider a wider variety of opportunities, but those will not leave me time to work on pgBackRest, which requires a fair amount of time for maintenance, bug fixes, PR reviews, answering issues, etc. That does not even include time to write new features, which is what I really love to do. Rather than do the work poorly and/or sporadically, I think it makes more sense to have a hard stop.
I will post a notice of obsolescence and archive the repository. I imagine at some point pgBackRest will be forked, but that will be a new project with new maintainers, and they will need to build trust the same way we did.
Again, many thanks to all the pgBackRest contributors over the years. It was a pleasure working with you!
Damn, this sucks. Snowflake bought Crunchy Data, right? I know they largely did that because they wanted to push Crunchy Data's datalake extension past some sort of proverbial finish line, as they've been competing with Databricks for features, but Snowflake's pressers about commitment to open source and Postgres in general (which of course no one should take that seriously) feel even more sad when it blows out the floor underneath projects like this, which are undoubtedly part of the same Postgres extension ecosystem. Snowflake went after Crunchy Data for that _one_ extension while neglecting the broader world that Crunchy Data was keeping alive. They can champion support for OSS and Postgres all they want, but they hurt the ecosystem here; kind of a slap in the face to the Postgres world.
Thank you for adding this here.
That text is right there in the link, we don't need to read it twice.
>"we don't need to read it" [here]
many people here don't read the articles, and that's not going to change. (on today's internet, jumping from the site you want to be on to a site with unknown UX patterns is fraught)
but people here do read the comments, so having important details from the articles in comments here improves the quality of comments here, at least if you value staying on topic.
3 replies →
Why did you read it twice if you didn't need to? Seems unnecessary. I only read it once and just ignored it on subsequent encounters.
1 reply →
I wish the guy could have made a paid version so he could have continued it. Unfortunately, most people do not want to financially contribute to open source, especially when that open source project becomes a paid product.
adding to that, lots of devs don’t want the hassle of running a software business
do not yell at me, but... this is where genAI may be useful.
what if, bare with me, what if, after a certain amount of time, a certain amount of "requests", a code library can be given to a genAI to maintain; no improvements, no extra features, just bug fixes? This could continue until either someone picks it up, or the open source solution becomes irrelevant, not enough "requests".
Yes, lots of details to work out.
*bear, not bare.
No. I meant bare. As in "... what if, expose/uncover [this topic] with me, what if, ..."
I have recently configured pgbackrest for our app. :(
[dead]
[dead]
[flagged]
Metrics would help others who may want to rescue the project consider the options. E.g., the user base would make it clear whether there's an immediate opportunity to work with the author to launch a paid backup service around the project, funding continued work on it.
Why not try to find a successor instead of archiving the repo and forbidding the use of the name? I'm sure with a 3.8k stars repo you'll find competent people willing to continue the work.
Sometimes you want to hang things to your wall, and be done with it.
I'd personally do the same. I wouldn't want to be bothered by the future maintainers' choices and get feedback/flak for it. It's a well-known and well-respected way to cycle the name with a "-ng" or "-nx" suffix to signal that this is the newer project with a different set of maintainers.
Being MIT, while not my favorite license, doesn't give anyone free license to grab things and run with them.
Honestly, in my eyes, 3.8K or 38K stars mean nothing, because Open Source is not about you [0], to begin with.
[0]: https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...
It is reasonable to ask a follow-up project/fork to take a different name. Naming your project, e.g., pgbackrest-ng, does not sound too onerous a requirement and clearly communicates to users that the maintainers have changed (see also paperless-ng/ngx as good examples of such a change).
Finding a successor is also not easy nor cheap (in regards to time).
You'll also find plenty of potential malware injectors too, and who would want the responsibility of trying to vet a successor and have to work out the difference?
Because you will attract people who will want to take advantage of the trust these 3.8k stars signal to some people, for example, by means of supply chain attacks.
The Apache Foundation used to help with this sort of governance problem, didn't it? Though maybe pgbackrest isn't quite big and official enough to be the kind of software which Apache takes on, and one certainly hears (increasing?) grumbles about Apache's stewardship.
There's no way to know if a new maintainer will live up to whatever standards they've kept to date. Archiving should be the default decision, unless there's formal and elaborate handover.
A maintainer that is mainly motivated by the 3.8k stars aspect is probably not the person you want. Working on critical OSS software is fun until it's not, especially when you are not paid for that work.
Because that rug pulls your users.
3.8k stars and the name is years of built up trust with you, not with the person you gave it to.
> I'm sure with a 3.8k stars repo you'll find competent people willing to continue the work.
Oh yeah, I'm sure you will find lots of competent people. Like Jia Tan, for example. I've heard he is very competent.
Those people can just as easily fork it and make a new name then. Otherwise you end up with situations where it's actually an entirely new thing under new developers under the same name. Even riskier in the age of the "AI clean rewrite"
Why is it the responsibility of the person working for free?
Why is it never the responsibility of the people using it?
If anyone cares enough they will. People didn’t care enough to pay, so maybe no one cares enough to fork and be the new unpaid custodian
They are not really forbidding the use of the name (unless they have registered a trademark), they probably simply want to avoid confusion.
> TL;DR: pgBackRest is no longer being maintained. If you fork pgBackRest, please select a new name for your project.
> I imagine at some point pgBackRest will be forked, but that will be a new project with new maintainers, and they will need to build trust the same way we did.
I completely understand having to back out of maintenance on an OSS project, but why also slam the door closed on someone taking over? There may be someone very qualified willing to step up, and that could give your existing users continuity.
This feels analogous to deciding to stop maintaining a community garden, but rather than let your neighbor step up, you decide to salt the ground so it can never grow there again, telling your neighbors "you can pull up my plants and move them, but you can't use all the ground and roots that are already there." It just feels bitter.
To me it reads as being worried that someone malicious could step in and use the project's name to do harm. If you don't have someone within the project with trust built ready-to-go, establishing that trust enough to hand over the project is a big task.
I totally agree, that is a huge risk. But what if someone from the postgres team decided to step up and maintain it? I'm not saying that's likely, but it is possible for a very popular tool like this. With the way the project exited now, that would not at all be an option. Obviously if postgres themselves decided to do it, they wouldn't need the previous credibility so this isn't the best example
2 replies →
From the story told in the README it is clear this is a project run by a single person. There is no wider maintenance team that can be trusted with continuing the project. So anyone who offers to take up the maintenance will be unknown to the current maintainer and cannot automatically be trusted.
The alternative to this seemingly bitter approach is handing over the trust they built to some unknown person who can do whatever they want with the data in a lot of PostgreSQL databases around the world. I think I prefer the bitterness here over blind trust.
Sure, but what if someone from the postgres team decided they wanted to step up? The door is completely shut for that now. And if we can't trust someone from the postgres team to do it, then who can we trust?
1 reply →
It can still be forked. There is no salting the ground here. If you maintain the project and have for a long time, and you wish to stop, you can stop.
If no one cared enough to support the project, why does anyone care enough now? It all sounds hollow. Nothing bitter about it.
When you work on a project, any project, you have a responsibility. At some point we all can stop, and become free to not have that responsibility.
I think this is overly harsh. After the guy has been working on the project for such a long period a handover would inevitably be a long process, not least to ensure whoever took over didn't abuse the existing user-base. Completely fair if the existing maintainer doesn't want to take on this work, and arguably a fork forces consumers to properly consider that someone else is in charge now.
See: "It's OK to abandon your side-project (2024)": https://news.ycombinator.com/item?id=47918961 (also on today's frontpage)
Is it really that much effort to maintain something? I’ll admit I haven’t the foggiest, my most maintained thing having like 200 stars or something, but if I leave it alone for half a year it doesn’t suddenly combust into flame.
> Is it really that much effort to maintain something?
yes
see https://news.ycombinator.com/item?id=47921198 for start
lol so a backup system was brittle enough that it needs a guy constantly working on it? which implies I need to constantly update?
Motte: if you stop maintaining a project, it won’t become unusable in six months.
Bailey: maintaining a popular project is not that much work.
What?