F-Droid build servers can't build modern Android apps due to outdated CPUs
2 days ago
On August 7, 2025, a new build problem started hitting many Android apps on F-Droid: apps that use Android Gradle Plugin (AGP) 8.12.0 or Gradle 9.0 have been unable to publish updates.
The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support. This is similar to a 2021 AGP 4.1.0 issue, but it has returned, and now affects hundreds of apps.
As an example, my open-source app MBCompass hit this issue. I downgraded to AGP 8.11.1 with Gradle 8.13 to make it build, but even then, F-Droid failed due to a baseline profile reproducibility bug in AGP. The only workaround was disabling baseline profiles and pushing yet another release.
This has led to multiple “maintenance” versions in a short time, confusing users and wasting developer time, just to work around infrastructure issues outside the developer’s control.
References:
- F-Droid admin issue: https://gitlab.com/fdroid/admin/-/issues/593
- Catima example: https://github.com/CatimaLoyalty/Android/issues/2608
- MBCompass case: https://github.com/CompassMB/MBCompass/issues/88
This means their servers are very old ones that do not support x86-64-v2. Intel Core 2 Duo days?
https://developers.redhat.com/blog/2021/01/05/building-red-h...
Think of how much faster their servers would be with one of those Epyc consumer cpus.
I was about to ask people to donate, but they have $80k in their coffers. I realize their budget is only $17,000 a year, but I'm curious why they haven't spent $2-3k on one of those Zen4 or Zen5 mATX consumer Epyc servers, since they come in at around $2k, well within budget. If they have a fleet of these old servers, I imagine a Zen5 one could replace at least a few of them and consume far less power and space.
https://opencollective.com/f-droid#category-BUDGET
Not sure if this includes their Librapay donations either:
https://liberapay.com/F-Droid-Data/donate
> This means their servers are very old ones that do not support x86-64-v2. Intel Core 2 Duo days?
This is not always a given. In our virtualization platform, we upgraded a vendor-supplied VM recently, and while it booted, some of the services on it failed to start despite us exposing an x86_64-v2 + AES CPU to the VM. The minimum requirements cited "Pentium and Celeron", so that was more than enough.
It turned out that one of the services used a single instruction added in a v3 or v4 CPU, and failed to start. We changed the exposed CPU and things have returned to normal.
So, their servers might be capable and misconfigured, or the binary might require more than what it states, or something else.
A developer on the ticket writes: "Our machines run older server grade CPUs, that indeed do not support the newer SSE4_1 and SSSE3"
4 replies →
$2-3k? That's barely the price of a lower-end Threadripper bare CPU, not a full Epyc server.
At our supplier $2k would pay for a 1U server with a 16 core 3GHz Epyc 7313P with 32GB RAM, a tiny SSD and non-redundant power.
$3k pays for a 1U server with a 32 core 2.6GHz Epyc 7513 with 128GB RAM and 960GB of non-redundant SSD storage (probably fine for build servers).
All using server CPUs, since that was easier to find. If you want more cores or more than 3GHz things get considerably more expensive.
7 replies →
Low-end Epyc (16-24 cores), especially older generations, is not that expensive: $800-1.2K in my experience. Less when in a second-hand server.
Perhaps the servers run Coreboot / Libreboot?
I'm not even sure mainline Linux supports machines this old at this point. The cmpxchg16b instruction isn't that old, and I believe it's required now.
CMPXCHG8B is required as of a month or two ago, not 16B (i.e., the version from the 90's is now required)
See https://lkml.org/lkml/2025/4/25/409
32-bit Linux is still supported by the kernel, and Debian, Arch, and Fedora still support baseline x86_64.
RHEL 8 is still supported, and Ubuntu still targets baseline x86_64, I believe, among commercial distros. Not sure about SUSE.
2 replies →
> about to ask people to donate, but they have $80k in their coffers
I'd still ask folks to donate. $80k isn't much at all given the time and effort I've seen their volunteers spend on keeping the lights on.
From what I recall, they do want to modernize their build infrastructure, but it is as big an investment as they can make. If they had enough in their "coffers", I'm sure they'd feel more confident about it.
It isn't like they don't have any other things to fix or address.
I would too but do you have a link to them talking about it?
>they have $80k in their coffers but I am curious why they haven't spent $2-3k on one of those Zen4 or Zen5 matx consumer Epyc servers
I would also like to know this.
I would much rather they spent that on having the devs network and travel; the servers work.
6 replies →
Yeah and everybody was complaining how slow the builds are for years. I really want to know too
Probably a case of "don't fix it if it ain't broke" keeping old machines in service too long, so now they broke.
1 reply →
This is pretty concerning, especially as FDroid is by far the largest non-google android store at the moment, something that I feel is really needed, regardless of your feelings about google.
Does anyone know of plans to resolve this? Will FDroid update their servers? Are google looking into rolling back the requirement? (this last one sounds unlikely)
I agree it’s a bit concerning but please keep in mind F-Droid is a volunteer-run community project. Especially with some EU countries moving to open source software, it would be nice to see some public funding for projects like F-Droid.
> please keep in mind F-Droid is a volunteer-run community project.
To me, that's the worrying part.
Not that it's ran by volunteers. But that all there's left between a full-on "tech monopoly" or hegemony, and a free internet, is small bands of underfunded volunteers.
Opposition to market dominance and monopolies by multibillion-dollar multinationals shouldn't just come from a few volunteers. If that's the case, just roll over and give up; the cause is lost. (As I've done, hence my defeatism.)
Aside from that: it being "a volunteer-run community" shouldn't be put forward as an excuse for why it's in trouble/has poor UX/is hard to use/is behind/etc. It should be a killer feature. Something that makes it more resilient/better attuned/easier/earlier adopting/etc.
8 replies →
Hope I didn't come across as criticising F-Droid here; it seems sucky to have build requirements change under your feet.
It's just I think that FDroid is an important project, and hope this doesn't block their progress.
> Nice to see some public funding for projects like F-Droid
Definitely. A CPU without the SSE4.1 instruction set, for building apps in 2025? No way!!
Maybe if f-droid is important to you, donate, so they can buy newer build server?
I'm not quite sure if I'm over reading into this, but this comes across as a snarky response as if I've said "boo, fdroid sucks and owes me a free app store!".
Apologies if I came across like that; here's what I'm trying to convey:
- Fdroid is important
- This sounds like a problem, not necessarily one that's any fault of fdroid
- Does anyone know of a plan to fix the issue?
For what it's worth, I do donate on a monthly basis to fdroid through liberapay, but I don't think that's really relevant here?
1 reply →
This has now become a major issue for F-Droid, as well as for FOSS app developers. People are starting to complain about devs because they haven't been able to release new versions of their apps as promised (at least they don't show up on F-Droid).
Is Westmere the minimum architecture needed for the required SSE?
Server hardware at the minimum v2 functionality can be found for a few hundred dollars.
A competent administrator with physical access could solve this quickly.
Take a ReaR image, then restore it on the new platform.
Where are the physical servers?
8 replies →
Did, and do regularly.
> FDroid is by far the largest non-google android store at the moment
Not even sure it's in the top 10
Wait really? What other ones are there!? Somebody's already pointed out the Samsung Galaxy Store, but I don't think I know of others?
Edit: searching online found this if anyone else is interested https://www.androidauthority.com/best-app-stores-936652/
9 replies →
I think we only know about F-Droid because it's the only high quality one.
Low quality software tends to be popular among the general public because they're very bad at evaluating software quality.
1 reply →
> Are google looking into rolling back the requirement? (this last one sounds unlikely)
That's apparently what they did last time. From the ticket:
"Back in 2021 developers complained that AAPT2 from Gradle Plugin 4.1.0 was throwing errors while the older 4.0.2 worked fine. The issue was that 4.1.0 wanted a CPU which supports SSSE3 and on the older CPUs it would fail. This was fixed for Gradle Plugin 4.2.0-rc01 / Gradle 7.0.0 alpha 9"
>FDroid is by far the largest non-google android store at the moment
Samsung Galaxy Store is much much bigger.
Funny true story: I got my first smartphone in 2018, a Samsung Galaxy A5. I have it to this day, and it is the only smartphone I ever used. This is the first time I hear about Samsung Galaxy store! (≧▽≦)
Largest not run by the corporations then ;)
Yup! I missed that one because I didn't realise it still existed. Whoops!
Why do you read "Google's build tools cannot be built from source, and they were compiled with optional optimizations made required" and assume the right thing to do is to buy newer servers?
I'm not assuming anything, this is from a ticket for fdroid on google:
> Our machines run older server grade CPUs, that indeed do not support the newer SSE4_1 and SSSE3.[0]
I.e. the problem is because fdroid have older CPUs, newer ones would be able to build. I only mentioned it in terms of what the plans to fix might be. I have zero idea if upgrading servers is the best way to go.
[0] https://issuetracker.google.com/issues/438515318?pli=1
Why not recompile aapt2 to correct target? It seems to be source available.
https://android.googlesource.com/platform/frameworks/base/+/...
Have you tried building AOSP from available sources?
Binaries everywhere. Tried to rebuild some of them with the available sources and noped the f out because that breaks the build so bad it's ridiculous.
"Binaries everywhere"
So much for "Open Source"
8 replies →
Yes. Sources available means nothing without a reproducible build process.
So open source is only in the name, noted
Debian also seems to have given up.
Using Docker with QEMU CPU emulation would be a more maintainable solution than recompiling aapt2, as it would handle future binary updates automatically without requiring custom patches for each release.
https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions#Late...
Even my last, crazy long in the tooth, desktop supported this and it lived to almost 10 years old before being replaced.
However at the same time, not even offering a fallback path in non-assembly?
> However at the same time, not even offering a fallback path in non-assembly?
There's probably not any hand-written assembly at issue here, just a compiler told to target x86_64-v2. Among others, RHEL 9 and derivatives were built with such options. (RHEL 10 bumped up the minimum spec again to x86_64-v3, allowing use of AVX.)
Or even, a compiler told to target nothing in particular, and a default finally toggled over from "Oh, we're 'targeting x86'? So CPUs from the early 2000s then" to "Oh, we're 'targeting x86'? So CPUs from the mid-2010s then."
Looking at the issue their builders seem to be Opterons G3 (K10?)[0]
[0] https://en.wikipedia.org/wiki/AMD_10h
at this point they're guzzling so much power that the electricity is more expensive than a replacement platform
I can imagine it has to be like that, as they usually get about $1,500 per month in donations.
You could buy a newer one but I guess they have other stuff they have to pay for.
7 replies →
I have a home server with a 9th gen i7 that's doing jack sh!t most of the time, is there a way to donate some compute time to build F-Droid packages?
The problem with offering fallbacks is testing -- there isn't any reasonable hardware which you could use, because as you say it's all very old and slow.
I'm sure theyll appreciate your old desktop donation
I don't fully understand: aren't gradle and aapt2 open-source ?
If you want to build Buildroot or OpenWrt, the first thing it will do is compile its own toolchain (rather than reusing the one from your distro) so that it can produce predictable results. I would apply the same rationale to F-Droid: why not compile the whole toolchain from source rather than use a binary gradle/aapt2 that uses unsupported instructions?
SDK binaries provided by Google are still used, see https://forum.f-droid.org/t/call-for-help-making-free-softwa...
I agree, this should be the case, but Gradle specifically relies on downloading prebuilt java libraries and such to build itself and anything you build with it, and sometimes these have prebuilt native code inside. Unlike buildroot and any linux distribution, there's no metadata to figure out how to build each library, and the process for them is different between each library (no standards like make, autotools and cmake), so building the gradle ecosystem from source is very tedious and difficult.
having worked with both mvn and gradle, i always have a good chuckle when i hear about npm "supply chain" hacks.
Apparently it was fixed upstream by Google?
https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153
Not sure how long it will take to get resolved but that thread seems reassuring even if there isn't a direct source that it was fixed.
It is not fixed.
In the thread you linked to people are confusing a typo correction ("mas fixed" => "was fixed") as a claim about this new issue being fixed.
The one that was fixed is this similar old issue from years ago: https://issuetracker.google.com/issues/172048751
Oh, that's unfortunate, very confusing thread.
It still isn't fixed. Currently, most of the devs aren't even aware of this underlying issue!
As far as I can see, SSE4.1 was introduced in CPUs in 2011. That's more than 10 years ago. I wonder why such old servers are still in use. I'd assume that a modern CPU would do the same amount of work with a fraction of the energy, so it doesn't even make economic sense to run such outdated hardware.
Does anyone know the numbers of build servers and the specs?
It was introduced in Intel Penryn, in November 2007.
However, AMD CPUs did not implement it until Bulldozer, in mid-2011.
While they lacked the many additional instructions provided by Bulldozer, also including AVX and FMA, for many applications the older Opteron CPUs were significantly faster than the Bulldozer-based CPUs, so there were few incentives for upgrading them, before the launch of AMD Epyc in mid 2017.
SSE 4.1 is a cut point in supporting old CPUs for many software packages, because older CPUs have a very high overhead for divergent computations (e.g. with if ... else ...) inside loops that are parallelized with SIMD instructions.
> I'd assume that a modern CPU would do the same amount of work with a fraction of energy so that it does not even make economical sense to run such outdated hardware.
There are 8,760 hours in a non-leap year. Electricity in the U.S. averages 12.53 cents per kilowatt hour[1]. A really power-hungry CPU running full-bore at 500 W for a year would thus use about $550 of electricity. Even if power consumption dropped by half, that’s only about 10% of the cost of a new computer, so the payoff date of an upgrade is ten years in the future (ignoring the cost of performing the upgrade, which is non-negligible — as is the risk).
And of course buying a new computer is a capital expense, while paying for electricity is an operating expense.
1: https://www.eia.gov/electricity/monthly/epm_table_grapher.ph...
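Running that payoff arithmetic explicitly (the 500 W draw and 12.53 ¢/kWh figures come from the comment above; the ~$2,500 server price is a ballpark taken from elsewhere in the thread):

```python
# Yearly electricity cost of a build box running full-bore.
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.1253   # US average, $/kWh
draw_watts = 500         # deliberately pessimistic

yearly_cost = draw_watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH
print(f"${yearly_cost:.0f}/year")

# If a new machine halves the power draw, how long until a ~$2,500
# replacement pays for itself on electricity alone?
savings_per_year = yearly_cost / 2
print(f"~{2500 / savings_per_year:.0f} years")
```

Which lands roughly at the "ten years in the future" the comment arrives at, before counting the labor and risk of the swap.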
You can buy a mini pc for less than $550. For $200 on Amazon you can get an N97 based box with 12 GB RAM and 4 cores running at 3 GHz and a 500 GB SATA SSD. That’s got to be as fast as their current build systems and supports the required instructions.
4 replies →
I haven’t seen the real answer that I suspect here - the build servers are that one dual socket AMD board which runs open firmware and has no ME/PSP .
On the server side, probably not, but I'd like to point out that old hardware is not uncommon, and it's going to be more and more likely as time passes especially in the desktop space.
I was hit by this scenario in the 2000s with an old desktop PC I had, also in the ~10-year range, that I was using just for boring stuff and random browsing; it was old, but perfectly adequate for the purpose. With time, programs got rebuilt with some version of SSE it didn't support. When even Firefox switched to the new instruction set, I had to essentially trash a perfectly working desktop PC, as it became useless for the purpose.
I was going to say that I assume that the reason for such old CPUs is the ability to use Canoeboot/GNU Boot. But you absolutely can put an SSE4.2 CPU in a KGPE-D16 motherboard. So IDK.
Because setting up servers is annoying grunt-work that people avoid doing more than absolutely necessary. There's a reason the expensive options of AWS, Azure, and Google Cloud make money: much of it "just works" when you focus on applications rather than the infra (until you actually need to do something advanced and the obscure commands or clicking bites you in the ass).
Hardware from after the first couple of generations of x86_64 multicore processors is perfectly capable of serving as servers, even for tasks you want to offload to a build farm.
A few months ago Adobe finally updated Lightroom Classic to require these processor extensions, to squeeze out all the matrix multiplications it can for AI features, even in CPU mode.
It's amazing how long of a run top end hardware from ~2011 has had (just missed the cutoff by a few months). It's taken this long for stuff to really require these features.
The Catima thread makes F-Droid sound like a really difficult community to work with. Although I'm basing this on one person's comment and other people agreeing, not on any knowledge or experience.
> But this is like everything with F-Droid: everything always falls on a deaf man's ears. So I would rather not waste more time talking to a brick wall. If I had the feeling it was possible to improve F-Droid by raising issues and trying to discuss how to solve them I wouldn't have left the project out of frustration after years of putting so much time and energy into it.
F-droid are thoroughly understaffed and yet incredibly ambitious and shrewd around their goals - they want to build all the apps in a reproducible manner. There’s lots of friction around deviating from builds that fit within their model. The system is also slow, takes a long while before a build shows up. I think f-droid could benefit immensely from more funding, saying that as someone who has never seen f-droid’s side, but have worked on an app that was published there.
I saw that too and was wondering what kind of drama happened in the past
3 replies →
There's a bunch of stupid behavior all around (running AGP in alpha being one), but F-Droid asking maintainers to disable baseline profiles because it breaks reproducibility for them is thoroughly stupid and demanding.
> Google’s new aapt2 binary in AGP 8.12.0
Given F-Droid's emphasis on isolating and protecting their build environment, I'm kind of surprised that they're just using upstream binaries and not building from source.
Relatedly, we don't really have any up to date free software build of the Android SDK AFAIK. To build Android apps, we all rely on the Google binaries, which are non-free.
https://forum.f-droid.org/t/call-for-help-making-free-softwa...
It seems quite implausible that F-Droid is actually running on hardware that predates those instruction set extensions. They're seeing wider adoption by default these days precisely because hardware which doesn't support them is getting very rare, especially in servers still in production use. Are you sure this isn't simply a matter of F-Droid using VMs that are configured to not expose those instructions as supported?
This is sort of like a bug I hit last year when the mysql docker container suddenly started requiring x86-64-v2 after a patch level upgrade and failed to start: https://github.com/docker-library/mysql/issues/1055
Their servers are so old, even an entirely different architecture emulating x86_64 would still see a performance increase... So there's no OSS argument here - they could even buy a Talos, have no closed firmware, and still see a performance increase with emulation. If they don't care about the firmware, there are plenty of very cheap x86 options which are still more modern.
> Their servers are so old
When I read this, pop culture has trained me to expect an insult, like: “Their servers are so old, they sat next to Ben Franklin in kindergarten.”
My home server is so old, it gets its driver's license next year
Fortunately the source code is available:
https://android.googlesource.com/platform/frameworks/base/+/...
If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd.
There is no point for Google to push planned obsolescence on the PC or server space. They don't have a market there.
It does benefit them to make it harder for competitors.
9 replies →
"If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd"
The idea that not supporting a 20+ year old system is "planned obsolescence" is a bit shallow
As if supporting some system were a one-off thing. You must maintain it and account for it in all the features you bring in going forward.
The Win95 API is pretty incomplete. That was actually a terrible OS. The oldest I'd go playing this game with anything serious is probably XP.
It can read files, write files, and allocate memory. Is there anything else you need to compile software?
3 replies →
But you don't, so you won't, scoring one for the planned obsolescence crowd.
And so won't anyone else who has time to complain about planned obsolescence, and that includes myself.
I'm a bit lost in this thread, but I've written up what I know for other dummies like me
Aapt2 is an x86_64 standalone binary used to build android APKs for various CPU targets
Previous versions of it used a simpler instruction set, but the new version requires extra SIMD extensions (SSSE3 and SSE4.1). A lot of CPUs after 2008 support these, but not F-Droid's current server farm?
> Our machines run older server grade CPUs
So a bit of both of older hardware, and not-matched-with-consumer-featureset hardware. I'd imagine some server hardware vendors supported SSE4 way earlier than most, and some probably supported it way later than most too.
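One way to check where a given Linux host stands (a quick sketch; `ssse3` and `sse4_1` are the flag names the kernel uses in `/proc/cpuinfo` for the extensions cited in the F-Droid ticket):

```python
# Parse the kernel's CPU feature-flag line and check the extensions
# the new aapt2 binary is reported to require.
def cpu_flags(cpuinfo_text: str) -> set:
    """Return the set of feature flags from x86 /proc/cpuinfo text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()  # non-x86 kernels label this line differently

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        flags = cpu_flags(f.read())
    for needed in ("ssse3", "sse4_1"):
        print(needed, "supported" if needed in flags else "MISSING")
```

On anything sold in the last decade both lines should read "supported"; on the build farm described in the ticket, `sse4_1` would come back MISSING.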
I’ve got an old Ivy Bridge-EP Dell workstation they can borrow goddamn SSE4.1 is nearly old enough to drink.
SSE4.1 can legally buy lightly alcoholic beverages in various European countries already. Next year, it can buy strong spirits.
Using AMD hardware that's "only" 13 years old can also cause this problem, though.
Yeah I was kind of shocked too. Core 2 could do both of those instruction sets. A used Dell Precision can be had for very little and probably would be grossly more efficient than whatever they're using.
Non-hacker here. The title says "modern". I don't need modern, have a 10 year old phone, can I still get the occasional simple app from F-Droid?
I upped my (small) monthly contribution. Hope more people contribute, and also work to build public support.
Also, for developers .. please include old fashioned credit cards as a payment method. I'd like to contribute but don't want to sign up for yet another payment method.
That F-Droid even requires doing the build itself is one of the reasons I created Discoverium.
https://github.com/cygnusx-1-org/Discoverium/
That F-Droid does the build itself ensures all apps provided by F-Droid are free software (as in freedom) and proven to be buildable by someone other than the app developer.
The issue is more complicated than that.
2 replies →
> and proven to be buildable by someone other than the app developer
Yup. That's a huge, huge issue - IME especially once Java enters the scene. Developers have all sorts of weird stuff in their global ~/.m2/settings.xml that they set up a decade ago and probably don't even think about... real fun when they hand over the project to someone else.
So I should take a binary from a random stranger because trust me bro?
It is a modified version of Obtainium. You get it from the author via GitHub.
Man, Android could have been way cooler if it actually used real virtual machines, or at least the JVMs.
I stood by Oracle, because in the long term, as has been proven, Android is Google's J++, and Kotlin became Google's C#.
Hardly any different from what was in the genesis of .NET.
Nowadays they support up to Java 17 LTS, a subset only as usual, mostly because Android was being left behind accessing the Java ecosystem on Maven central.
And even though now ART is updatable via PlayStore, all the way down to Android 12, they see no need to move beyond Java 17 subset, until most likely they start again missing on key libraries that decided to adopt newer features.
Also, don't count on stuff like Panama, Loom, Vector, or Valhalla (if ever) being supported on ART.
At least they managed to push into the mainstream the closest thing to OSes like Oberon, Inferno, JavaOS and co., where, regardless of what one thinks about the superiority of UNIX clones, they have to content themselves with a managed userspace, something that Microsoft failed at with Longhorn, Singularity and Midori due to their internal politics.
> Kotlin became Google's C#
Are Google buying Jetbrains?
1 reply →
ARM phones didn't have virtualisation back in the day so that would've been impossible.
Modern Android has virtual machines on devices with supported hardware+bootloader+kernels: https://source.android.com/docs/core/virtualization
JVM??? hell no, native FTW
I think that's part of the problem. The JVM rarely runs interpreted code; nearly everything is compiled to native code.
I thought SSE 4.1 dates back to 2008 or so?
The build servers appear to be AMD Opteron G3s, which only support part of SSE4 (SSE4a). Full SSE4 support didn't land until Bulldozer (late 2011).
I appreciate that this is a volunteer project, but my back-of-the-envelope math suggests that if they upgraded to a $300 laptop using a 10nm Intel chip, it would pay for itself in power usage within a few years. Actually, probably less, considering an i3-N305 has more cores and substantially faster single-thread performance.
And yes, you could get that cost down easily.
14 replies →
it's insane, i would give them my old xeon haswell machine for free, but the shipping cost is likely more than the cost of the machine itself.
Yes, SSSE3 and SSE4.1 were introduced around 2006-2007. The F-Droid build servers still use hardware from before that era to build modern versions of some of the most popular FOSS apps.
That’s a tough one. It’s ironic that the very platform meant to keep apps open and accessible is now bottlenecked by outdated hardware.
Upgrading the build farm CPUs seems like the obvious fix, but I’m guessing funding and coordination make it less straightforward. In the meantime, forcing devs to downgrade AGP or strip baseline profiles just to ship feels like a pretty big friction point.
Long term, I wonder if F-Droid could offer an optional “modern build lane” with newer hardware, even if it means fewer guarantees of full reproducibility at first. That might at least keep apps from stalling out entirely.
I've said this before, but I'll say it again. Running on donations is not a viable strategy for any long-term goal. FOSS needs to passively invest the donations. That is a viable long-term strategy. Now when things like this happen, it becomes a major line item moment, and not a limp-along situation, with yet another WE NEED YOUR HELP banner blocking off 1/2 their website.
It's super annoying how software vendors forcefully deprecate good-enough hardware.
I genuinely hate that, as Mozilla has deprived me of Firefox's translation feature because of it.
The problem is that your "good enough" is someone else's "woefully inadequate", and sticking to the old feature sets is going to make the software horribly inefficient - or just plain unusable.
I'm sure there's someone out there who believes their 8086 is still "good enough", so should we restrict all software to the features supported by an 8086: 16-bit computations only, 1 MB of memory, no multithreading, no SIMD, no floats, no isolation between OS and user processes? That would obviously be ludicrous.
At a certain point it just doesn't make any sense to support hardware that old anymore. When it is cheaper to upgrade than to keep running the old stuff, and only a handful of people are sticking with the ancient hardware for nostalgic reasons, should that tiny group really be holding back basically your entire user base?
Ah, c'mon, spare me these strawman arguments. Good enough is good enough. If F-Droid isn't worried about that, you definitely have no reason to worry on their behalf.
"A tiny group is holding back everyone" is another silly strawman argument: all decent packaging/installation systems support providing different binaries for different architectures. It's just a matter of compiling one more binary and putting it into a package. Nobody is being held back by anyone; you can't make a sillier argument than that...
4 replies →
OTOH, if software wants to take advantage of modern features, it becomes hell to maintain if you have to have flags for every possible feature supported by CPUID. It's also unreasonable to expect maintainers to package dozens of builds for software that is unlikely to be used.
There are some guidelines[1][2] for developers to follow for a reasonable set of features, where they only need to manage ~4 variants. In this proposal the lowest feature set includes SSE4.1, which covers nearly any x86_64 CPU from the past 15 years. In theory we could use a modern CPU to compile the 4 variants and ship them all in a FatELF, so we'd only need to distribute one set of binaries. This of course would be completely impractical if we had to support every possible CPU's distinct features, and the binaries would be huge.
[1]:https://lists.llvm.org/pipermail/llvm-dev/2020-July/143289.h...
[2]:https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
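To see what those levels mean in practice, you can ask the compiler which feature macros each `-march` level switches on (assumes an x86-64 GCC or Clang recent enough to know the level names, GCC 11+ / Clang 12+):

```shell
# x86-64-v2 should add __SSSE3__ and __SSE4_1__ over the baseline --
# exactly the extensions the new aapt2 binary assumes.
for level in x86-64 x86-64-v2; do
  echo "== $level =="
  gcc -march="$level" -dM -E - </dev/null | grep -E '__(SSSE3|SSE4_1)__' \
    || echo "(no SSSE3/SSE4.1 macros at this level)"
done
```

Any code auto-vectorized under `-march=x86-64-v2` can therefore emit SSE4.1 instructions freely, which is presumably what happened to the shipped aapt2.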
In most cases (and this was the case with Mozilla I referred to) it's only a matter of compiling code that already has all the support necessary. They are using some upstream component that works perfectly fine on my architecture. They just decided to drop it, because they could.
4 replies →
The F-Droid builds have been slow for years, and with how old their servers apparently are, that isn't even surprising in retrospect.
Requiring (supposedly) universally available CPU instructions is one thing. Starting to require it in a minor version update (8.11.1 -> 8.12.0) is a whole different thing. What the heck happened to semantic versioning? We can't even trust patch updates anymore these days. The version numbers might as well be git commit IDs.
Note: the underlying blame here fundamentally belongs to whoever built AGP / Gradle with non-universal flags, then distributed it.
It's fine to ship binaries with hard-coded cpu flag requirements if you control the universe, but otherwise not, especially if you are in an ecosystem where you make it hard for users to rebuild everything from source.
Exactly. Everything should be compiled to target i386.
/s (should be obvious but probably not for this audience)
They should be compiled for the CPU baseline of the ABI they are using, and check if newer instructions are available before using them. This is what Debian does, so they can have maximum hardware support.
https://wiki.debian.org/InstructionSelection
1 reply →
control the universe
Guess what the company behind Android wants to do...
Perhaps there should be more than one F-Droid
For example, if they published their exact setup for building Android apps so others could replicate it
How many Android users compile the apps they use themselves?
Perhaps increasing that number would be a goal worth pursuing
Might be worth noting that several devs have suggested users use IzzyOnDroid instead. Due to IzzyOnDroid distributing official upstream builds (after scanning), they're not dependent on any build server.
They do have build servers for the purpose of confirming that upstream APKs match the source code via reproducible builds, but those are separate processes that don't block each other (unlike F-Droid's rather monolithic structure).
IzzyOnDroid has been faster with updates than F-Droid for years, releasing app updates within 24 hours for most cases.
Google should be compiling for the CPU baseline of the ABI their binaries are for, and then check if newer instructions are available before using them. Just like glibc and other projects do. The Debian documentation for this mentions tools to do this, like SIMDe and GCC/clang FMV.
https://wiki.debian.org/InstructionSelection
Am I missing something, or does SIMDe only help for cases where a program is using instruction intrinsics, and it doesn't do anything to address cases where the compiler decides to use SIMD as a result of auto-vectorization?
That's correct, but compilers usually don't do that if you target the CPU baseline.
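One way to see this (a sketch assuming GCC on x86-64; `-Q --help=target` dumps the effective target flags) is to compare what the auto-vectorizer is allowed to emit at the baseline versus at x86-64-v2:

```shell
# At the plain x86-64 baseline, SSE4.1 is disabled, so the
# auto-vectorizer can't use it; at -march=x86-64-v2 it's fair game.
gcc -march=x86-64    -Q --help=target | grep -w msse4.1
gcc -march=x86-64-v2 -Q --help=target | grep -w msse4.1
```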
2 replies →
Do I get it correctly, that they run their build infrastructure on at least 15 year old hardware?
There are even some "Unknown problem" failures on the IzzyOnDroid repo for app publishing, even when ensuring reproducible builds. Izzy says: "Not necessarily 'your fault' – baseline often has such issues": https://github.com/CompassMB/MBCompass/issues/90
Seems like he's saying the developer is responsible for that as well!
IzzyOnDroid can publish updates even if it's not reproducible, this is not an "app publishing" issue at all. IzzyOnDroid can deal with AGP 8.12 fine.
Also "not necessarily your fault" means "probably not your fault", the opposite of "your fault"
I don't know how many servers they're using, or the server specs beyond ancient Opterons, but how is this even an issue in 2025?
On Hetzner (not affiliated), at this moment, an i7-8700 (AVX2 supported) with 128 GB RAM, 2x1 TB SSD and 1 Gbit uplink costs 42.48 EUR per month, VAT included, in their server auction section.
What are we missing here, besides that the build farm was left to decay?
Either they want to run on ideologically pure hardware, without pesky management bits in it (or indeed even without UEFI), or they are just "it used to work perfectly" guys.
In the former case, I fail to see how ME or its absence is relevant to building Android apps, which they do using Google-provided binaries that have even more opportunity to inject naughty bits into the software. In the latter case, I better forget they exist.
I agree with you. Unfortunately, the simplest explanation is often the truth: they probably just ignored this issue until it surfaced.
1 reply →
Well if you wanted to compromise F-Droid you could target their build server's ME or a cloud vm's hypervisor.
To do a supply-chain attack on Google's SDK would be much more expensive and less likely to succeed. Google isn't going to be the attacker.
The recent attack on AMI/Gigabyte's ME shows how a zero-day can bootkit a UEFI server quite easily.
There are newer Coreboot boards than Opteron-era ones, though. Some embedded-oriented BIOSes let you fuse out the ME. You are warned this is permanent and irreversible.
F-Droid likely has upgrade options even in the all-open scenario.
QEMU user-mode static on Linux supports automatically emulating missing instructions. Depending on details I haven't figured out, it can be a lot slower running this way, or close enough to native. I have gotten it working, but it was a pain and I don't remember what was needed (most of the work was done by someone else, but I helped).
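For anyone wanting to try the same, a sketch of the non-transparent form (assumes qemu-user is installed; `./aapt2` stands in for whatever SSE4.1-requiring binary you need to run):

```shell
# Run an x86-64 binary that needs SSE4.1/SSSE3 on a host CPU that
# lacks them; QEMU's TCG translator emulates the missing instructions.
# "-cpu max" exposes every feature TCG can emulate.
qemu-x86_64 -cpu max ./aapt2 version

# For transparent use (so the build system needs no changes), distros
# ship qemu-user-static plus a binfmt_misc registration that makes
# the kernel route matching ELF binaries through the emulator.
```

Whether this ends up "close enough to native" mostly depends on how hot the emulated code paths are; for a short-lived tool like aapt2 the overhead might be tolerable.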
Is it the CPUs or the compilers? Or possibly a CI/CD runner that has to run something that can’t run on these CPUs?
> The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support. This is similar to a 2021 AGP 4.1.0 issue, but it has returned, and now affects hundreds of apps.
I don't know why they enabled modern CPU flags for a simple intermediary tool that compiles the APK resource files; it was so unnecessary.
Welp, there go my plans of salvaging an old laptop to build my Android apps.
I don't get the issue, binary target is completely independent from host target on all but the most basic setups
On the other hand, we have "personal" data centers for AI and mining farms for crypto.
wtf, they can't still be running Opterons. Surely they're using QEMU with g3 as a CPU profile... right?
Can't cross compilation help for that? The CPU compiling doesn't need to match the target.
It's not the target that is now requiring new instructions, but one of the components in the build tools.
I see.
I think this might give Google some ideas...
> (SSE4.1, SSSE3)
This means their build infrastructure burns excessive amounts of power, being run by volunteers in basements/homelabs on vintage, museum-grade hardware (15-year-old Opterons/Phenoms).
Gamers were there 14 years ago, with 'No Man's Sky' being the first big game to require SSE 4.1 for no particular reason.
Put another way, Google is requiring you to have 65nm Intel chips. 2009-ish.
Now that I think of it, is this because they want to run without blobs and without ME/PSP?
[dead]
[dead]
> The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support.
Very intelligent move from Google. Now you can't compile "Hello World" without SSE4.1, SSSE3. /s
Are there any X86 tablets with Android ?
There are very few 17+-year-old build servers at this point. Or laptops and desktops, for that matter.
[flagged]
Half the point is that I trust this middleman more than the app devs. When app developers turn evil ( https://news.ycombinator.com/item?id=38505229 ), I explicitly want someone reviewing things and blocking software that works against my interests before it gets to me.
Obtainium assumes that the app developer is a trustworthy entity, when the reality behind the mobile ecosystem being as fucked up as it is primarily comes from the app developer. (Due to bad incentives made by mobile platform makers, mainly Apple.)
You need a middleman in place in case the app developer goes bad.
I have it installed. But the only thing I get updates for is Obtainium itself. There's no catalogue of apps, so I haven't installed anything via Obtainium.
I would uninstall. Author and app seem sketchy.
1 reply →
Here's a catalog of apps from the Obtainium wiki.
https://apps.obtainium.imranr.dev/
They put the disclaimer on top that this list is not meant as an app store or catalog. It's meant for apps with somewhat complex requirements for adding to Obtainium. But it serves well as a catalog since most of the major open source apps are listed.
Try Discoverium
this seems to be a general app finder and tracker. useful, but entirely different from what f-droid does, namely verify that apps are actually Free Software or Open Source and buildable from source.
How is this not another middleman (with a political banner in its README no less)?
At this point it is not political, the banner mention a fact and a tragedy and link for donations to reputable NGOs.
3 replies →
I think it acts more as an RSS feed reader rather than building and hosting apps on its own.
[flagged]
5 replies →
[flagged]
A shitton of people, not to mention including all F-Droid users, would take FOSS ideology over new fangled bloated "non-decrepit" development tools _any day_.
But in any case, this is a false dichotomy, and likely an exaggerated one to begin with.
I think it's extremely useful to have more strict requirements on how programs are built, to make sure that developers don't do stupid things that makes code harder for others to compile.
The tools in question in OP should be easy to build from source and not rely on the host's architecture, so they can be used on platforms like ARM and RISC-V. It's clear that in the Android ecosystem people don't care, so F-Droid can't do miracles (the Java/Gradle ecosystem is just really bad at this), but this would not happen if the build tools had proper build recipes themselves.
As a user, i'm glad when devs use old tools so that my battery has a chance of lasting the whole day and my apps don't take 10 seconds just to open.
> As a user, i'm glad when devs use old tools so that my battery has a chance of lasting the whole day and my apps don't take 10 seconds just to open.
Yup, same here! The story is as old as time, and the examples are plentiful. First Slashdot, then Reddit, and now GitHub - all became far, far slower and less usable once they'd been "improved" by folks engaging in resume-driven development:
Why is GitHub UI getting slower? - https://news.ycombinator.com/item?id=44799861 - Aug 2025 (115 comments)
I am, too, as a user, quite pleased that F-Droid is keeping it cool and reliable for the actual users.
1 reply →
>> This has led to multiple “maintenance” versions in a short time, confusing users and wasting developer time, just to work around infrastructure issues outside the developer’s control.
What an entitled conclusion.