An iOS zero-click radio proximity exploit odyssey

5 years ago (googleprojectzero.blogspot.com)

I read the entire thing, and honestly the heap grooming is very interesting, but really that's the boring part -- lots of trial and error, padding memory, etc. Also interesting that linked-lists aren't used by Apple† (and Ian Beer's suggestion that they ought to use them), but that's neither here nor there. Getting kernel memory read/write is also very interesting, albeit (again) a bit tedious. At the end of the day, it all started with this:

> Using two MacOS laptops and enabling AirDrop on both of them I used a kernel debugger to edit the SyncTree TLV sent by one of the laptops, which caused the other one to kernel panic due to an out-of-bounds memmove.

How did this even pass the _smell_ test? How did it get through code reviews and auditing? You're copying memory with a length taken straight from an untrusted source. It's like memory management 101. I mean, my goodness, it's from a wireless source, at that.

† In this specific scenario, namely the list of `IO80211AWDLPeer`s.
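The failure mode in that quote is the classic unchecked-length TLV copy. A minimal sketch of the pattern, with hypothetical names (the real parser and the ~6KB `IO80211AWDLPeer` layout are far more involved):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical stand-in for the peer object's SyncTree buffer.
struct Peer {
    uint8_t sync_tree[64];
};

// BROKEN: trusts the attacker-controlled TLV length and copies into a
// fixed-size field -- an out-of-bounds write whenever tlv_len > 64.
void parse_sync_tree_unsafe(Peer& p, const uint8_t* tlv, size_t tlv_len) {
    memcpy(p.sync_tree, tlv, tlv_len);
}

// FIXED: validate the wire-supplied length before touching memory.
bool parse_sync_tree_safe(Peer& p, const uint8_t* tlv, size_t tlv_len) {
    if (tlv_len > sizeof p.sync_tree) return false; // reject oversized TLVs
    memcpy(p.sync_tree, tlv, tlv_len);
    return true;
}
```

The one-line length check is the whole difference between a rejected packet and a kernel panic.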

  • > How did this even pass the _smell_ test?

    Because attackers only have to find one place where the implementation got unlucky, and hence defenders are burdened with eliminating every last one of them.

    This is why implementing your network protocols in unsafe languages is bad. Testing can only find some bugs; it cannot ensure the absence of bugs.

    • If it's not one thing, it's another.

      https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=rust

      Now I know it's deeply comforting to think if you just had "safety" you could write all the code you want with abandon and the computer would tell you if you did it wrong, but this is a sophomoric attitude that you will either abandon when you have the right experiences, or you will go into management where the abject truth in this statement will be used to keep programmer salaries in the gutter, and piss-poor managers in a job. Meanwhile, these "safe" languages will give you nothing but shadows you'll mistake for your own limitations.

      My suggestion is just learn how to write secure code in C. It's an unknown-unknown for you at the moment, so you're going to have to learn how to tackle that sort of thing, but the good news is that (with the right strategy) many unknown-unknowns can be attacked using the same tricks. That means if you do learn how to write secure code in C, then the skills you develop will be transferable to other languages and other domains, and if you still like management, those skills will even be useful there.

      26 replies →

  • Finding the bug that allowed this exploit took this researcher weeks. QA can't find all defects without somehow testing every conceivable scenario without knowing every conceivable scenario, and code review can only catch defects if at least one reviewer is able to somehow know that specific methods make an exploit possible. Given that the exact code of underlying methods used may not be known to code reviewers, or that a reviewer might simply not know the full potential use cases for new code at the time of review, it is entirely understandable that defects and resulting exploits happen.

    This is why researchers like the OP exist. They find exploits and report them to the manufacturer (hopefully) before they can be used. The fact that this is an effective way of protecting us is also why major software companies offer bug/exploit bounties to researchers.

    To demand that all possible exploits of this nature never find their way into production builds is to demand perfection from humans. There is too much to know and think about, and definitely too many unknowns about the future, to make such a fantasy possible while still meeting release deadlines. We software developers often have a hard enough time just meeting feature and documentation deadlines, and adding more people just makes organizing your efforts more complex, which in turn requires even more people, until the overhead of organizing your development teams becomes financially impossible.

  • Apple uses tons of linked lists, I'm not sure why you got the impression that they don't?

    • I was referring specifically to the list of `IO80211AWDLPeer`s the author was reverse-engineering. His assumption was that the `IO80211AWDLPeer`s were in a linked-list type of data structure (which is a pretty sensible guess). In fact, it ended up being more akin to a priority queue:

      > The data structure holding the peers is in fact much more complex than a linked list, it's more like a priority queue with some interesting behaviours when the queue is modified and a distinct lack of safe unlinking and the like.

      I amended my post for clarification, I'm sure Apple uses linked lists all the time :)

The scary thing is that even though this sounds like a monstrous effort, it's not out of reach for large governments. It's essentially accepted as fact that they have loads of these exploits sitting in their toolbox, ready to use when they find an enticing enough target.

Short of rewriting the whole of iOS in a memory safe language I'm not sure how they could even solve this problem. Assigning a researcher to search for 6 months only to find one bug is financially prohibitive.

  • The research would've been much shorter if Apple would actually provide researchers with debug symbols. Or you know, if Apple open sourced their security-critical software.

    > One of the most time-consuming tasks of this whole project was the painstaking process of reverse engineering the types and meanings of a huge number of the fields in these objects. Each IO80211AWDLPeer object is almost 6KB; that's a lot of potential fields. Having structure layout information would probably have saved months.

    > Six years ago I had hoped Project Zero would be able to get legitimate access to data sources like this. Six years later and I am still spending months reversing structure layouts and naming variables.

    • It’s intensely frustrating, because for some reason Apple thinks it’s a good idea to strip out security code from the source that they do release (months late), and they tend to strip (and until recently, encrypt) kernel code. This is what a company from the last decade might do to hide security issues, except it’s coming from the world’s largest company with a highly skilled security team. Is there some old-school manager with so much influence that they’re able to override any calls from internal and external sources? It’s gotten to the point where Apple engineers privately brag about their new proprietary security mitigations after researchers who scrounge for accidentally symbolicated kernels (thank you, iOS 14 beta) do the work to find them. Why does this situation exist?

      25 replies →

    • Believe it or not, open sourcing security code is actually not a great idea. Most of the world's botnets run on WordPress, which is open source. Most of the time, legitimate actors are not going to read through an entire code base because they have better things to do. Illegitimate actors, however, have a very high incentive to read through a widely used public code base, and they do.

      1 reply →

    • He could just have sent in a bug report. Said that the length was not validated.

      No need to dig so much if you just want to fix the problem.

      But he wanted to prove something. That is a different thing.

      1 reply →

  • This is a weird statement, since the premise of this blog post is that these kinds of attacks aren't out of reach for a single talented researcher on a Google salary. It's not out of reach for any government. Nauru, Grenada, Tonga, the Comoros --- they can all afford this.

    • I believe the point of SulfurHexaFluri's final sentence is that it is cost prohibitive for Apple to dedicate a bunch of employees to search for bugs in order to fix them all. That is, it's cost-effective to find 1 bug, but not to find all of them. The sentence could have been worded better.

  • I'd personally phrase things a bit differently: an _individual_ was able to pull this off while surrounded by screaming children. A large government, with all its resources and hundreds+ of people, would pull this off regularly and without breaking a sweat.

  • > Short of rewriting the whole of iOS in a memory safe language I'm not sure how they could even solve this problem. Assigning a researcher to search for 6 months only to find one bug is financially prohibitive.

    Note that memory safe languages won't solve security. They only eliminate a class of security bugs, which would be amazing progress, but not all of them.

  • Didn't they move WiFi drivers, among other things, into userspace in macOS Big Sur? I've heard somewhere that they're moving in the direction of a microkernel precisely for this reason of reducing the attack surface.

    (Yes, I know I'm talking about macOS while the vulnerability was found in iOS, but there's a lot of shared code between them, especially at levels this low.)

  • >Its basically known as a fact they have loads of these exploits sitting in their toolbox ready to use when they have a enticing enough target.

    Do you have a source for this?

  • It is not just within reach of large governments; it is probably within reach of most organizations with 5-10 people. As the author says, after 6 months, "one person, working alone in their bedroom, was able to build a capability which would allow them to seriously compromise iPhone users they'd come into close contact with". Even if we assume the author is paid $1,000,000 a year, that is still only $500,000 of funding, which is an absolute drop in the bucket compared to most businesses.

    The average small business loan is more than that at $633,000 [1]. Hell, a single McDonald's restaurant [2] costs more than that to set up. In fact, it is not even out of the reach of vast numbers of individuals. Using the net worth percentiles in the US [3], $500,000 is only the 80th percentile of household net worth. That means in the US alone, which has 129 million households, there are literally 25.8 million households with the resources to bankroll such an effort (assuming they were willing to liquidate their net worth). You need to increase the cost by 1,000x to 10,000x before you get to a point where it is out of reach for anybody except large governments, and you need to increase the cost by 100,000x to 1,000,000x before it actually becomes infeasible for any government to bankroll such attacks.

    tl;dr It is way worse than you say. Every government can fund such an effort. Every Fortune 500 company can fund such an effort. Every multinational can fund such an effort. Probably ~50% of small businesses can fund such an effort. ~20% of people in the US can fund such an effort. The costs of these attacks aren't rookie numbers, they are baby numbers.

    [1] https://www.fundera.com/business-loans/guides/average-small-...

    [2] https://www.mcdonalds.com/us/en-us/about-us/franchising/new-...

    [3] https://dqydj.com/average-median-top-net-worth-percentiles/

    • For those who don't see why a company would want to use such exploits, consider how valuable it would be to know if a company's employees were planning to organize or strike.

      There are also paranoid people in positions of power, and bureaucracies that can justify spying on employees. One of the interesting things about this lockdown was finding out that many companies put spyware on their employee-issued computers to monitor their usage.

> Unfortunately, it's the same old story. A fairly trivial buffer overflow programming error in C++ code in the kernel parsing untrusted data, exposed to remote attackers. In fact, this entire exploit uses just a single memory corruption vulnerability to compromise the flagship iPhone 11 Pro device. With just this one issue I was able to defeat all the mitigations in order to remotely gain native code execution and kernel memory read and write.

Yes, the same old C/C++ buffer overflow problem. We have mainstream alternatives now. C#. Go. Rust. It's time to move on.

  • The code where the bug happens is legal C++, but it uses absolutely none of the memory safety improvements which were added to the language in the past... twenty years probably. It's basically C with classes.

    If they haven't kept up with the changes in their current language, what makes one think that they would "move on" to the alternatives, two of which aren't even alternatives?

    Before they switch to Rust it would be much faster and more efficient to use smart pointers, std::array, std::vector and stop using memcpy.
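
    A rough sketch of what that looks like (illustrative only, not the kernel's actual code; kernel environments restrict which of these facilities are available):

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// With std::vector the destination is sized from the input, so there is
// no fixed buffer to overflow.
std::vector<uint8_t> copy_payload(const uint8_t* data, size_t len) {
    return std::vector<uint8_t>(data, data + len);
}

// When a fixed capacity is required, std::array makes the capacity part of
// the type, so clamping is explicit rather than forgotten.
template <size_t N>
size_t copy_into(std::array<uint8_t, N>& dst, const uint8_t* data, size_t len) {
    const size_t n = std::min(len, N);
    std::copy_n(data, n, dst.begin());
    return n; // caller learns how much was actually accepted
}
```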

    • Note that this code is shipping as a kernel extension, which uses Embedded C++, not standard C++. Notably, things like templates and exceptions are not available. It would be nice if they could work on this instead, but looking at the dyld and Security sources (which have no such limitations, as they run in userspace) I don't have much confidence.

      3 replies →

  • As much as I like to bash security critical code written in memory-unsafe languages, I don't think that this is the crux of the problem here.

    To me it's that this extremely trivial bug (the heap overflow, let's ignore the rest for now) passed through code review, security review, security audits, fuzzing... Or that Apple didn't have these in place at all. Not sure which option is worse.

    • We have 30 years of experience showing that ordinary heap overflows are not in fact easy to spot in code review, security review, security audits, and fuzzing. Each of those modalities eliminates a slice of the problem, and some of them --- manual review modalities --- will remove different slices every time they're applied; different test team, different bugs.

      To me, this strongly suggests that the problem is in fact memory-unsafe languages, and not general engineering practices.

      Apple, by the way, has all the things you're talking about in place, and in spades.

      7 replies →

    • Such bugs are extremely difficult to prevent at scale. Even the most talented engineers make such mistakes and programming quality varies significantly even within top engineering teams which are usually comprised of people with different skill sets (+ junior engineers that need training).

      Safe languages are the only way forward to drastically reduce the problem. It can’t be guaranteed to be eliminated 100% obviously because there are still escape hatches available, but it will be significantly improved because you can limit the surface area where such bugs can live.

      3 replies →

    • Does any software producer do fuzzing on their own product? I have never heard of this being done by software developers. Usually it's done by exploit developers. Of course there are static analysis tools that should uncover a problem like this, and I know that high-reliability embedded software developers use them, but I don't know if the likes of Apple does.

  • IMO this is huge.

    Thankfully, Apple is starting to hire Rust developers as well as AWS.

    The tide is changing, one day we will see some Rust code in iOS/macOS so that these issues are a thing of the past.

    • By the time we all retire in a few decades they'll be a thing of the past, probably.

      There's so much low-hanging fruit to pick in that code and switching to Rust is like saying that we should go to Mars to pick fruit instead.

  • C#? You've got to be joking.

  • C# is GC'd so massive memory hit, and also not a language you can have in a kernel.

    Go: GC again, so no go.

    Rust: most sane of the examples you've given.

    Apple has already started migrating to Swift which is a memory safe language.

    However the real reasons Rust and Go aren't feasible is that they're both essentially all-or-nothing, and neither offers even the most basic semblance of ABI compatibility. Their only nod to ABI stability is "use FFI to C" which means your APIs remain unsafe, and doesn't work for non-C languages without all your system APIs having other languages layered on top.

    Swift at least lets you replace individual objc classes one at a time, and is ABI stable, but has no C++ interaction.

    • Swift is far more like C# than Rust in terms of memory management. Sure it uses ARC but arguably that makes it not suitable for kernel level stuff.

      2 replies →

  • Yes, but what about these huge legacy codebases like the iOS kernel? I assume we will have to deal with this type of vulnerability for years to come...

  • Could also fire everyone on the C/C++ standards bodies and replace them with people willing to add arrays as a first-class data type.

    • I had that argument with the C standards people a decade ago. [1] Consensus was that it would work technically but not politically. The C++ people are too deep into templates to ever get out.

      The basic trick for backwards compatibility is that all arrays have sizes, but you get to specify the expression which represents the size and associate it with the array. So you don't need array descriptors and can keep many existing representations.

      Also, if you have slices, you rarely need pointer arithmetic. Slices are pointer arithmetic with sane semantics.
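
      A slice in this sense is just a pointer-length pair whose operations are checked; a toy version (illustrative, not the paper's exact proposal; C++20's `std::span` is the standardized cousin, minus mandatory checks):

```cpp
#include <cassert>
#include <cstddef>

// Minimal checked slice: indexing and re-slicing trap on out-of-bounds
// access instead of silently corrupting memory.
template <typename T>
struct Slice {
    T* data;
    size_t len;

    T& operator[](size_t i) {
        assert(i < len); // bounds check on every access
        return data[i];
    }

    // "Pointer arithmetic with sane semantics": advancing a pointer becomes
    // taking a subslice, and the new bounds are checked against the old.
    Slice subslice(size_t start, size_t count) {
        assert(start <= len && count <= len - start);
        return {data + start, count};
    }
};
```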

      I'm tired of seeing decade after decade of C/C++ buffer overflows. It speaks badly of software engineering as a profession.

      [1] http://www.animats.com/papers/languages/safearraysforc43.pdf

      8 replies →

    • I'm not sure what exactly you are trying to say. As far as I can tell, there are indeed safe variants of arrays in the standard - both static and dynamic. People just choose not to use them for some arbitrary reasons.

> What's more, with directional antennas, higher transmission powers and sensitive receivers the range of such attacks can be considerable.

I'm reminded of ye olde Gumstix BlueSniper rifle. Back in the early 2000s there was a series of exploits against Bluetooth stacks. The standard response from the industry was that the attacks weren't practically exploitable due to the low power of typical Bluetooth devices.

The BlueSniper was a cantenna + gumstix SBC specifically constructed for the purpose of demonstrating the low cost of the threat.

What I don’t understand is:

Apple sits on this giant stack of unused money [1]. Why don’t they get the best security researchers in the world, pay each of them north of $1M / year in salary, and create the ultimate red team whose only task is to try to hack Apple devices?

If they get a team of 1000(!) people, each with $1M(!) in salary, that would be less than 0.5%(!) of their revenue in 2019 [2].

Wouldn’t that be worth it?

[1] https://fortune.com/2018/01/18/apple-overseas-cash-repatriat...

[2] https://www.statista.com/statistics/265125/total-net-sales-o...

  • There are dozens, perhaps hundreds of people working at the level we're talking here --- vulnerability research is highly specialized. So the better question is perhaps why Apple doesn't build a program to train 1000 researchers to compete.

    • I get the impression that while Apple is world-class at HW ops, they are very mediocre at people ops. (and I get the impression that Google is the opposite)

      5 replies →

    • That’s of course another option.

      I am just surprised because there are so many problems in tech where throwing money at it is not going to improve things.

      However in this case, shouldn’t they be able to attract the best in the world just by turning the money gauge up?

      If you are one of the most highly specialised vulnerability researchers in the world, would you seriously reject a $10m / year offer from Apple where you’d be able to spend all your time doing what you love with the only condition being that you report findings to Apple?

  • It mystifies me too. I'm an independent security researcher that currently has a vulnerability in macOS with grave implications. I'd like to sell it to Apple for a fair price, but their security email is a dead end. Every time I've reached out they want me to disclose all of my research up front, no price negotiation. After doing as many bug bounties as I have, I've been burned one too many times by companies giving ~$200 for weeks or months of effort (less than minimum wage of course) on P1/P2 vulnerabilities in their infrastructure. I'm talking to a few groups who are willing to negotiate a price with me, but I can't be sure of their intent. I want to get it patched, but it's difficult when Apple themselves are disinterested.

    • They set out what they think is a fair price here: https://developer.apple.com/security-bounty/

      Do you have any reason to think that Apple could stiff people that submit vulnerabilities to them?

      My understanding of game theory says that Apple’s incentives are to try to act with integrity and to pay their bounties. There may be corner cases where confusion reigns, and where Apple mistake someone for a fraud, but I would presume they need to be very rare – otherwise Apple’s reputation as a buyer would suffer and people would sell to other buyers who cared for their reputation better (and every vulnerability sold to a third party has a high expected cost to Apple. Edit: on second thoughts maybe the cost to Apple is fairly low - certainly the maximum bounty size says that).

      Edit: I agree that Apple stating a maximum payout is hardly helpful. I presume third party buyers indicate a minimum value they will pay depending on the value of the vulnerability to them. There is a market here, and it isn’t clear that Apple is willing to pay market prices, perhaps because too many people/teams give their vulnerabilities to Apple for $0 (e.g. projectzero!)

      1 reply →

  • You can hire all of the smart people willing to work for you, but there will always be someone smarter not able to join you. That's either because they don't like you, or something else preventing them. Either way, you cannot guarantee that you will catch 100% of the vulns 100% of the time.

  • No, because there is no reason to assume that would materially improve security. Do you think a bulletproof vest manufacturer hiring the best gunmakers in the world would dramatically improve their bulletproof vests? It could help, and it is certainly essential to have good bullet/gun engineers on staff, but you would probably be better off hiring people who know materials science and the actual job of making bulletproof vests.

    It would be far more beneficial for them to just use the tried-and-true techniques that have already been deployed for decades in high-reliability/high-security systems. In the event that such things are too onerous, they could run development methodology tests to remove the elements that provide the least security ROI to produce lesser, but still good, systems at a reduced cost. This would be far more likely to produce a good outcome than taking the standard high development velocity commercial methodology that has failed to produce meaningful security despite decades of attempts and enhancing it to be a high security process. At least in the former you can be reasonably confident you get good security, though possibly at a higher cost than desired. In the latter, although the cost may be less, the security is a complete unknown since you are using a new process invented by people who have never used, let alone made, a high security process before and it is a class of strategy that has literally never succeeded over multiple decades of attempts. Not to say it could not happen, it took hundreds or possibly even thousands of years of failed attempts before heavier-than-air flight was cracked, but they would probably be better served just using the existing techniques that are known to solve the problem.

  • Because there are always more bugs to be found in unsound software.

    This finding is not about this single bug, it's just that someone bothered to scrape the surface.

    (Note that 99% of the effort went into crafting the demo exploit once the vulnerability was found, which is basically wasted effort in the context of eliminating vulnerabilities - the vulnerability finding was easy)

  • Well, they might be trying. They recently hired Brandon Azad from p0, who is definitely up there. The problem is that a lot of high-calibre security people simply don't want to work for Apple. I suppose it's out of spite for all their shitty policies.

  • You already know the answer. Shoveling billions of dollars into a pit that doesn't help Apple make even more money, is never going to happen.

    • I am actually not convinced about your assumption that it wouldn’t make them any money in the long-term.

      My theory is: people that are quite tech savvy (like the HN crowd) would look at such an effort quite favourably and these folks are often micro-influencers when it comes to buying decisions of their direct peers.

      Just an anecdote, but my entire family uses Apple devices, because I am the go-to computer guy in that circle and I advised them to buy Apple. The company that I co-founded used Apple hardware and so on.

      Maybe that is just wishful thinking and it is hard to quantify, but I’d like to believe that increasing your reputation with developers (who in itself are a niche) helps you grow revenue in the long-term nevertheless.

      2 replies →

> A fairly trivial buffer overflow programming error in C++ code in the kernel parsing untrusted data, exposed to remote attackers.

Apparently Apple failed in their hiring process to get those mythical developers that never write such kind of errors in production C or C++ code. /s

  • People need to accept that the problem is the language. We will never solve the developer problem, but we will/can/have produced languages that make these types of errors impossible/extremely unlikely.

'AWDL is an Apple-proprietary mesh networking protocol designed to allow Apple devices like iPhones, iPads, Macs and Apple Watches to form ad-hoc peer-to-peer mesh networks. ... And even if you haven't been using those features, if people nearby have been then it's quite possible your device joined the AWDL mesh network they were using anyway.'

Wow, so Apple was ahead of Amazon's Sidewalk with AWDL. Can you disable this?

  • > Wow, so Apple was ahead of Amazon's Sidewalk with AWDL.

    Not exactly. The wording in the article implies that AWDL forms some kind of multi-hop network topology, but it doesn’t - it just enables nearby devices to communicate with each other directly at Wi-Fi speeds without the burden of pairing (like Wi-Fi Direct) or being associated with the same Wi-Fi network.

    This is used not just in AirDrop but also in the Multipeer Connectivity Framework, AirPlay 2 and the Continuity framework. The standard discovery mechanism for these services is mDNS over AWDL, so for a device to browse for or advertise these services, it needs to be aware of other nearby AWDL neighbours first. (For example, you can browse for and discover other nearby AirDrop devices even if you don’t allow incoming AirDrop enabled yourself.)

    It’s also worth noting that Apple devices very strictly do not send or receive AWDL traffic when they are locked/asleep, and will often even stop listening on the AWDL social channels when there are no services being advertised or in use.

  • It looks like disabling airdrop doesn't do anything:

    > All iOS devices are constantly receiving and processing BLE advertisement frames like this. In the case of these AirDrop advertisements, when the device is in the default "Contacts Only" mode, sharingd (which parses BLE advertisements) checks whether this unsalted, truncated hash matches the truncated hashes of any emails or phone numbers in the device's address book.

    Then follows the section on brute-forcing 2 bytes (only) of a SHA256 hash.

    Spray noise on channels 6 and 44?
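
    To see why a 2-byte truncated hash offers so little protection: only 65,536 distinct values exist, so enumerating a plausible phone-number space finds a match almost immediately. A toy sketch (`std::hash` stands in for SHA-256, and the number format is invented):

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Truncate a hash to 2 bytes, mimicking the AirDrop contact check.
uint16_t truncated(const std::string& s) {
    return static_cast<uint16_t>(std::hash<std::string>{}(s) & 0xFFFF);
}

// Enumerate a hypothetical 7-digit local-number space until the truncated
// hash matches; the expected hit comes within ~2^16 attempts.
std::string brute_force(uint16_t target) {
    for (int n = 0; n < 10000000; ++n) {
        // 10000000 + n is always 8 digits; dropping the first digit gives a
        // zero-padded 7-digit suffix.
        std::string candidate = "+1555" + std::to_string(10000000 + n).substr(1);
        if (truncated(candidate) == target) return candidate;
    }
    return ""; // unreachable for targets drawn from this space
}
```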

    • I don't think that proves what you think it does - that's with AirDrop on, but in a limited mode.

      If you turn AirDrop/Bluetooth off, you may well disable this.

      On my phone:

      - If you disable Bluetooth in the notification tray, then it goes to Bluetooth "Not Connected", but not Off.
      - If you disable Bluetooth in settings, AirDrop automatically goes into "Receiving Off".
      - If you then enable AirDrop, it'll automatically turn Bluetooth on.

      So I don't think it's true that you can't disable it - unless the UI is misleading about Off.

A bit OT - how do I work on developing the skill set necessary to find vulnerabilities like these? Should I take some particular courses, or follow some other “track” of sorts? At the moment, I have an undergraduate degree in Computer Science, and I’d say I’m a fairly OK programmer.

  • Check out LiveOverflow on YT. Maybe play some CTFs, but don't do that super seriously, just enough to get you hooked on binary exploitation. They're fun, especially if you find some teammates to cooperate with.

    And then just, well, practice. A lot of practice. Mostly driven by curiosity about how things work - bugs will then just start to pop up and you are free to investigate whatever piques your interest. The more likely you are to just open up a debugger when a piece of software annoys you and try to binary patch it, the closer you are to being a security researcher :).

    There aren't many books or courses on this; low-level hacking is something that you kind of just learn as you go. But, for instance, if you never touched gdb/lldb, or never looked at assembly code, or never wrote C - you should investigate that first as base skills.

    • As for books, The Art of Software Security Assessment is frequently recommended, including by members of Project Zero.

  • There is an excellent pre-packaged VM with levels of challenges that take you through the basics of exploitation to quite advanced levels called "Modern Binary Exploitation" [0]. I would highly recommend it.

    You can also do the challenges using IDA/Ghidra instead of looking at the source for a proper challenge and I recommend doing this initially for each challenge.

    [0] https://github.com/RPISEC/MBE

  • I'd recommend CTF'ing a bit stronger than the other commenter. While there can be a distinct gap between the vulnerabilities in ctfs and real world applications, CTFs provide a great means of deliberate practice (work on a problem, potentially figure it out, and then read other peoples' write-ups after the competition ends).

    Check out https://ctftime.org/ for a list of CTFs. There are also intro CTFs like https://picoctf.org/

    • I didn't mean to discourage anyone from playing CTFs, I just became jaded by seeing the same kind of heap feng shui tasks over and over and over again :). You know, the note-management linked list task with a simple CLI menu. Not to mention the proliferation of 0/1day tasks, which are IMO just lazy.

      Do play CTFs. Just pick the fun challenges. pwnable.kr used to have some good stuff if you want to level up.

      1 reply →

It would be amazing to plot the 2.4 GHz amplitude vs. time series plot of this exploit.

Think about it: an ocean of electrons in the copper WiFi antenna bumps along with a certain guiding EM wave and, in so doing, inadvertently causes the information-moving electrons in the silicon crystal to disconnect from the electrons being pushed out of the Li-ion battery.

This amplitude fluctuation could in principle have been broadcast by the motions of stars in the universe, as astronomy peers into the deep with these frequencies [0].

In the future, one could imagine a bad actor with control over a global network of low-orbit satellites spewing out this code for decades, preventing such devices from being turned on long enough to receive updates, deactivating billions of dollars of human capital.

[0]: http://www.astrosurf.com/luxorion/radioastro-frequencieslist...

How many people on earth can find and exploit something like this? Less than 100, maybe less than 1000?

  • Probably more than a hundred; there are teams of dozens at the good corporate security groups and an unknown number working for governments and other organizations that don’t appear as publicly.

  • Many thousands should be able to find this kind of security bug, potentially including state-backed hackers from China and Russia.

I'd be really curious to know whether the phone can be exploited while on flight mode.

  • I'm pretty sure that AirDrop works when you turn on Wi-Fi and Bluetooth while using airplane mode.

    • I checked and airplane mode seems to disable wifi and 4g but not bluetooth. Airdrop refuses to work without wifi. Not sure to what extent wireless is actually turned off for airplane mode now though.

      3 replies →

Despite the rather explicit explanation, I still have absolutely no idea how people go about deciding how and where to start on such insane exploits.

Perhaps a dumb question, but why don't things like signed pointers prevent this? Are they just not that good of a security measure?

  • The article explains bypassing exactly this (PA/PAC).

    > Vulnerability discovery remains a fairly linear function of time invested. Defeating mitigations remains a matter of building a sufficiently powerful weird machine. Concretely, Pointer Authentication Codes (PAC) meant I could no longer take the popular direct shortcut to a very powerful weird machine via trivial program counter control and ROP or JOP. Instead I built a remote arbitrary memory read and write primitive which in practise is just as powerful and something which the current implementation of PAC, which focuses almost exclusively on restricting control-flow, wasn't designed to mitigate.

    Signed pointers are just a mitigation. With enough time to find other primitives/constructs (from less severe but more common bugs) you will work around them.

Can someone summarize the exploit?

  • AWDL is a wireless protocol that Apple uses for things like AirDrop. In the kernel's AWDL handling code there is a fixed 60-byte buffer that can be overflowed by an attacker-supplied buffer of up to 1024 bytes. Using other bugs and weak address randomization, Ian Beer of Google Project Zero discloses kernel memory and then constructs kernel read and write primitives. He then demonstrates how this can be used to gain privileged code execution in userspace by launching the calculator and writing a program to exfiltrate the user's photos.
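    The bug class behind that summary is the classic "trust the length field in the packet" mistake. Here is a minimal C sketch of the pattern; all struct and function names are hypothetical, chosen only to mirror the shape of the flaw (a fixed 60-byte destination and an attacker-controlled length up to 1024), not Apple's actual code:

    ```c
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Hypothetical TLV as parsed off the air; `len` is fully
       attacker-controlled and can be far larger than the destination. */
    struct sync_tree_tlv {
        uint8_t  type;
        uint16_t len;
        uint8_t  payload[1024];
    };

    /* Hypothetical per-peer state with a fixed 60-byte buffer,
       mirroring the size mismatch described in the writeup. */
    struct peer_state {
        uint8_t sync_tree[60];
    };

    /* The vulnerable pattern trusts tlv->len directly:
       memcpy(peer->sync_tree, tlv->payload, tlv->len);
       which is an out-of-bounds write whenever len > 60. */

    /* The fix is a bounds check before the copy. */
    int parse_sync_tree(struct peer_state *peer,
                        const struct sync_tree_tlv *tlv)
    {
        if (tlv->len > sizeof(peer->sync_tree))
            return -1;  /* reject oversized TLV from the radio */
        memcpy(peer->sync_tree, tlv->payload, tlv->len);
        return 0;
    }

    int main(void)
    {
        struct peer_state peer = {0};
        struct sync_tree_tlv tlv = { .type = 0x14, .len = 1024 };

        printf("%d\n", parse_sync_tree(&peer, &tlv)); /* -1: rejected */
        tlv.len = 60;
        printf("%d\n", parse_sync_tree(&peer, &tlv)); /*  0: accepted */
        return 0;
    }
    ```

    In the unchecked version the copy length comes straight from untrusted radio input, which is exactly why the grandparent comment calls it "memory management 101".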

Are Androids without crapware as insecure as iPhones?

I wonder if the daily HN article about Apple failing to be secure is a result of 1 OS, 1 phone, whereas no one is going to put in the effort to find an exploit on a phone with 1% market share.

Similar question for desktops.

  • Android has had several critical flaws recently. The ones I can remember are Stagefright and Dirty COW. Stagefright was easily remotely triggerable since it was in a media library that runs on received media.

    The main difference between the two that I have seen is that iOS users get an update that fixes the issue, often even after their device has stopped getting feature updates, while many Android users are on kernels that haven't received an update in years.

    • Stagefright was ~5 years ago now, though. I remember it, because the company I worked for at the time flipped out and banned all Android phones from their network for over a year.

      It was fantastic; I got a whole year of not being able to see work emails after I went home, and then they let us opt out of the invasive MDM software that they wanted to put on Android phones to let them access corporate email. All for a bug that my phone wasn't even vulnerable to.

      By the time I left, I had gone 4 years without ever responding to unexpected evening emails. And now that I know it's possible, I'm never going back! :)

    • Wow, that Dirty COW exploit affects Linux too? My server was at risk...

      Although none of those are recent like daily security flaws we see on HN.

      1 reply →

  • The problem with Android is that a lot of software in the end OS is not open source and is delivered as binaries from component manufacturers (the GPU drivers tend to be the worst; they almost universally come from Qualcomm since most phones now use the same series of SoCs). Once the hardware is released these are rarely, if ever, updated, which means the vulnerabilities aren't patched. The phone manufacturers are just as helpless as the community in this situation. Project Treble mitigates this to some degree, but the individual software components still can't be updated.

> After a day or so of analysis and reversing I realize that yes, this is in fact another exploitable zero-day in AWDL. This is the third, also reachable in the default configuration of iOS.

Holy shit.