Comment by mpeg

7 days ago

"Google Chromium CSS contains a use-after-free vulnerability that could allow a remote attacker to potentially exploit heap corruption via a crafted HTML page. This vulnerability could affect multiple web browsers that utilize Chromium, including, but not limited to, Google Chrome, Microsoft Edge, and Opera."

That's pretty bad! I wonder what kind of bounty went to the researcher.

> That's pretty bad! I wonder what kind of bounty went to the researcher.

I'd be surprised if it's above $20k.

Bug bounty rewards are usually criminally low; doubly so when you consider the effort usually involved in not only finding serious vulns, but demonstrating a reliable way to exploit them.

  • Here is a comment that really helped me understand bug bounty payouts: https://news.ycombinator.com/item?id=43025038

    • Everyone should read this comment, it does a really eloquent job explaining the situation.

      The fundamental thing to understand is this: The things you hear about that people make $500k for on the gray market and the things that you see people make $20k for in a bounty program are completely different deliverables, even if the root cause bug turns out to be the same.

      Quoted gray market prices are generally for working exploit chains, which require increasingly complex and valuable mitigation bypasses which work in tandem with the initial access exploit; for example, for this exploit to be particularly useful, it needs a sandbox escape.

      Developing a vulnerability into a full chain requires a huge amount of risk - not weird crimey bitcoin in a back alley risk like people in this thread seem to want to imagine, but simple time-value risk. While one party is spending hundreds of hours and burning several additional exploits in the course of making a reliable and difficult-to-detect chain out of this vulnerability, fifty people are changing their fuzzer settings and sending hundreds of bugs in for bounty payout. If they hit the same bug and win their $20k, the party gambling on the $200k full chain is back to square one.

      Vulnerability research for bug bounty and full-chain exploit development are effectively different fields, with dramatically different research styles and economics. The fact that they intersect sometimes doesn't mean that it makes sense to compare pricing.

      8 replies →

    • This underestimates the adaptability of threat actors. Massive cryptocurrency thefts from individuals have created a market for a rather wide range of server-side bugs.

      Got a Gmail ATO? Just run it against some of the leaked cryptocurrency exchange databases, automatically scan for wallet backups and earn hundreds of millions within minutes.

      People are paying tens of thousands for “bugs” that allow them to confirm if an email address is registered on a platform.

      Even trust isn’t much of a problem anymore, well-known escrow services are everywhere.

  • I think a big part of "criminally low" is that you'll make much more money selling it on the black market than getting the bounty.

    • I read this often, and I guess it could be true, but those kinds of transactions would presumably go through DNMs / forums like BF and the like. Which means crypto, and full anonymity. So either the buyer trusts the seller to deliver, or the seller trusts the buyer to pay. And once you reveal the particulars of a flaw, nothing prevents the buyer from running away (this also occurs regularly on legal, genuine bug bounty programs - they'll patch the problem discreetly after reading the report but never follow up, never mind pay; with little recourse for the researcher).

      Even revealing enough details about the flaw, but not everything, to convince a potential buyer would be detrimental to the seller, as the level of detail required to convince would likely massively simplify the work of the buyer should they decide to try and find the flaw themselves instead of buying. And I imagine many of those potential buyers would be state actors or organized criminal groups, both of which do have researchers in house.

      The way this trust issue is (mostly) solved on drug DNMs is through the platform itself acting as an escrow agent; but I suspect such a thing would not work as well for selling vulnerabilities, because the volume is much lower, for one thing (preventing a high enough volume for reputation building), and the financial amounts are generally higher, for another.

      The real money to be made as a criminal alternative, I think, would be to exploit the flaw yourself on real-life targets. For example, to drop ransomware payloads; these days ransomware groups even offer franchises - they'll take, say, a 15% cut of the ransom and provide assistance with laundering/exploiting the target/etc, and you claim your infection in the name of their group.

      6 replies →

  • > but demonstrating a reliable way to exploit them

    Is this a requirement for most bug bounty programs? Particularly the “reliable” bit?

So basically Firefox is not affected ?

  • The listed browsers are basically skins on top of the same chromium base.

    It’s why Firefox and Safari are so important despite HN’s wish they’d go away.

    • HN doesn't want Firefox to go away. HN wants Firefox to be better, more privacy/security focused, and to stop trying to copy Chrome out of the misguided hope that being a poor imitation will somehow make it more popular.

      Sadly, Mozilla is now an adtech company (https://www.adexchanger.com/privacy/mozilla-acquires-anonym-...) and by default Firefox now collects your data to sell to advertisers. We can expect less and less privacy for Firefox users as Mozilla is now fully committed to trying to profit from the sale of Firefox users' personal data to advertisers.

      9 replies →

    • HN wants Firefox but with better stewardship and fewer misdirected funds.

      Mozilla - wrongly - believes that the majority of FF users care more about Mozilla's hobby projects than about their browser.

      That's why - as far as I know - to this day it is impossible to directly fund Firefox. They'd rather take money from Google than focus on the one thing that matters.

      12 replies →

    • Particularly weird impulse for technically inclined people…

      Although I must admit to the guilty pleasure of gleefully using Chromium-only features in internal apps where users are guaranteed to run Edge.

      1 reply →

  • It's pretty hard to have an accidental use-after-free in the Firefox CSS engine because it is mostly safe Rust. It's possible, but very unlikely.

    • That came to my mind as well. CSS was one of the earliest major applications of Rust in Firefox. I believe that work was when the "Fearless Concurrency" slogan was popularized.

      1 reply →

    • Firefox and Safari developers dared the Chromium team to implement :has() and Houdini and this is the result!

      /s

Presumably this affects all electron apps which embed chrome too? Don’t they pin the chrome version?

  • Yes, but it's only a vulnerability if the app allows rendering untrusted HTML or visiting untrusted websites, which most Electron apps don't.

    • Lots of apps like Slack and Discord will show you an OpenGraph preview of a website if you post a link. I could of course be wrong, but I expect you could craft an exploit that just required you to be able to post the link - then it would render the preview and trigger the problem.

      Secondly, as a sibling pointed out, lots of apps have HTML ads, so if you show a malicious ad it could also trigger. I’m old enough to remember the early Google ads, which Google made text-only specifically because they said ads were a possible vector for malware. Oh how the turns have tabled.

    • Pretty sure I've had Slack show me whole web pages without kicking me out to the mobile browser.

    • Except: Spotify (through ads), Microsoft Teams (through teams apps), Notion (through user embedded iframes), Obsidian (through user embedded iframes), VSCode (through extensions), etc...

Yeah, but let's keep downplaying use-after-free as something not worth eliminating in 21st century systems languages.

  • I love Rust, but honestly I am more scared about supply chain attacks through cargo than memory corruption bugs. The reason being that supply chain attacks are probably way cheaper to pull off than finding these bugs.

    • But this is irrelevant. If you're afraid of third-party code, you can just... choose not to use third-party code? Whereas if I'm afraid of memory corruption in C, I cannot just choose not to have memory corruption; I must instead choose not to use C. Meanwhile, Chromium uses tons of third-party Rust code, and has thereby judged the risk differently.

      6 replies →

    • I'm sympathetic to the supply chain problem - I even wrote a whole thing on it: https://vincents.dev/blog/rust-dependencies-scare-me/

      That being said, as many above have pointed out, you can choose not to bring in dependencies. The Chrome team already does this with the font parser library: they limit dependencies to 1 or 2 trusted ones with little to no transitive dependencies. Let's not pretend C / C++ is immune to this; we had the xz vuln not too long ago. C / C++ has the benefit of a culture that doesn't use as many dependencies, but the problem still exists. With the increase of code in the world due to AI, this is a problem we're going to need to fix sooner rather than later.

      I don't think the supply chain should be a blocker for using Rust, especially when one of the best C++ teams in the world, with good funding, struggles to always write perfect code. The Chrome team has shown precedent for moving to Rust safely and avoiding dependency hell; they'll just need to do it again.

      They have hundreds of engineers, many of whom are very gifted; hell, they can write their own dependencies!

      4 replies →

    • If you can bring in 3rd party libraries, you can be hit with a supply chain attack. C and C++ aren't immune; it's just harder to pull off due to dependency management being more complex (meaning you'll naturally work with fewer dependencies).

      20 replies →

    • The statistics we have on real-world security exploits show that most security exploits are not coming from supply chain attacks, though.

      Memory safety related security exploits happen in a steady stream in basically all non-trivial C projects, but supply chain attacks, while possible, are much more rare.

      I'm not saying we shouldn't care about both issues, but the idea is to fix the low hanging fruit and common cases before optimizing for things that aren't in practice that big of a deal.

      Also, C is not inherently invulnerable to supply chain attacks either!

  • https://materialize.com/blog/rust-concurrency-bug-unbounded-...

    Edit: Replying to ghusbands:

    'unsafe' is a core part of Rust itself, not a separate language, and it occurs often in some types of Rust projects or their dependencies. For instance, to avoid bounds checking without relying on compiler optimizations, some Rust projects use Vec::get_unchecked, which is unsafe. One occurrence in code is here:

    https://grep.app/pola-rs/polars/main/crates/polars-io/src/cs...

    And there are other reasons than performance to use unsafe, like FFI.

    Edit2: ghusbands had a different reply when I wrote the above reply, but edited it since.

    Edit3: Ycombinator prevents posting relatively many new comments in a short time span. And ghusbands is also wrong about his answer not being edited without him making that clear.

    • Those kinds of arguments are like posting news about people still dying while wearing seat belts and helmets, ignoring the lives that were saved by having them on.

      By the way, I have been having these kinds of arguments since Object Pascal, back when using languages safer than C was called straitjacket programming.

      Ironically, most C wannabe replacements are Object Pascal/Modula-2-like in the safety they offer, except we know better 40 years later for the use cases those languages still had no answer for.

      3 replies →

    • Yes, once you use 'unsafe' to bypass the safety model, you don't get safety.

      Edit: If you reply with a reply, rather than edits, you don't get such confusion.

      1 reply →

It would also require a sandbox escape to be a meaningful vulnerability.

Unfortunately, "seen in the wild" likely means that they _also_ had a sandbox escape, which likely isn't revealed publicly because it's not a vulnerability under normal execution (i.e., if the heap were not already corrupted, no vulnerability exists).

  • I'd bet that the sandbox escape is just in the underlying operating system kernel and therefore isn't a matter for Chromium to issue a CVE.