
Comment by kryogen1c

12 hours ago

>proactive [...] security program

Idk how proactive patching an exploited-in-the-wild unauth RCE is, but pr statements gonna pr i guess.

>This [...] vuln is not a breach or compromise of MongoDB

IANAL, but this seems like a pretty strong stance to take? Who exactly are you blaming here?

>vulnerability was discovered internally
>detected the issue

Interesting choice of words. I wonder if their SIEM/SOC discovered a compromise, or if someone detected a tweet.

>December 12–14 – We worked continuously

It took 72 clock hours, presumably hundreds of man-hours, to fix a malloc use-after-free and a cstring null-termination bug? Maybe the user input field length part was a major design point??

>dec 12 "detect" the issue, dec 19 cve, dec 23 first post

Boy this sure seems like a long time for a first communication for a guaranteed-compromise-if-internet-facing bug.

Not sure there's a security tool in the world that would stop data exfiltration via protocol error logs.
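
For anyone who hasn't followed the details, here's roughly what that exfiltration path looks like. This is a toy Python sketch, not MongoDB's actual parser or wire format; the field contents and the adjacent "secret" are made up:

    # Toy illustration of the failure mode, NOT MongoDB's real code. Assume a
    # server-side buffer holding a client-supplied field that is supposed to be
    # NUL-terminated, with unrelated sensitive data sitting right after it.

    def parse_cstring(server_buffer: bytes, start: int, declared_len: int) -> bytes:
        """Naive parser: scans for the NUL terminator instead of stopping at
        the declared field boundary."""
        end = server_buffer.index(b"\x00", start)   # keeps scanning past the field
        field = server_buffer[start:end]
        if end - start > declared_len:
            # The "helpful" protocol error echoes back everything it scanned,
            # including bytes that were never part of the client's field.
            raise ValueError(f"error: {{cstring payload {field!r}}} broke")
        return field

    secret = b"admin:hunter2\x00"   # made-up sensitive data adjacent in the buffer
    client_field = b"hello"         # attacker deliberately omits the trailing NUL
    server_buffer = client_field + secret

    try:
        parse_cstring(server_buffer, start=0, declared_len=len(client_field))
    except ValueError as e:
        print(e)   # error: {cstring payload b'helloadmin:hunter2'} broke  <- leaked

On the wire, a monitoring tool just sees a malformed request go in and a protocol error come back, which is the point.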

" >proactive [...] security program Idk how proactive patching an exploited-in-the-wild unauth RCE is, but pr statements gonna pr i guess. "

If you follow their history, especially the Jepsen analysis and the whole back-and-forth, you will find a pattern.

> IANAL, but this seems like a pretty strong stance to take? Who exactly are you blaming here?

It's a factual statement, unless you know of some information that indicates MongoDB was breached. I think you mistook "MongoDB" there to mean the software instead of the company. They meant the company: their systems and infrastructure were not compromised.

> Interesting choice of words. I wonder if their SIEM/SOC discovered a compromise, or if someone detected a tweet.

I highly doubt that. It could be a crash someone noticed, a code audit, an internal bug bounty, etc. Either way, I wouldn't ascribe deceit to them without proof; if it was an external source, give them the benefit of doubt that they'd have said so.

> It took 72 clock hours, assumably hundreds of man hours, to fix a malloc use after free and cstring null term bug? Maybe the user input field length part was a major design point??

You are familiar with things like SOC and SIEM, and you're confused by this? Are you familiar with Incident Response? The act of editing the code in a text editor and committing it to a branch isn't what took 72 hours.

> Boy this sure seems like a long time for a first communication for a guaranteed-compromise-if-internet-facing bug.

It does not, far from it.

> Not sure there's a security tool in the world that would stop data exfiltration via protocol error logs.

Maybe not prevent, but detecting and attempting to interdict/stop is certainly possible. That's what SIEMs do if they're adequately configured. The drawback might be a considerable volume of false hits. It might be better to simply reduce exposure to the internet, or remove it entirely. Just pointing out that detection, at least, is possible, even with 0-days like this.
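
For a rough sense of what such a rule looks like (illustrative only; a real SIEM such as Splunk or Elastic would express this in its own query language, and the hosts and byte counts below are invented): baseline outbound bytes per client for the mongod host and flag anything far outside that baseline.

    from statistics import median

    def flag_exfil_candidates(egress_bytes_per_client: dict[str, int],
                              sensitivity: float = 10.0) -> list[str]:
        """Flag clients whose outbound volume is `sensitivity` times the typical
        client. Deliberately crude: lowering `sensitivity` catches more, at the
        cost of the false hits mentioned above."""
        typical = median(egress_bytes_per_client.values())
        return [client for client, sent in egress_bytes_per_client.items()
                if sent > sensitivity * typical]

    # Hypothetical hourly egress totals for the mongod host, taken from flow logs.
    hourly = {
        "10.0.4.12": 48_000_000,      # app server, normal working set
        "10.0.4.13": 51_000_000,
        "198.51.100.7": 900_000_000,  # unknown source pulling ~18x the baseline
    }
    print(flag_exfil_candidates(hourly))   # ['198.51.100.7']

Interdiction would then be a firewall rule or killing the session, which is exactly where the false-hit tradeoff bites.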

  • >I think you mistook "MongoDB"

    I must have; the sentence does not make sense to me. Here it is, shortened: "this vuln in mongodb server does not impact mongodb, managed mongodb server, or our systems". If the first clause is referring to their systems, why do they say the same thing in the third clause?

    Also, I just noticed: how come they say Atlas wasn't affected, but their timeline says they patched it?

    >give them the benefit of doubt that they'd have said so

    Statements like this are basically legal admissions of guilt, so I expect there to be as little truth in them as possible.

    >You are familiar with things like SOC and SIEM, and you're confused by this?

    I work in IT, I'm not a coder... so yes :) Hundreds of hours seems excessive. Remember, this isn't the safe-deployment or rollout plan; that's the next block of time. Hundreds of man-hours is more than one person's full month of work. Do you expect it to take you a whole, dedicated month to fix 1 bug at a time?

    >That's what SIEMs do if they're adequately configured.

    This is a bit of a no true Scotsman. The intended error log is "error: {cstring payload nullterm} broke" and the mongobleed log is "error: {cstring payload MISSINGNULLTERM cstring payload nullterm} broke". Those two things look identical; how is any amount of configuration supposed to catch that?

    • > Do you expect it to take you a whole, dedicated month to fix 1 bug at a time?

      Like I said, the bugfix is not what takes long. They have to figure out the extent of the vulnerability, do regression testing, and make sure they don't introduce more issues. And _then_ they can begin sending embargo notifications and let their customers prep, patch, etc., while in parallel they analyze in-the-wild exploitation. They have to support all the paying customers who are panicking and want answers; you're not the only one scrutinizing every word they say and demanding answers.

      They talked to lawyers plenty during that time. If you know that legal admissions of guilt are a concern, then you should know they're publicly traded and that SOX plus 8-K filings are a huge deal. Their CISO could literally end up in prison if he screws this up. So yeah, it takes a couple of days.

      They (likely) have to have outside parties support their response, and even without that, "who did what", "what was affected", "how was it abused", "how can it be prevented": all of that needs to be answered. Then there is lots of back and forth on the specifics of the wording to the public/PR, what to tell investors, customers, etc.

      > This is a bit of a no true Scotsman.

      There are different detection strategies possible. Your approach could be done: when an error message that hasn't been seen previously suddenly shows up, it could be flagged for follow-up investigation, contact MongoDB support, etc. That's not what I meant, though; you mentioned exfil, and what I meant is that abnormal data transfers from 'mongod' could be caught. Most modern SIEMs do this out of the box if you feed them right and well.
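
      A crude sketch of that first idea, flagging error lines whose shape hasn't been seen before (illustrative only; the sample log lines are invented and no particular SIEM product is implied):

          import re

          known_shapes: set[str] = set()

          def shape(line: str) -> str:
              """Reduce a log line to a rough shape: digits and long opaque
              tokens are masked so routine variation doesn't alert."""
              line = re.sub(r"\d+", "<num>", line)
              line = re.sub(r"[A-Za-z0-9+/=]{16,}", "<blob>", line)
              return line

          def is_novel(line: str) -> bool:
              """True if this line's shape has never been seen before."""
              s = shape(line)
              if s in known_shapes:
                  return False
              known_shapes.add(s)
              return True

          # Baseline period: learn the shapes of ordinary error lines
          # (hypothetical examples, not real mongod output).
          for normal in [
              "error: invalid cstring in field 'name' at offset 132",
              "error: invalid cstring in field 'name' at offset 988",
          ]:
              is_novel(normal)

          print(is_novel("error: invalid cstring in field 'name' at offset 41"))        # False
          print(is_novel("assertion failure while decoding message section checksum"))  # True, follow up

      If the leaked bytes happen to look like a normal payload, the shapes collapse to the same thing, which is your point, so I'd treat this as a supplement to the volume-based angle rather than a replacement.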

> Boy this sure seems like a long time for a first communication for a guaranteed-compromise-if-internet-facing bug.

If you still run MongoDB facing the internet you have bigger problems.

>>This [...] vuln is not a breach or compromise of MongoDB

>IANAL, but this seems like a pretty strong stance to take? Who exactly are you blaming here?

You elide the context that explains it. It's a vulnerability in their MongoDB Server product, not a result of MongoDB the company/services being compromised and secrets leaked.

It wasn't an RCE.

  • Oh goodness, where's my head at, thank you. Too late to edit, but you are correct. Memory exfiltration, potentially containing passwords and secrets, leading to privilege escalation. Not an RCE.

    • >Memory exfiltration, potentially containing passwords and secrets

      And potentially not, too. Totally overhyped.

> Idk how proactive patching an exploited-in-the-wild unauth RCE is, but pr statements gonna pr i guess.

Describing their response as "proactive" is about what you'd expect from a company that famously used unacknowledged writes to game benchmarks during its peak hype phase. Ironically, Mongo has for years been slower at JSON queries, the very thing at which it's supposed to excel, than a "boring," "antiquated" relic like PostgreSQL, whose roots go all the way back to 1985.

The real head-scratcher here is who is still using MongoDB, and why. It got to the point years ago where even "I told you so" types (like me) no longer found it necessary to pile on, given the wave of buyer's-remorse postmortems from devs who bought into MongoDB's hype.