Comment by JohnMakin

2 days ago

Nice find, I think one of my sites actually got recently hit by something like this. And yea, this kind of thing should be trivially preventable if they cared at all.

IDK, I feel that if you're doing 5000 HTTP calls to another website it's kind of good manners to fix that. But OpenAI has never cared about the public commons.

  • Nobody in this space gives a fuck about anyone outside of the people paying for their top-tier services, and even then, they only care about them when their bill is due. They don't care about their regular users, don't care about the environment, don't care about the people that actually made the "data" they're re-selling... nobody.

  • Yeah, even beyond common decency, there are pretty strong incentives to fix it, as this kind of traffic is a fantastic way of getting your bot's fingerprint onto Cloudflare's shitlist.

    • Kinda disappointed by Cloudflare - it feels like their detection logic is quite basic. Why would anomaly detection not catch these oversized payloads?

      There was a zip-bomb-like attack a year ago where you could send one gigabyte of the letter "A", compressed with brotli into a very small file size, through Cloudflare to backend servers - basically something like the old HTTP Transfer-Encoding trick (which has since been discontinued).

      Attacker --1kb--> Cloudflare --1GB--> backend server

      Obviously the servers that received the decompressed HTTP request from the Cloudflare web proxies were getting killed, but Cloudflare didn't even accept it as a valid security problem.
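
      For illustration, a minimal sketch of that compression asymmetry, assuming the Brotli pip package (the sizes here are illustrative, not the original proof of concept):

        import brotli  # pip install Brotli

        # A long run of identical bytes compresses to almost nothing, so a
        # proxy that decompresses Content-Encoding before forwarding turns a
        # tiny upload into a huge request on the backend's side of the wire.
        payload = b"A" * (100 * 1024 * 1024)   # 100 MiB of "A"
        bomb = brotli.compress(payload)        # default quality; may take a moment
        print(f"{len(payload):,} bytes -> {len(bomb):,} bytes compressed")

      The ratio is what makes the attack cheap: the attacker pays for the compressed bytes, the backend pays for the decompressed ones.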

      AFAIK there was no magic AI security monitoring anomaly detection thing which blocked anything. Sometimes I'd love to see the old web application firewall warnings for single and double quotes, just to see if the thing is still there. But maybe it's a misconfiguration on the side of the Cloudflare user, because I can remember they at least had a WAF product in the past.

      1 reply →

> And yea, this kind of thing should be trivially preventable if they cared at all.

Most of the time, when someone says something is "trivial" without knowing anything about the internals, it's anything but trivial.

As someone working close to the B2C side of a business, I can't count the number of times I've heard that something should be trivial when it's something we've been thinking about for years.

  • The technical flaws are quite trivial to spot, if you have the relevant experience:

    - urls[] parameter has no size limit

    - urls[] parameter is not deduplicated (but their cache is deduplicating, so this security control was there at some point but is ineffective now); see the sketch after this list

    - their requests to the same website / DNS name / victim IP address rotate through all available Azure IPs, which puts them at risk of being blocked wholesale by other hosters. They should come from the same IP address. I noticed them switching to other Azure IP ranges several times, most likely because they got blocked or rate-limited by Hetzner and the other counterparties from which I was playing around with these vulnerabilities.
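
    As a minimal sketch of the first two fixes (hypothetical names and limits; this is not OpenAI's code), the handler could cap and deduplicate the parameter before fanning out any fetches:

      MAX_URLS = 20  # illustrative limit

      def sanitize_urls(urls: list[str]) -> list[str]:
          deduped = list(dict.fromkeys(urls))  # dedupe while preserving order
          if len(deduped) > MAX_URLS:
              raise ValueError(f"too many URLs: {len(deduped)} > {MAX_URLS}")
          return deduped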

    But if their team is too limited to recognize security risks, there is nothing one can do. Maybe they were occupied last week with the office gossip around the sexual assault lawsuit against Sam Altman. Maybe they were still on holiday, or there was another, higher-risk security vulnerability.

    Having interacted with several bug bounty programs in the past, it feels like OpenAI is not very mature in that regard. Also, why did they choose Bugcrowd when HackerOne is much better in my experience?

    • > rotate through all available Azure IPs, ... They should come from the same IP address.

      I would guess that this is intentional, intended to prevent IP-level blocks from being effective. That way, blocking them means blocking all of Azure - too much collateral damage to be worth it.

      1 reply →

  • If you’re unable to throttle your own outgoing requests you shouldn’t be making any
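
    For instance, a minimal client-side token-bucket throttle (an illustrative sketch, not anyone's actual implementation):

      import time

      class Throttle:
          def __init__(self, rate: float, burst: int):
              self.rate, self.burst = rate, burst   # tokens/second, bucket size
              self.tokens, self.last = float(burst), time.monotonic()

          def acquire(self) -> None:
              while True:
                  now = time.monotonic()
                  self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
                  self.last = now
                  if self.tokens >= 1:
                      self.tokens -= 1
                      return
                  time.sleep((1 - self.tokens) / self.rate)  # wait for the next token

      throttle = Throttle(rate=2, burst=5)  # ~2 requests/second per target host
      # call throttle.acquire() before every outgoing request to the same host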

    • I assume it'll be hard for them to notice because it's all coming from Azure IP ranges. OpenAI has a very big credit card behind this Azure account, so this vulnerability might only be limited by Azure's capacity.

      I noticed they switched their crawler to new IP ranges several times, but unfortunately the Microsoft CERT / Azure security team didn't respond to my reports.

      If this vulnerability is exploited, it hits your server with MANY requests per second, right from the heart of the Azure cloud.
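
      If you do want to rate-limit at the network level, a rough defender-side sketch is to match client IPs against cloud CIDR ranges (the prefixes below are placeholders; real Azure prefixes come from Microsoft's published service-tag lists and change over time):

        import ipaddress

        AZURE_RANGES = [ipaddress.ip_network(c) for c in ("20.0.0.0/8", "40.64.0.0/10")]

        def is_azure(ip: str) -> bool:
            addr = ipaddress.ip_address(ip)
            return any(addr in net for net in AZURE_RANGES)

        # e.g. rate-limit or challenge requests where is_azure(client_ip) is True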

      4 replies →

  • now try to reply to the actual content instead of some generalizing grandstanding bullshit