← Back to context

Comment by netdevphoenix

1 day ago

I always wonder why people not working or planning to work in infosec do this. I get giving up your free time to build open source functionality used by rich for-profit companies that will just make them rich because that's the nature of open source. But literally giving your free time to help a rich company get richer that I do not get. My only explanation is that they enjoy the process. It's like people spending their free time giving information and resources when they would not do that if that person was in front of them.

You are on hackernews. It’s curiosity not only about the flaw in their system but also how they as a system react to the flaw. Tells you a lot about companies you can later avoid when recruiters knock or you send out resumes.

  • I know I am on HN. Curiosity is one thing, investigating issues for free for a rich company is another. The former makes sense to me. The latter not as much, when we live in a world with all sorts of problems that are available to be solved.

    I think judging the future state of a company based on its present state is not really fair or reliable especially as the period between the two states gets wider. Culture change (see Google), CxOs leave (OpenAI) and the board changes over time.

    • > I know I am on HN. Curiosity is one thing, investigating issues for free for a rich company is another.

      The vulnerability https://github.com/bf/security-advisories/blob/main/2025-01-... targets other sites than OpenAI. OpenAI's crawler is rather the instrument of the crime for the attack.

      Since this "just" leads to a potential reputation damage for OpenAI (and OpenAI's reputation is by now bad), and the victims are operators of other websites, I can see why OpenAI sees no urgency for fixing this bug.

      1 reply →

> rich company get richer

They have heaps of funding, but are still fundraising. I doubt they're making much money.

I do have an extensive infosec background, just left corporate security roles because it's a recipe for burnout because most won't care about software quality. Last year I've reported a security vulnerability in a very popular open source project and had to fight tooth and nail with highly-paid FAANG engineers to get it recognized + fixed.

This ChatGPT vulnerability disclosure was a quick temperature check on a product I'm using on a daily basis.

The learning for me is that their BugCrowd bug bounty is not worth to interact with. They're tarpitting vulnerability reports (most likely due to stupidity) and ask for videos and screenshots instead of understanding a single curl command. Through their unhelpful behavior they basically sent me on an organizational journey of trying to find a human at OpenAI who would care about this security vulnerability. In the end I failed to reach anyone at OpenAI, and due to sheer luck it got fixed after the exposure on HackerNews.

This is their "error culture":

1) Their security team ignored BugCrowd reports

2) Their data privacy team ignored {dsar,privacy}@openai.com reports

3) Their AI handling support@openai.com didn't understand it

4) Their colleagues at Microsoft CERT and Azure security team ignored it (or didn't care enough about OpenAI to make them look at it).

5) Their engineers on github were either too busy or didn't care to respond to two security-related github issues on their main openai repository.

6) They silently disable the route after it pop ups on HackerNews.

Technical issues:

1) Lack of security monitoring (Cloudflare, Azure)

2) Lack of security audits - this was a low hanging fruit

3) Lack of security awareness with their highly-paid engineers:

I assume it was their "AI Agent" handling requests to the vulnerable API endpoint. How else would you explain that the `urls[]` parameter is vulnerable to the most basic "ignore previous instructions" prompt injection attack that was demonstrated with ChatGPT years ago. Why is this prompt injection still working on ANY of their public interfaces? Did they seriously only implement the security controls on the main ChatGPT input textbox and not in other places? And why didn't they implement any form of rate limiting for their "AI Agent"?

I guess we'll never know :D

  • That's really bad. But then again OpenAI was he coolest company for a year two and now it's facing multiple existential crises. Chances are that the company won't be around by 2030 or will be partially absorbed by Microsoft. My take is that GPT-5 will never come out if it ever does it will just be to mark the official downfall of the company because it will fail to live to the expectations and will drop the valuation of the company.

    LLMs are truly amazing but I feel Sama has vastly oversold their potential (which he might have done based on the truly impressive progress that we have seen in the late 10s early 20s. But the tree's apple yield hasn't increased and watering more won't result in a higher yield.

    • I've reframed ChatGPT as a google alternative without ads and am really happy when using it this way. It's still a great product and they'll be able to monetize it with ads just like google did.

      Personally it's quite disappointing because I'd have expected at least some engineer to say "it's not a bug it's a feature" or "thanks for informative vulnerability report, we'll fix it in next release".

      But just ignoring it on so many avenues feels bad.

      I remember when 15yrs ago I reported something to Dropbox and their founder Arash answered the e-mail and sent me a box of tshirts. Not that I want to chat with sama but it's still a startup, right?