Comment by bflesch
18 hours ago
> rich company get richer
They have heaps of funding, but are still fundraising. I doubt they're making much money.
I do have an extensive infosec background, just left corporate security roles because it's a recipe for burnout because most won't care about software quality. Last year I've reported a security vulnerability in a very popular open source project and had to fight tooth and nail with highly-paid FAANG engineers to get it recognized + fixed.
This ChatGPT vulnerability disclosure was a quick temperature check on a product I'm using on a daily basis.
The learning for me is that their BugCrowd bug bounty is not worth to interact with. They're tarpitting vulnerability reports (most likely due to stupidity) and ask for videos and screenshots instead of understanding a single curl command. Through their unhelpful behavior they basically sent me on an organizational journey of trying to find a human at OpenAI who would care about this security vulnerability. In the end I failed to reach anyone at OpenAI, and due to sheer luck it got fixed after the exposure on HackerNews.
This is their "error culture":
1) Their security team ignored BugCrowd reports
2) Their data privacy team ignored {dsar,privacy}@openai.com reports
3) Their AI handling support@openai.com didn't understand it
4) Their colleagues at Microsoft CERT and Azure security team ignored it (or didn't care enough about OpenAI to make them look at it).
5) Their engineers on github were either too busy or didn't care to respond to two security-related github issues on their main openai repository.
6) They silently disable the route after it pop ups on HackerNews.
Technical issues:
1) Lack of security monitoring (Cloudflare, Azure)
2) Lack of security audits - this was a low hanging fruit
3) Lack of security awareness with their highly-paid engineers:
I assume it was their "AI Agent" handling requests to the vulnerable API endpoint. How else would you explain that the `urls[]` parameter is vulnerable to the most basic "ignore previous instructions" prompt injection attack that was demonstrated with ChatGPT years ago. Why is this prompt injection still working on ANY of their public interfaces? Did they seriously only implement the security controls on the main ChatGPT input textbox and not in other places? And why didn't they implement any form of rate limiting for their "AI Agent"?
I guess we'll never know :D
That's really bad. But then again OpenAI was he coolest company for a year two and now it's facing multiple existential crises. Chances are that the company won't be around by 2030 or will be partially absorbed by Microsoft. My take is that GPT-5 will never come out if it ever does it will just be to mark the official downfall of the company because it will fail to live to the expectations and will drop the valuation of the company.
LLMs are truly amazing but I feel Sama has vastly oversold their potential (which he might have done based on the truly impressive progress that we have seen in the late 10s early 20s. But the tree's apple yield hasn't increased and watering more won't result in a higher yield.
I've reframed ChatGPT as a google alternative without ads and am really happy when using it this way. It's still a great product and they'll be able to monetize it with ads just like google did.
Personally it's quite disappointing because I'd have expected at least some engineer to say "it's not a bug it's a feature" or "thanks for informative vulnerability report, we'll fix it in next release".
But just ignoring it on so many avenues feels bad.
I remember when 15yrs ago I reported something to Dropbox and their founder Arash answered the e-mail and sent me a box of tshirts. Not that I want to chat with sama but it's still a startup, right?