Comment by atleastoptimal
5 hours ago
Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
It seems there is a constant impulse on this forum to view any decision made by any big AI company with, at best, extreme cynicism and, at worst, virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.
People remember things, and consistently behaving like an asshole gets you treated like an asshole.
OpenAI had a lot of goodwill, and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.
What are the worst things OpenAI has done?
The single worst thing they've done was when Sam tried to get the US government to regulate AI so that only a handful of companies could pursue research. They wanted to protect their moat.
What's even scarier is that if they actually had the direct line of sight to AGI that they claimed, many businesses and lines of work would have been immediately replaced by OpenAI. They knew this and wanted it anyway.
Thank god they failed. Our legislators had enough of a moment of clarity to take the wait-and-see approach.
Dude, they completely betrayed everything in their "mission". The irony of the name OpenAI for a closed, scammy, for-profit company cannot be lost on you.
> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?
Isn't that a good thing? The comments here are neither sponsored nor endorsed by YC.
I'd expect to see more balance, though, on the assumption that people are drawn to posting on a YC forum over other forums because they support or have an interest in YC.
I think the majority of people don't care about YC. It just happens to be the most popular tech forum.
Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC, and it's pretty obvious that what he learned from that was to cheat harder, not change his behavior.
My takeaway is actually the opposite: major props to YC for allowing this kind of unfettered free speech. I can't think of any other organization or country on the planet where such a free setup exists.
I would call it skepticism, not cynicism. And there is a long list of reasons that big tech and big AI companies are met with skepticism when they trot out nice-sounding ideas that require everyone to simply trust in their sincerity, despite the evidence of their past behavior.
Why do you assume that a forum run by X needs to or should support X? And why is it unwise? By what metric do you measure wisdom?
I don't want to be glib, but perhaps it is because our "context window lengths" extend back a bit further than yours?
Big tech (not just the AI companies) has been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.
Regardless of where you stand on copyright law, it is an indisputable fact that, in order to get where they are today, these companies deliberately HOOVERED up terabytes of copyrighted material without the consent, or even the knowledge, of the original authors.
I don’t think anyone’s disputing that these companies are evil, or that when they’re changing the world it’s generally for the worse.
The question is: why are people who have a problem with that hanging out at “evil technologists making the world a worse place for money” HQ?
because of the repeated rugpulling?
These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.
True, though it seems most people on HN think AGI is impossible, and thus would consider OpenAI's quest a lost cause.
I don’t think one can validly draw any such conclusion.
When you call yourself "Open"AI and then turn around and backstab the entire open community, it's pretty hard to recover from that.
They undermined their not-for-profit mission by changing their governance structure. This changed their very DNA.
They released a near-SOTA open source model not too long ago
open weights != open source
This. I’ve been on HN for a while, and I am barely hanging on to this community. It is near-constant negativity and questioning of every potential motive.
Skepticism is healthy. Cynicism is exhausting.
Thank you for posting this.
Amid the current echo chamber and unprecedented hype, I'll take cynicism over hollow positivity and sycophancy.
People here are directly in the line of fire for their jobs. It’s not surprising.
True, but there are many reasons besides. Meta and Anthropic attract less criticism for a reason.
I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending.
Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are, first and foremost.
I’d suggest some self-reflection: ask yourself why you feel the need to carry water for them.
I support them because I like their products and find the work they've done interesting and, whether good or bad, extremely impactful and worthy of at least neutral consideration.
I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.