Comment by atleastoptimal

5 months ago

Almost every parent comment on this is negative. Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

It seems that there is a constant impulse on this forum to view any decision made by any big AI company with, at best, extreme cynicism and, at worst, virulent hatred. It seems unwise for a forum focused on technology and building the future to be so opposed to the companies doing the most to advance the most rapidly evolving technological domain of the moment.

> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

Isn't that a good thing? The comments here are neither sponsored nor endorsed by YC.

  • I'd expect to see a balance though, at least on the notion that people would be attracted to posting on a YC forum over other forums due to them supporting or having an interest in YC.

    • I think the majority of people don't care about YC. It just happens to be the most popular tech forum.

    • > posting on a YC forum over other forums due to them supporting or having an interest in YC.

      I've been posting here for over a decade, and I have absolutely no interest in YC in any way, other than a general strong negative sentiment towards the entire VC industry, YC included.

      Lots of people come here for the forum, and leave the relationship with YC there.

    • Why do you assume there would be a balance? Maybe YC's reputation has just been going downhill for years. Also, OpenAI isn't part of YC. Sam Altman was fired from YC and it's pretty obvious what he learned from that was to cheat harder, not change his behavior.

    • It's Saturday morning for California, where YC is centered. Everyone here should be out doing anything else (including me). It's not a random sampling of HN commenters, but a certain subset. I think we've just found out which way the subset that comments on Saturday mornings leans.

  • Well, in a way they are endorsed. They actively censor things they don’t like. Since there’s no moderation log, nobody prevents them from removing things just because they don’t like them.

When dealing with organizations that hold a disproportionate amount of power over your life, it's essential to view them in a somewhat cynical light.

This is true for governments, corporations, unions, and even non-profits. Large organizations, even well-intentioned ones, are "slow AI"[1]. They don't care about you as an individual, and if you don't treat everything they do and say with a healthy amount of skepticism and mistrust, they will trample all over you.

It's not that being openly hostile towards OpenAI on a message board will change their behavior. Only Slow AI can defeat other Slow AI. But it's our collective duty to at least voice our disapproval when a company behaves unethically or builds problematic technology.

I personally enjoy using LLMs. I'm a pretty heavy user of both ChatGPT and Claude, especially for augmenting web search and writing code. But I also believe building these tools was an act of enclosure of the commons at an unprecedented scale, for which LLM vendors must be punished. I believe LLMs are a risk to people who are not properly trained in how to make the best use of them.

It's possible to hold both these ideas in your head at the same time: LLMs are useful, but the organizations building them must be reined in before they cause irreparable damage to society.

[1]: https://www.antipope.org/charlie/blog-static/2018/01/dude-yo...

Why do you assume that a forum run by X needs to or should support X? And why is it unwise? By what metric do you measure wisdom?

My takeaway is actually the opposite: major props to YC for allowing such unfettered free speech. I can't think of any other organization or country on the planet where such a free setup exists.

  • Unfettered? Have you ever seen how many posts disappear from being flagged for the most dubious reasons imaginable? Have you been on other sites on the internet? Hell, Reddit is more unfettered and that’s terrible.

    • Hmm, interesting. Based on the kinds of posts I see, I'd presumed this place was free, but the opposite actually makes more sense. What kinds of posts have you seen disappear?

I don't want to be glib - but perhaps it is because our "context window lengths" extend back a bit further than yours?

Big tech (not just AI companies) has been viewed with some degree of suspicion ever since Google's mantra of "Don't be evil" became a meme over a decade ago.

Regardless of where you stand on copyright law, it is an indisputable fact that, in order to get where they are today, these companies deliberately HOOVERED up terabytes of copyrighted material without the consent or even the knowledge of the original authors.

These guys are pursuing what they believe to be the biggest prize ever in the history of capitalism. Given that, viewing their decisions as a cynic, by default, seems like a rational place to start.

I’ll bite, but not in the way you’re expecting. I’ll turn the question back on you and ask why you think they need defending?

Their messaging is just more drivel in a long line of corporate drivel, puffing themselves up to their investors, because that’s who their customers are first and foremost.

I’d do some self reflection and ask yourself why you need to carry water for them.

  • I support them because I like their products and find the work they've done interesting, and whether good or bad, extremely impactful and worth at least a neutral consideration.

    I don't do a calculation in my head over whether any firm or individual I support "needs" my support before providing or rescinding it.

    • Perhaps the people you see as cynical have more research and/or experience behind their views on OpenAI than you. Many of us have been more naive in the past, including specifically towards Altman, Microsoft, and OpenAI.

I would call it skepticism, not cynicism. And there is a long list of reasons that big tech and big AI companies are met with skepticism when they trot out nice sounding ideas that require everyone to just trust in their sincerity despite prior evidence.

> Why is there such an anti-OpenAI bias on a forum run by YCombinator, basically the pseudo-parent of OpenAI?

Because our views are our own and not reflective of the feelings of the company that hosts the forum?

This. I’ve been on HN for a while, and I am barely hanging on to this community. It is near-constant negativity and the questioning of every potential motive.

Skepticism is healthy. Cynicism is exhausting.

Thank you for posting this.

  • Amid the current echo chamber and unprecedented hype, I'll take cynicism over hollow positivity and sycophancy.

Because at some point, HN builders mostly left the site and it’s just become /. 3.0.

People remember things and consistently behaving like an asshole gets you treated like an asshole.

OpenAI had a lot of goodwill and the leadership set fire to it in exchange for money. That's how we got to this state of affairs.

  • What are the worst things OpenAI has done?

    • The single worst thing they've done was when Sam tried to get the US government to regulate AI so that only a handful of companies could pursue research. They wanted to protect their moat.

      What's even scarier is that if they actually had the direct line of sight to AGI that they had claimed, it would have resulted in many businesses and lines of work immediately being replaced by OpenAI. They knew this and they wanted it anyway.

      Thank god they failed. Our legislators had enough of a moment of clarity to take the wait and see approach.

    • Dude, they completely betrayed everything in their "mission". The irony of the name OpenAI for a closed, scammy, for-profit company cannot be lost on you.
