
Comment by Workaccount2

20 days ago

So Sam let the cat out of the bag (ChatGPT) behind the backs of "safety review" and the board. Probably why Google was caught flat-footed and how ChatGPT became a household name.

Dubious moral decision but an excellent business one. Perhaps hindsight helps here, now that we know ChatGPT didn't cause immediate societal collapse.

ChatGPT is already out when the story picks up; the concerns it discusses are about GPT-4.

And the story isn't about that single incident of Altman dodging review and working behind the backs of the board—it's about a pattern of deception and toxic management practices that culminated in Altman lying to Murati about what the legal department had said, which lie was given to the board as part of a folio of evidence that he needed to be ousted.

You're trying to distill a pattern of toxicity and distrust into a single decision, which downplays it more than is fair.

  • Yeah, to me the overt lying is more damning than any particular decision. If he owned the decision to bypass ethics review and release a model, fine, we can argue whether that was prudent or not, but at least it's honest leadership. Lying that counsel said it was OK when they hadn't is a whole other thing! When someone starts doing that repeatedly, and it keeps getting back to you that stuff they said was just outright false, you can't work with them at all, imo.

    If this is something he's been doing for years, it becomes clearer why Y Combinator fired him, though they have been kind of cagey about it.

  • The question then remains: if you have a lying, toxic, manipulative boss, who would want to work for them? Especially their direct reports.

    • From the story it sounds like the direct reports generally did not want to work with Altman, Brockman excluded. Even Murati was one of the primary instigators of the firing, but she changed her mind for reasons that the article doesn't really explore.


Aside from becoming the opposite of the values their name suggests, there are two main mistakes OpenAI made in my view: violate copyright when training, and rush to release the chatbot. Stealing original work is going to bite them legally (opening them to all sorts of lawsuits while killing their own ability to sue competitors piggy-backing off their model output, for example), and is a special case of them being generally shortsighted and passing on an opportunity to make a truly Apple- or Amazon-scale business by applying strategy and longer-term thinking (even if someone else got to release an LLM chatbot before them, they could—as in, had the funds and the talent to—build something higher level, properly licensed, and much more difficult to commoditise).

If this was the fault of Altman, it is understandable that certain people would want him out.

  • > violate copyright when training

    If we could incrementally update our own brains by swapping cells for chips, what percentage of our brain has to be chips before us learning from a book is a violation of copyright?

    When learning to recite a recent children's poem in kindergarten, what level of accuracy can a child attain before their ability to repeat it privately to one other person at a time is a copyright violation?

    • I don't think the concern is specifically about the fact that the training on copyrighted content happens on computer chips.

      If you are going to use human brain cells to memorize protected content and sell it as a product, that's still an issue under current copyright law.


    • Want to abolish economic copyright altogether? I could get behind that. Making a legal exception because of some imagined future metaphysical property of this particular platform sounds like being fooled.


    • This is one issue with Microsoft's Total Recall thing, right? I wonder how they're dealing with that.

    • Others replied to this and I am still not sure what your point is. Are you saying big tech should be able to get away with this because LLMs are just like us humans?

    • > If we could incrementally update our own brains by swapping cells for chips, what percentage of our brain has to be chips before us learning from a book is a violation of copyright?

      The same percentage at which you stop qualifying as human and become an unthinking tool, fully controlled by its operator to do whatever they want, without free will of its own and without any ethical concerns about abuse and slavery, as is the case with all LLMs.

      (Of course, it is a moot point, because creating a human-level consciousness with chips is a thought experiment not grounded in reality.)

      > When learning to recite a recent children's poem in kindergarten, what level of accuracy can a child attain before their ability to repeat it privately to one other person at a time is a copyright violation?

      Any level, thanks to the concept of human rights and freedoms, famously not applied to machines and other unthinking tools.


  • Do the copyright claims have any legs at all? IANAL, but I thought it was pretty settled that statistical compilations of copyrighted works (indexes, concordances, summaries, full-text search databases) were considered "facts" and not copies.

    (This would be separate from the contributory infringement claim if the model outputs a copyrighted work verbatim)

    • 1. Google was, and in some developed countries still is, under fire for something as minor as summarising search results too extensively, so yes, I think the claims have legs.

      > This would be separate from the contributory infringement claim if the model outputs a copyrighted work verbatim

      2. Commercial for-profit models were shown to do that, and (other legal arguments aside, such as the model and/or its output being a derivative work) in some cases that was precisely the smoking gun for the lawsuit, if I recall correctly.

      I have not seen any conclusive outcome; I suppose it will depend on jurisdiction.
