Comment by rdtsc

3 days ago

> But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

A step in a positive direction: at least they don't have to pretend any longer.

It's like Google and "don't be evil". People didn't get upset with Google because they were more evil than others; heck, there's Oracle, defense contractors, and the prison-industrial system. People were upset with them because they were hypocrites. They pretended to be something they were not.

I worked at Google for 10 years in AI and invented suggested-text features based on WordNet/bag-of-words models.

As much as what you are saying sounds right, I was there when Sundar made the call to bury proto-LLM tech because he felt the world would be damaged by it.

And I don’t even like the guy.

  • > Sundar made the call to bury proto-LLM tech

    Then where did nano banana and friends come from? Did Google reverse course? Or were you referring to something else being buried?

    • This was long before. Google had conversational LLMs before ChatGPT (though they weren't as good, in my recollection), and they declined to productize them. There was a sense at the time that you couldn't productize anything with truly open-ended content generation, because you couldn't guarantee it wouldn't say something problematic.

      See Facebook’s Galactica project for an example of what Google was afraid would happen: https://www.technologyreview.com/2022/11/18/1063487/meta-lar...


    • Neema was running a fully fledged, Turing-passing chatbot in 2019. It was suppressed, then written about in the open, and OpenAI copied it. Then Google was forced to compete.

      This is all well-known history.

No, it's actually possible for organizations to work safely for long periods of time under complex and conflicting incentives.

We should stop putting the bar on the floor for some of the (allegedly) most brilliant and capable minds in the world.

  • In a capitalistic society (such as ours), I find what you're describing close to impossible, at least when it comes to large enough organizations. The profit motive ends up conquering all, and that is by design.

    • Counterpoint: B corporations.

      It's clearly possible for companies to self-impose safeguards: ESG/DEI, B Corp status, choosing to open-source, and so on. If investors squeal, find better investors or tell them to put up with it. You can make plenty of profit without making all the profit that can be made.

    • There are countless highly effective charities that achieve this.

      (Yes, I know there is an even larger number of "charities" that do not achieve this ideal.)

I don't really agree. People are plenty upset with Palantir and Broadcom for being evil, for example, and I don't see their mottos promising they won't be.