New funding to build towards AGI

4 days ago (openai.com)

They might need to throw a little more funding into TTS training.

Play it, and the first thing you hear is an enthusiastic "Today we’re announcing new funding - 40 bi-dollars at a 300 bi-dollar post-money valuation!" Hah.

  • Hard not to laugh, but you'd be surprised how often even other TTS companies are messing this kind of thing up. I think it has a lot to do with the data source they use for training, which evidently doesn't include a lot of currency amounts...

    If you're curious what's possible with <.01% of the funding, check out https://rime.ai/. We train on data recorded in our studio and specifically include a lot of currency in our scripts for this very reason.

    [disclaimer: one of the founders of Rime]

  • Oh my goodness lol!

    Thanks for pointing that out. I never would've pressed play if I hadn't read your comment. That gave me a genuine laugh.

  • It’s better than what I thought I heard: “Today we’re announcing new funding: forty five dollars at a three hundred five dollar post-money valuation!”

> deliver increasingly powerful tools for the 500 million people who use ChatGPT every week.

Wasn't aware they'd hit a WAU count this high. Impressive, but then again at this kind of valuation you sure want to be heading towards 9-figure MAU numbers.

  • Do investors still not care about revenue and profits at a $300 billion valuation? Seems like the bigger problem for them is that they are losing money on the vast majority of those WAUs with no obvious route to profitability, because most of them will simply stop using it if forced to pay for it.

    • It's a gigantic bet on user stickiness in AI, and on the monetizable value of AI users who don't pay for subscriptions. In other words, low-end consumers vs. high-end consumers.

      Nvidia and AMD were low-end vs high-end. In the end Nvidia won a total victory by ditching low margin distractions like building GPUs for consoles, and focused solely on higher end PC GPUs that could dually act as accessible research chips.


Why is their valuation so much more than Anthropic's? It's even bigger than Salesforce, SAP, and Cisco.

  • They have a lot more users than Anthropic, I believe. Less technical / mainstream users often don’t know about Claude.

  • > It's even bigger than Salesforce, SAP, and Cisco

    That's pretty incredible. I recently visited SF for the first time and saw the Salesforce tower. To think that OpenAI now has a higher valuation than that is crazy.

    • Salesforce pays to have their logo on top of the tower, much like stadium naming rights. They don't own the building. Very far from it. They lease like 12 floors total (and not even the higher ones).


  • To be honest, OpenAI seems like a bigger threat than Salesforce over the next 10 years.

    • Threat to what? How do you figure they're a bigger threat to whatever you identify than any other chatbot company?

  • "ChatGPT" is now a brand, known by billions of people worldwide and it is at the equivalent level of "Google".

    The name alone is worth at least $100B+.

$40B goes in... and what comes out we'll all see. _Something_ definitely will.

I don't think we'll ever see another startup raise so much funding every time they sneeze or wag a finger. It's getting ludicrous, really. $40B at a $300B valuation?

Hoping SoftBank gets a return but I fear it’s too large a round too late in OpenAI’s lifecycle.

Maybe I lack vision. What would it take for OpenAI to join the ranks of trillion dollar companies?

  • SoftBank is usually a precursor to the bubble bursting. I’m more worried about the AI bubble bursting now than I was an hour ago.

    • SoftBank Group is not always known for the most sound funding decisions; they also invested in the "Stargate" program, which hasn't seen a whole lot of action.

  • Sure, why not? After all, Bitcoin is worth $1.6 trillion, and I have a much harder time rationalizing that.

    • I have no problem with Bitcoin's valuation. It isn't a company intending to create revenue. It is a relatively arbitrary store of value, not an expectation of utility.

      OpenAI would need $15 billion in profit per year for a P/E of 20 at its $300 billion valuation with zero risk, or a 10% chance of $150 billion in profits per year.

      Alphabet earns about $100B per year. Do you think OpenAI has a 10% chance of being bigger than Google? It doesn't have a moat, but I guess Google doesn't either; it just dominates its market.
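      The P/E arithmetic in the comment above can be sketched out. The valuation and multiple come from the comment; this is just the back-of-envelope math made explicit:

```python
# Back-of-envelope: what annual earnings would justify a $300B valuation?
# Inputs are taken from the comment above; the P/E of 20 is a conventional
# multiple, not a claim about what investors actually use.
valuation = 300e9   # post-money valuation, USD
target_pe = 20      # price-to-earnings multiple

required_profit = valuation / target_pe
print(f"Required annual profit: ${required_profit / 1e9:.0f}B")  # $15B

# Risk-adjusted version: if success has only a 10% probability,
# the success-case profit must be 10x larger to break even in expectation.
p_success = 0.10
success_case_profit = required_profit / p_success
print(f"Success-case profit at 10% odds: ${success_case_profit / 1e9:.0f}B")  # $150B
```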

  • > What would it take for OpenAI to join the ranks of trillion dollar companies?

    Some kind of market advantage at the dominant price point?

    • The price point in the long term is the cost of electricity to run your model locally and the amortized cost of buying some hardware that can run it. The fact that it isn't streamlined today doesn't mean it won't be in X years.

      Investors are blindly banking on everyone perpetually going to the theater to see the talkies and missing the vision that we'll all have TVs shortly thereafter...

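      The amortized-cost argument above can be made concrete. All of the numbers below are made-up illustrative assumptions (hardware price, lifetime, power draw, usage), not measurements:

```python
# Hypothetical sketch of the "run the model locally" cost floor:
# amortized hardware cost plus electricity. Every number here is an
# assumption for illustration only.
hardware_cost = 2000.0        # USD, a GPU-equipped machine
hardware_lifetime_years = 4   # straight-line amortization
power_draw_kw = 0.4           # average draw while generating
electricity_price = 0.15      # USD per kWh
hours_per_day = 2             # daily usage

amortized_per_year = hardware_cost / hardware_lifetime_years
electricity_per_year = power_draw_kw * electricity_price * hours_per_day * 365

total_per_month = (amortized_per_year + electricity_per_year) / 12
print(f"~${total_per_month:.2f}/month")  # roughly $45/month under these assumptions
```

      Under these made-up numbers the local-inference floor lands in the same ballpark as a paid subscription, which is the tension the comment is pointing at.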

  • SoftBank sounds familiar… weren’t they a major investor in WeWork? With such poor investing acumen, why haven’t they gone bankrupt yet? Perhaps the $40 billion raised by OpenAI can only be spent on services from SoftBank’s other investments? Do investors ever restrict how their investments are spent?

And still nobody truly knows what AGI will do, or if it is even possible to achieve via massive datacenters.

Ah, the final boss battle is here. OpenAI now either completely collapses and becomes Dell, or cracks AGI.

Just wondering how many others in this thread perceive this quest for "AGI" as delusional at the current time, when we don't yet understand the basis of natural general intelligence in almost any way at all? It's good to shoot for the stars, but it feels as if NASA were asking for funding for a manned mission to Andromeda before even landing a man on Mars. The belief that LLMs are the ticket feels absolutely quixotic to me.

  • The idea that LLMs have any road to AGI is much like looking at Charles Babbage's analytical engine design and decreeing that the road to creating a mind is, to borrow a quote from Henry Babbage, merely "a question of cards and time".

  • I'm not sure people are saying LLMs are the ticket. Human intelligence has many aspects apart from language. Large language models seem to do quite well with language but are not really the thing for spatial awareness, doing maths, playing go, operating robot bodies and various other tasks. Computers can do ok with that stuff too, but not generally with language models.

    If you define AGI as human-level intelligence in all aspects, there's a way to go yet, but things seem, to me, to be getting quite close. I'd say the Turing test is basically passed; stuff like Woz's coffee test, where a robot can go into a house, find the coffee stuff, and make coffee, is not there, but maybe in a couple of years? With that stuff I'd say DeepMind is much closer than OpenAI.

  • Various parts of their corporate structure and previous business/financial relationships are tied to the notion of “AGI” being achieved — which is poorly defined and likely to become a semantic/legal debate more than a scientific one.

    So them pushing that language in their PR/marketing activity is not a surprise, and not really even meant to be scientifically meaningful.

  • AGI doesn't have a strict definition though so I think it would depend a lot what you see "AGI" as being.

    We're well on our way to building AIs which are competent at many tasks. Assuming an AGI doesn't need to be able to do every task a human can do, and doesn't need to do all of those tasks as well as an expert human, then something which could be called AGI doesn't seem that far off at all.

    I remember a time quite recently when the idea of an AI beating a good-faith interpretation of the Turing test seemed very far away. I feel like we're much closer to AGI today than we were to beating the Turing test in the late 00s.

  • I've been saying that for some time, but you can cash in on the hype.

    All you need to do is convince the credulous and greedy.

  • Yep, if it happens in 200 years and/or is LLM-like, consider me a dullard, future selves. I think the humans-feeding-data-to-the-computer approach (web crawling, RLHF, etc.) as a substitute for sense organs as input is nowhere near enough data for AGI. I'm also convinced that these sums of money put into neuroscience would bring about AGI quicker than any alternative.

    It's all about data ingestion, and the data assimilable by computers is tiny.

  • I am wondering why all these people think AGI will care about humans enough to send terminators after them.

    It would be fun to watch billionaires pouring all their wealth into something that would make up its own mind to go away and not give a damn about anything related to living things.

    Not calling out any books not to spoil stuff for people - just mentioning it is not my original idea but one that I find interesting.

  • One could define AGI in a way that is already achieved: for example, outperforming the average human on a large number of intellectual tasks.

    • But general intelligence has so much more to it than this. It's so overly simplistic to say "outperform on tasks."

      General intelligence means perceiving opportunities. It means devising solutions for problems nobody else noticed. It means understanding what's possible and what's valuable just from existing without being told. It means asking questions without prompting, simply for the sake of wondering and learning. It means so many things beyond "if I feed this data input to this function and hit run, can it come up with the correct output matching my expectations"?

      Sure, an LLM might pass a series of problem-solving questions, but could it look up and see the motion of stars and realize they implied something about the nature of the world and start to study them, unasked, and deduce the existence of solar systems and galaxies and gravity and all the other things?

      I just don't buy it. It's so reductive. They're hoping to skip over all the real understanding and achieve something great without doing the real legwork to understand the true mechanisms of intelligence by just pouring enough processing time into training. It won't work. They're missing integral mechanisms by overfocusing on the one thing they have a handle on. They don't know what they don't know, but worse, they're not trying to find out.


    • The market itself is also arguably a massive form of AGI that well predates the concept. I choose this interpretation when watching Terminator (any of them, really).

      TBF this doesn't imply anything about OpenAI's quest to make a chatbot that gets along with people at parties.

Real version: taxpayers fund new model towards AGI at the expense of their entire lives

OpenAI had billions. Now it is asking for trillions. It's taking over.