Comment by apetresc

1 day ago

I've long maintained that the real indicator that AGI is imminent is that public availability stops being a thing. If you truly believed you had a superhuman, godlike mind in your thrall, renting it out for $20/month would be the last thing you would choose to do with it.

Simpler explanation: they don't have enough GPUs to serve this much larger model.

  • Yep, I'm skeptical about their inference efficiency, given how much they're scrambling to reduce compute when they're already the most expensive by far (and in my experience not the best quality either).

    However we cannot observe these things directly and it could be simply that OpenAI are willing to burn cash harder for now.

  • This is the actual reason. So, any investors reading our system card... write us another check and watch the $$$$$$$$ roll in. It's so dangerous we can't even release it!

That logic makes sense, but them hyping up the model is a sign that this is just another marketing stunt. Otherwise, we wouldn't even be hearing about it; instead we're getting a media blitz designed to stoke demand for their dangerous and exclusive world-changing super model.

  • This is the same scheme OpenAI has used since GPT-2: "Oh no, it's so dangerous we have to limit public access." Great for raising money from investors, but nothing more than a marketing blitz. Additionally, the competitors are probably about to release their models, while Anthropic is still lagging on the infrastructure needed to serve their old models. So they have to announce their model before the others do to stay at least somewhat relevant in the news cycle.

Anthropic needs money, like the 112B OpenAI got. They could be hyping, and this is good hype. Who knows how benchmaxxed they are.

If they provide access to 3rd-party benchmarking (not just one), then maybe I'll believe it. Until then...

  • You don't need to believe it. The real story will be whether companies allowed to use it stick with it.

You have to recoup your training costs though? But I'm sure you would have a better option than renting it to the general public if you indeed had a perfected AI.

  • If you truly have an artificial superhuman mind, you don't need to rent it out to profit from it. You can cut to the chase and have it run businesses itself, instead of renting it to human entrepreneur middlemen.

    • Running businesses and dealing with customers can be a major pain. There’s a lot of soft work in any business on top of the technical work.

      Why bother with all that when you can simply charge an extortionate rate and customers will pay it anyway because it’s still profitable?


    • It could be both? But renting to a few for a really large amount of money would be very low effort for massive revenue, compared to starting new businesses.


    • I'm curious if any models are being trained explicitly on business management.

      I'm also wondering how performance would be tested, and how much results would depend on specific surrounding contexts (law, regulations, and so on) and what happens legally if a model breaks applicable laws.

      I mean actual going-concern businesses with customers, marketing, deliverables of some kind, and support. Not toy activities like share trading.

It only makes sense to rent out tokens if you aren't able to get more value from them yourself.

I would go a step further and posit that when things appear close, Nvidia will stop selling chips (while appearing to continue by selling a trickle), and Google will similarly stop renting out TPUs. Both signals may be muddled by private chip production numbers.

I think they'll just increase the price to $1k/month. I don't think they will gate it as long as they can make sure it doesn't design a nuke for you, etc.

You would if there were one other company with a just-as-capable godlike AI. You'd undercut them by 500, which would make them undercut you. Do that a couple of times and boom: 20 dollars.

  • That's still assuming that they're competing as consumer tools, rather than competing to discover the next miracle drug or trading algorithm or whatever. The idea is that there'd be more profitable uses for a super-intelligent computer, even if there were more than one.

    • But would miracle drugs and trading algorithms be as profitable as AI research, chip design, or energy research? If AI is by far the biggest source of growth in the economy, then the majority of the AI's internal usage should (as economics incentivizes) in some way work toward making the AI itself better.

That's the thing: when that level arrives, we will never know it's here. The only evidence we'll have is that the company that has it will always offer a "public" model just slightly ahead of all competitors, to keep market share while takeoff happens internally, until they make big-bang moves to lock in monopoly status, too-big-to-fail status, and government protection to ensure utter victory.