OpenAI models coming to Amazon Bedrock: Interview with OpenAI and AWS CEOs

16 hours ago (stratechery.com)

https://aws.amazon.com/bedrock/openai/

https://www.aboutamazon.com/news/aws/bedrock-openai-models

https://openai.com/index/openai-on-aws/

https://x.com/amazon/status/2049178618639839427

Remember that models on different inference platforms won't necessarily give exactly the same results, adding another axis of non-determinism to development. Quantization, custom model-serving silicon, batching, or other inference optimizations can mean a model performs differently on a host than it does with the original provider :/

This paper isn't the exact same scenario, since it covers an auditable open-weight Llama model, but it shows the symptoms of this: https://arxiv.org/pdf/2410.20247
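A toy sketch of one way this happens. Everything here (weights, grid size, token names) is invented for illustration; no real serving stack looks like this, but the mechanism is the same: greedy decoding takes the argmax logit, and quantizing weights perturbs each logit differently, which can flip a near-tie.

```python
import math

# Toy illustration (made-up numbers, not any provider's pipeline): logits are
# dot products of output-head weights with the hidden state, and greedy
# decoding emits the argmax logit.

def logit(weights, hidden):
    return sum(w * h for w, h in zip(weights, hidden))

def quantize(weights, step):
    # Round-to-nearest onto a coarse grid, mimicking low-precision serving.
    # (math.floor(x + 0.5) avoids Python round()'s banker's rounding.)
    return [math.floor(w / step + 0.5) * step for w in weights]

hidden = [1.0, 1.0]
heads = {"token_a": [0.52, 0.50], "token_b": [1.04, 0.00]}

# Full precision: token_b wins the near-tie (logit 1.04 vs 1.02).
full = max(heads, key=lambda t: logit(heads[t], hidden))

# Quantized weights (step 1.0): token_a now wins (logit 2.0 vs 1.0), so the
# "same" model at temperature 0 emits a different token for the same prompt.
quant = max(heads, key=lambda t: logit(quantize(heads[t], 1.0), hidden))

print(full, quant)  # token_b token_a
```

Real quantization errors are tiny per weight, but the same flip can happen whenever two candidate tokens are close, and it compounds token by token.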

Availability through Bedrock has been a major driver of Anthropic adoption in my org. And I am betting there is actual margin in it as well.

I wonder if this is directly linked to the split with Microsoft. Just from my anecdata, OpenAI is getting completely ignored in serious enterprise deployments because what they offer on Azure sucks and there is no other corporate-friendly way to get it. They probably saw themselves getting destroyed in enterprise and realised it was existential to be able to compete with Anthropic on AWS.

As someone who works at big tech and spends countless hours in meetings hoping to get some small feature coordinated for deployment across two teams, I can't imagine the number of meetings and 6-pagers that were involved in running these models on Bedrock's hardware.

  • At this level they just decide and spin up a SWAT team to execute it in a couple of weeks without politicking. The bureaucratic ways and reviews are just for the low levels, to keep them busy with feature scraps while they mostly do operations.

    • Yup. I think there are a few public articles on AWS Mantle you can look up, but internally this is pretty common knowledge. The entire inference engine of Bedrock is built and maintained by a handful of EC2 engineers (all principals and above). Judging by the commit history of the project, they are able to just build independently of any of the traditional bureaucracy.

      3 replies →

    • Lol, spinning up SWAT teams because someone high up decides "drop everything, this is my pet priority now" is politicking. It looks good for the leaders, meanwhile it's the engineers pulling the all-nighters and dealing with maintaining systems that are operationally compromised from day 0, because there's no proper planning/scoping involved other than "Big Man says this needs to be done in 2 weeks".

      3 replies →

  • Depends on how it's implemented, but Amazon already added gpt-oss-20b, so if the model is similar enough to the OSS variant of GPT, it might not have been as complicated as you might think.

The enterprise sales motion here is interesting. A lot of regulated industries (finance, healthcare) have existing AWS contracts with data residency commitments baked in. OpenAI on Bedrock basically lets those orgs skip the separate DPA negotiation with OpenAI. Could be a bigger unlock than it looks on paper.

Claude got a looooot more buy in with a lot of privacy-concerned orgs I work with because they could access it through their "trusted" intermediate Amazon. OpenAI has been banned and is not trusted. I'm not sure that I agree with these orgs' legal teams' assessments, but they definitely read the terms of service far closer than I did.

We will see if this changes the equation, but it feels like OpenAI is pretty far behind and playing catch up on all fronts. Though to be honest, "pretty far behind" is like 2-8 weeks in the AI world, so it may not matter a ton, it's mostly perception. And for me and my information bubble, perception of OpenAI is rock-bottom due to Sam Altman. From appearing unethical to appearing unhinged with demands from fabs and everything else, I'm not a fan.

  • You can sign ZDR agreements with any of the major LLM providers. Using AWS alone is also not sufficient. Even though AWS is running the model, you need to contact them for proper ZDR.[0]

    [0]: https://platform.claude.com/docs/en/build-with-claude/claude...

    • Helpful link. Thank you.

      I think that when people are worried about ZDR, what they really worry about is data governance. From what I’ve seen there’s a general distrust of OpenAI. AWS may keep your data around (without formal ZDR) but the concern of governance (using your data to train without your consent) seems like it would be much lower, because any breach of contract at AWS would have potential to destroy trust in what’s already a massively profitable company, so the incentives just aren’t there.

      I’m not claiming OpenAI is training on API data. Just that they don’t have as strong of an incentive not to as AWS.

      1 reply →

  • While Anthropic has the best model and a focused (no distractions, no lawsuits) leadership, they got a lot of enterprise access thanks to AWS. It is mutual, no doubt, with both sides benefitting. The feedback loop with AWS customers would have helped them get enterprise-ready faster. Just my hypothesis.

    • But Azure is just as big, if not bigger, in the enterprise. The argument that Azure didn't give them enough access within enterprises doesn't make sense to me.

      1 reply →

  • The thing they are really wildly behind on is a business model. They are losing wild amounts of money per customer, and it is hard to see how the competitive situation is going to allow them to fix that.

  • It's not just about AWS being some "trusted intermediary"... it's that the model runs inside the customer's own AWS account under a different contract. AWS explicitly states that inputs/outputs are not shared with model providers and are not used to train base models [1]

    And for OpenAI, there is a May 2025 preservation order in NYT v. OpenAI. The court is forcing OpenAI to retain ChatGPT output logs indefinitely, including chats users have deleted that would normally be purged within 30 days [2]. That makes it a non-starter for HIPAA/GDPR-bound orgs.

    [1] https://aws.amazon.com/bedrock/faqs/

    [2] https://openai.com/index/response-to-nyt-data-demands/

    • I'm confused; your own #2 link says that OpenAI is no longer bound to store output logs indefinitely going forward:

      > Update on October 22, 2025:

      > After months of litigation, we are no longer under a legal order to retain consumer ChatGPT and API content indefinitely. Our obligations under the earlier order ended on September 26, 2025.

      > We’ve returned to our standard data retention practices:

      > Deleted ChatGPT conversations and Temporary Chats will be automatically deleted from our systems within 30 days.

      > API data will also be automatically deleted after 30 days.

  • It's like Coca-Cola being banned at a school, and then Pepsi getting some contracts with the cafeteria because of it.

  • They're also not focused exclusively on building an LLM; they have video and image generation too. Anthropic has one single focus, and this is why they are usually at the very top of the SWE benchmarks.

    • Isn't it the case that OpenAI and Anthropic regularly swap places at the top of the latest benchmarks? They're also so close in scores that it's effectively a wash anyway.

      What OP is referring to is Anthropic aligning with corporate terms and conditions early, positioning themselves to be effectively resold by AWS rather than requiring orgs to procure them directly. This is huge in the enterprise world because the processes to get broad approval are generally far smaller and shorter for "just another AWS service" compared to a whole new vendor.

      2 replies →

    • IMHO the benchmarks aren't useful, and ranking among the frontier models is mostly noise. The extra features around the coding agent have a much bigger impact on productivity than having to provide slightly more specification and guidance to the models; a 90% success rate versus a 92% success rate on the tasks I ask it to do is far more influenced by what I say than what the model is capable of.

    • Didn’t they say Sora will only be used to internally create training data? Integrated image generation seems more like a neat feature than some fundamental advantage, but maybe someone has use cases I haven’t considered.

OpenAI just gave up Azure exclusivity, killed the AGI clause, and stopped paying Microsoft a revenue share in order to get on AWS. Anthropic figured out 18 months ago that enterprises buy from their cloud, not from the best model. OpenAI is just catching up.

Just waiting for Gemma 4 and DeepSeek 4 now. Then the only thing I'll be able to complain about is the completely different API to interface with (unless they FINALLY move to full OpenAI API compatibility).

This would be a nice compliance win. One less sub-processor, and all our data is already on AWS, so there's less worrying about sending it off somewhere else.

Great, I can now buy OpenAI through AWS with an interface that is totally incompatible with all my tools (unless AWS have finally given up and made Bedrock useful by adopting the OpenAI API at last).
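For anyone who hasn't hit this, the incompatibility is mostly request shape. A sketch of the same chat turn in both styles; the model names here are placeholders I made up, and the Bedrock Converse shape nests content as typed blocks rather than a plain string:

```python
# Illustrative only: the same chat turn in the OpenAI Chat Completions shape
# vs. the Amazon Bedrock Converse shape. Model identifiers are placeholders.
openai_style = {
    "model": "gpt-5",  # hypothetical model name
    "messages": [
        {"role": "user", "content": "Hello"},  # content is a plain string
    ],
}

bedrock_converse_style = {
    "modelId": "openai.gpt-5-v1:0",  # hypothetical Bedrock model ID
    "messages": [
        # Converse nests content as a list of typed blocks, not a string.
        {"role": "user", "content": [{"text": "Hello"}]},
    ],
}
```

The renamed keys (model vs. modelId) and the nested content blocks are why tools built against OpenAI-style clients can't hit Bedrock without a translation layer in between.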

The market might be increasingly hard on AI startups in general as enterprises adopt providers like Amazon Bedrock and refuse to sign other deals.

Well that didn't take long.

This is big news for AWS-hosted products.

Microsoft Azure has been the worst in terms of maintaining a highly available service and managing predictable latency.

Their Azure customer support is bad, too; not ready for any real enterprise cloud offering. They behave like Comcast customer support.

It was absolutely idiotic to lock it down to Azure. It wasn't meant to be an iPhone+AT&T combo where the phone is the end-all-be-all.

A cloud product depends on a lot of services, and nobody would switch cloud providers for a piece of candy.

OpenAI frontier models coming to Bedrock soon?

  • > Starting today, @awscloud and OpenAI are bringing the latest OpenAI models to Amazon Bedrock, launching Codex on Amazon Bedrock, and launching Amazon Bedrock Managed Agents, powered by OpenAI (all in limited preview). AWS and OpenAI will continue to bring the latest advances to Amazon Bedrock—so the models and agents you build with today continue to benefit from new breakthroughs as they arrive.

    https://x.com/amazon/status/2049178618639839427

  • We've updated the title above to make that clearer.

    Since the product doesn't seem to be available yet, and the other links are all press releases, we'll leave the interview up as the main link.

This doesn't mean you have the raw model weights, right? That's still entirely hidden / opaque?

You can just run "air gapped" inference?

Is this only of interest to enterprise customers already on AWS (who want "air gapped" behavior)? Is there any other use case for this?

This will be more expensive than calling OpenAI directly, right?

  • A lot of companies already have data processing agreements and compliance sign-off for using AWS. Many are hesitant to send their data to AI startups that have an incentive to train their models and a history of being... loose with how they intake training data, even when they do give assurances otherwise. AWS is more trusted in this respect.

    If this ends up similar to Claude on Bedrock, it's the same price.

  • This is for people who don't trust OpenAI with their data, but do trust Amazon.

    But it is also for devs at a company who already have a blanket agreement with Amazon but would face an uphill battle signing an agreement with OpenAI.
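For what it's worth, a sketch of what that path looks like in practice. The request below targets boto3's Bedrock Converse API; the model ID is a guess on my part, since the OpenAI models are only in limited preview, so check the Bedrock console in your region for the real identifier.

```python
# Hypothetical Bedrock request for an OpenAI model. The modelId is made up;
# the Converse request shape (messages + inferenceConfig) is standard Bedrock.
request = {
    "modelId": "openai.gpt-5-v1:0",  # placeholder, check the Bedrock catalog
    "messages": [
        {"role": "user", "content": [{"text": "Summarize this contract."}]},
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.0},
}

# With AWS credentials already configured (the whole point of the blanket
# agreement), the call itself would be:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
```

The billing, IAM, and audit trail all stay inside the existing AWS relationship, which is exactly what makes this an easier procurement conversation than a new OpenAI contract.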