https://archive.is/xP4N1
I’m even more confused now than before I read the article:
- Sutskever and Murati compile evidence of Altman’s lies and manipulation to oust him.
- Sutskever emails the evidence to the board and suggests they act on it.
- The board fires Altman but refuses to explain why.
- Murati demands the board explain why.
- The board refuses, and Murati and Sutskever rebel against the board, joining other employees in petitioning to reinstate Altman.
It all makes no sense. And why wouldn’t the board just explain their decision if Murati herself was imploring them to do so?
I read the article on archive and figured there was a big chunk missing. It really does not make any sense.
Sutskever and Murati were methodical: they waited until the board was favorable to the outcome they wanted, engaged with board members individually to lay the groundwork... and then just changed their minds when it actually happened!?
The article says Sutskever was blindsided by the rank-and-file being on Sam's side. Presumably he thought the outcome was going to be business as more-or-less usual but with Murati or someone as CEO and then panicked when that didn't happen.
The board did not plan or execute their ouster well, which forced Murati and Sutskever to counter their own coup to maintain the stability of the company. The board and Sutskever were expecting the general support of the company, so they had no real backup plan or evidence ready that they could publicly release.
Why couldn’t they release the evidence? At least some of it is here in the article, and it’s damaging to Sam but not particularly damaging to the company. If Murati demanded they release the evidence, why refuse?
It makes perfect sense if you don't try to read too much logic into their actions and view them solely through the lens of social dynamics and emotions (Murati realised early that the coup led by Sutskever, Toner, and herself had failed). Besides that, the board installing her as the new CEO - the person who provided the main claims for ousting the old CEO - wouldn't fly with employees and partners. She knew that. Also, some of the people on the board clearly weren't qualified for the job, as you can see from how this whole coup was carried out.
> And why wouldn’t the board just explain their decision if Murati herself was imploring them to do so?
I think because they were in over their heads. They were on the board to run a non-profit and then it metastasized into a high-stakes Fortune 50-sized company.
People cared about the OpenAI drama when it looked like they might have some real edge and the future of AI depended on them. Now it’s clear the tech is cool but rapidly converging into a commodity with nobody having any edge that translates into a sustainable business model.
In that reality they can drama all they want now, nobody really cares anymore.
Yes, and open source models + local inference are progressing rapidly. This whole API idea is kind of limited by the fact that you need to round-trip to a datacenter + trust someone with all your data.
Imagine when OpenAI has their 23andMe moment in 2050 and a judge rules all your queries since 2023 are for sale to the highest bidder.
It doesn't need to wait until 2050. The queries would be for sale as soon as they stop providing a competitive advantage.
Even worse for these LLM-as-a-service companies is that the utility of open source LLMs largely comes down to customization: you can get a lot of utility by restricting token output, varying temperature, and lightly retraining them for specific applications.
The use-cases for LLMs seem unexplored beyond basic chatbot stuff.
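To make the customization point concrete, here's a minimal sketch of the knobs self-hosted inference gives you: constraining which tokens may be emitted, capping output length, and tuning temperature per request. It assumes the Hugging Face transformers library; the model, prompt, and labels are illustrative stand-ins, not anything from the article.

    # Minimal sketch: constrain a local model to emit only chosen tokens,
    # cap output length, and set temperature per request. "gpt2" is a
    # stand-in for whatever local model you'd actually deploy.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Review: 'Great battery life.' Sentiment:"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Restrict token output: only these two label tokens may be generated.
    label_ids = [tokenizer(" positive", add_special_tokens=False).input_ids[0],
                 tokenizer(" negative", add_special_tokens=False).input_ids[0]]

    def allowed_tokens(batch_id, input_ids):
        # Called at each decoding step; returns the permitted token ids.
        return label_ids

    output = model.generate(
        **inputs,
        max_new_tokens=1,               # hard cap on output length
        do_sample=True,
        temperature=0.2,                # low temperature for classification
        prefix_allowed_tokens_fn=allowed_tokens,
    )
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))

Hosted APIs expose temperature and length caps too; the hard vocabulary constraints and the light retraining are where local models pull ahead.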
Selling tokens is likely to be a tough business in a couple of years
There's more to business than tech. There's more to business than product.
The software behind Facebook as an app wasn't particularly unique, yet it eclipsed the competition. The same could be said for Google. Google didn't even have any real lock-in for years, but it still owned consumer mindshare, which gave it the vast majority of search traffic, which made it one of the most valuable companies in the world.
ChatGPT is in a similar position. The fact of the matter is, the average person knows what ChatGPT is and how to use it. Many hundreds of millions of normal people use ChatGPT weekly, and the number is growing. The same cannot be said of Claude, DeepSeek, Grok, or the various open source models.
And the gap is massive. It's not even close. It's like 400M weekly ChatGPT actives vs 30M monthly Claude actives.
So yes, the average Hacker News contrarian who thinks their tiny bubble represents the entire world might think that "nobody cares," in part because nobody they know cares, and in part because that assessment aligns with their own personal biases and desires.
But anyone who's been paying attention to how internet behemoths grow for the past 30 years certainly still cares about OpenAI.
> The software behind Facebook as an app wasn't particularly unique, yet it eclipsed the competition. The same could be said for Google.
I remember the search engines of the time and Google was a quantum leap.
ChatGPT is even more revolutionary, but whatever Google is now, it was once brilliant.
You can't compare Facebook with ChatGPT because the costs per user are in totally different orders of magnitude. One $5/mo VPS can serve the traffic of several hundred thousand Facebook users, while ChatGPT needs an array of GPUs per active user. They can optimize this somewhat, but never as much as Facebook can.
This means that they're stuck with more expensive monetization plans to cover their free tier loss leader, hence the $200/mo Pro subscription. And once you're charging that kind of price to try to make ends meet, you're ripe for disruption no matter how good your name recognition.
> Google didn't even have any real lock-in for years, but it still owned consumer mindshare, which gave it the vast majority of search traffic, which made it one of the most valuable companies in the world.
This isn't correct at all. Google's search engine was an important stepping stone to the behavior that actually gave them lock-in, which was an aggressive, anti-competitive and generally illegal effort to monopolize the market for online advertising through acquisitions and boxing out competitors.
It really was only possible because, for whatever reason, we decided to completely stop enforcing antitrust laws for a decade or two.
400 million use it for free; you can give away 400 million of anything for free. The question is, how many are willing to pay the monthly fee required to stop OpenAI from bleeding $5 billion/year and return the promised trillions to investors.
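Rough back-of-envelope on that question, using only the figures in this thread plus an assumed $20/mo price point (all numbers hypothetical):

    # Hypothetical arithmetic: subscribers needed to cover a ~$5B/yr burn.
    burn_per_year = 5_000_000_000                    # ~$5B/year, cited above
    price_per_year = 20 * 12                         # assumed $20/mo plan
    subscribers_needed = burn_per_year / price_per_year      # about 20.8M
    weekly_actives = 400_000_000                     # ~400M, cited above
    conversion_needed = subscribers_needed / weekly_actives  # about 5.2%
    print(f"{subscribers_needed:,.0f} paying users = "
          f"{conversion_needed:.1%} of weekly actives")

And that's just break-even on the burn, before any return of the promised trillions.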
[flagged]
Regardless of how wrong someone is or you feel they are, can you please make your substantive points thoughtfully? This was not a good Hacker News comment, and you've unfortunately been doing this repeatedly lately.
https://news.ycombinator.com/newsguidelines.html
OpenAI is spending $2 for every $1 it earns. It's certainly eating its investors' lunch, but it's not a sustainable business yet and from all accounts doesn't have a clear plan for how to become one.
Meanwhile, the ZIRP policies that made this kind of non-strategy strategy feasible are gone.
> They had banked on Murati calming employees while they searched for a CEO. Instead, she was leading her colleagues in a revolt against the board.
I finally finished the 4th of Caro's books about LBJ, "The Passage of Power", the largest part of which is about how LBJ dealt with the assassination. Over and over it shows how LBJ made sure that nobody (meaning world leaders, citizens, others in government, and, relevant here, those in the Kennedy administration) would feel lost and want to resign. Caro makes sure to note how difficult a task this was and how it required LBJ to act differently than normal, but also how important it was to not let things slide into disarray, which easily happens.
Side note: there are astounding accounts of how bills that weren't going to get through Congress under Kennedy were pushed through and made possible by Johnson. A quote ending one chapter, from Richard Russell, a thoroughgoing southern segregationist: "You know, we could have beaten John Kennedy on civil rights, but we can't beat Johnson." On the other side, Caro makes certain that the coming issues of Vietnam show the darker side of LBJ, so we don't get fully caught up in his stabilizing of power and his civil rights successes.
Maybe these are all cases of how those who want power are usually those who shouldn't have it.
To save others the lookup: this is not talking about assassinations carried out by the administration abroad, but about the Kennedy assassination.
So Sam let the cat out of the bag (ChatGPT) behind the backs of "safety review" and the board. Probably why Google was caught flat-footed and how ChatGPT became a household name.
Dubious moral decision but an excellent business one. Perhaps the benefit of hindsight where ChatGPT didn't cause immediate societal collapse helps here.
ChatGPT is already out when the story picks up; it's talking about concerns about GPT-4.
And the story isn't about that single incident of Altman dodging review and working behind the backs of the board—it's about a pattern of deception and toxic management practices that culminated in Altman lying to Murati about what the legal department had said, which lie was given to the board as part of a folio of evidence that he needed to be ousted.
You're trying to distill a pattern of toxicity and distrust into a single decision, which softens it more than is fair.
Yeah, to me the overt lying is more damning than any particular decision. If he owned the decision to bypass ethics review and release a model, fine, we can argue whether that was prudent or not, but at least it's honest leadership. Lying that counsel said it was OK when they hadn't is a whole other thing! When someone starts doing that repeatedly, and it keeps getting back to you that stuff they said was just outright false, you can't work with them at all, imo.
If this is something he's been doing for years, it becomes clearer why Y Combinator fired him, though they have been kind of cagey about it.
The question then remains: if you have a lying, toxic, manipulative boss, who would want to work for them? Especially as one of their direct reports.
Aside from becoming the opposite of the values their name suggests, there are two main mistakes OpenAI made in my view: violating copyright when training, and rushing to release the chatbot. Stealing original work is going to bite them legally (opening them to all sorts of lawsuits while killing their own ability to sue competitors piggy-backing off their model output, for example), and is a special case of them being generally shortsighted and passing on an opportunity to make a truly Apple- or Amazon-scale business by applying strategy and longer-term thinking (even if someone else got to release an LLM chatbot before them, they could—as in, had the funds and the talent to—build something higher level, properly licensed, and much more difficult to commoditise).
If this was the fault of Altman, it is understandable that certain people would want him out.
> violate copyright when training
If we could incrementally update our own brains by swapping cells for chips, what percentage of our brain has to be chips before us learning from a book is a violation of copyright?
When learning to recite a recent children's poem in kindergarten, what level of accuracy can a child attain before their ability to repeat it privately to one other person at a time is a copyright violation?
Do the copyright claims have any legs at all? IANAL, but I thought it was pretty settled that statistical compilations of copyrighted works (indexes, concordances, summaries, full-text search databases) were considered "facts" and not copies.
(This would be separate from the contributory infringement claim if the model will output a copyrighted work verbatim)
I was with you until “immediate societal collapse”, what?
Obviously safety is not a problem if the lack of it doesn't cause an immediate end to civilization.
/s
So, why didn't the board tell the other executives (and employees) what Murati had told them? When it was them in the firing line, why didn't Ilya tell that story? They could have just fired Murati (based on the screenshots presented) and continued as before. Or what am I missing?
Yeah, I don't understand this either. Make a case or don't, but keeping it incredibly vague wasn't going to work, especially when so much money was on the line due to the secondary.
So Sam was getting paid - possibly in egregious amounts - while lying to Congress?
VC huckster lies to the public, news at 11.
Why has safety taken such a back seat? Were the fears overblown back in 2022, or have model providers gotten better at fine-tuning the worst away?
TLDR:
In November 2023, OpenAI CEO Sam Altman was suddenly fired by the board—not because of AI safety fears or Effective Altruism, but due to concerns over his leadership, secrecy, and possibly misleading behavior. CTO Mira Murati and chief scientist Ilya Sutskever shared evidence of Altman’s actions, like skipping safety protocols and secretly controlling OpenAI’s startup fund.
The board didn’t explain the firing well, and it backfired. Murati, who at first supported the board, turned on them when they wouldn’t give clear reasons. Nearly all OpenAI employees, including Murati and Sutskever, threatened to quit unless Altman came back. With the company on the brink of chaos, the board caved and he was reinstated days later.
https://archive.is/20250329135312/https://www.wsj.com/tech/a...
From the outside it really seems like Peter Thiel was a brilliant kid who read Lord of the Rings and became obsessed with becoming the real world Sauron, manipulating weak-minded men in Silicon Valley into following the path of soulless corruption.
Did he read the end?
Skill issue. Easily overcome.
Not closely guarding the only means to destroy his source of power was such an obvious plot hole and oversight. ;)
You can innovate that part away /s
Text-only, works where archive.is is blocked:
https://assets.msn.com/content/view/v2/Detail/en-in/AA1BRU7s
For anybody who followed the saga at the time, there's nothing revelatory here as implied by the title, but the essay (an excerpt from a book) is a useful summary if you wanted one.
https://archive.is/i3DHj
One of the strengths of the Chinese companies is that they are more aligned on their goals as team members: make cutting-edge LLMs and sell access. When you have all these competing interests you end up with internal strife. This faction wants to prevent the literal end of humanity, this faction wants to make the world a better place, this faction wants to curry favor with Washington apparatchiks. Nobody is really that interested in making money by inventing new algorithms. The result is frankly embarrassing drama for the whole world to indulge in.
“One Company for All People” is a great weakness of American companies and is contributing to this economic downturn, and not just in tech. Corporate Universalism needs to go the way of history.
Chinese companies have one massive advantage in aggregate: they know that from 2028 onwards they will be competing for a captive domestic market of >1.3B people. The CCP have declared as their industrial [service] policy that by the end of 2027, all Chinese companies must be using services exclusively from Chinese suppliers. The target ratio of domestic/foreign services is being ramped up year over year, so that by 2028 the base expectation is everyone to have 100% Chinese suppliers only.
From then on, every exception must be justified to - and approved by - their respective politburo.
An obvious second-order effect is that there has been an explosion of Chinese B2B companies eager to get themselves established in the market. They know that in just a few years they can still sell their services outside China but can expect very limited competition from non-Chinese companies. And inside the country, they have a population ~4x that of the US to compete for.
Chinese strength is that they have a manufacturing economy and an oversupply of everything.
They have a great incentive to quicken AI science, as it will lead to the disruption, if not replacement, of the knowledge economy, in other words the US economy. I believe this to be the hidden motive, and it's not about profit.
That, and state sponsored hacking groups and corporate moles sending everything they can back home.
This is old info. China is more than a manufacturing economy nowadays.
It’s quickly surpassing the US.
I'm sure other priorities at those Chinese companies are also in conflict with making money.
Indeed, making money is obviously a much better goal than embarrassing things like preventing the end of humanity or making the world a better place.
What evidence are you basing this on?
The way I see it, everything points to the opposite being true: US companies, by and large, are completely dominant in technology. Google seems to be winning the race and is exactly the "one company for all people" kind of place you're talking about. Academic studies have generally shown that diverse teams outperform monocultures.
Meh, I personally don't care that much about OpenAI drama anymore. By now it is clear that they do not hold any edge and that they won't be able to establish an AI monopoly, and that's all I ever cared about.
That’s not what their revenue says, though, right?
We lose a dollar on every sale, but we'll make it up on volume!
It’s good for the community to move beyond drama and build
Literally nothing gives me any trust in using and adopting tech from a sociopath like Altman.
Sam Altman is immune to consequences
That tends to happen if you’re rich.
This is definitely the new US. Boy have my illusions been shattered.
And then to see people here defending this new state of affairs. It's a different world we are living in now.
Some of the early comments here almost read as astroturfing from friends of 'sama. This article provides interesting context to one of the most consequential SV events of the decade.
[flagged]
if you say something's provably false but you don't prove or even allude to a reason why someone should believe you, um, it's not very convincing
You're 100% right on that.
But this should be enough for you,
"This account is based on interviews with dozens of people who lived through one of the wildest business stories of all time"
Can you explain your pov in detail? I’m interested.
Sure, send me an email and I'll be glad to.
I wouldn't feel comfortable writing about it; it's not illegal, but it just doesn't feel right.