Sam Altman's response to Molotov cocktail incident

6 days ago (blog.samaltman.com)

It is fair to be critical of Sam and other tech leaders regarding AI, but he has done nothing to begin to justify violence or even the threat of violence against him or his family.

  • If I torture one person, that is bad. If I inflict a smaller suffering on millions of people, does it reach parity? Were the Sacklers' actions, which ruined so many lives, led young girls and boys into forced prostitution, and led to so many ODs and suicides, something that ranked close to deserving violence? Or does the 'LLC/Corp' absorb all responsibility like some capitalist papal indulgence?

    If the Sacklers' actions are visible evil, where on the 'LLC/Corpo' scale does evil turn into 'acceptable business'? At what point do the choices made by management to inflict damage on many, many people become 'acceptable business', with the perpetrators disconnected from their actions and choices?

    'LLCs/Corporations' absolve management of liability and accountability in the government's eyes, but you are making an assumption that this extends to absolution when it comes to actual morality. While you can try to sell 'articles of incorporation' as modern indulgences freeing people from sin under the religion of capitalism, I'm not sure all of society agrees. I think the concept that an LLC or incorporation is a blanket 'papal indulgence' absolving management of all accountability and moral responsibility in our modern techno-feudalist social structure is wearing thin for a lot of people. Clunky as hell language, but it's a discussion that needs to be had, and better for all sooner rather than later.

    • > Or does the 'LLC/Corp' absorb all responsibility like some capitalist papal indulgence?

      That was a solid line

      Callous indifference seems fine if it’s done at a large enough scale and the harm is impersonal enough. Murder is too small, too targeted.

  • Didn’t we just go through several weeks of hearing about OpenAI allowing its tech to be used for conducting warfare?

    Not saying that justifies harming Altman, but I am confused that he seems surprised he is now in physical danger. [Or that he chalks it up to just some single specific incendiary article rather than the company's actual actions?] If you involve yourself in the act of killing people then, yeah, you’re going to get blowback for that, and some people are obviously going to want to hurt you.

    • The US is still a democracy.

      It's absolutely ok to oppose war.

      It is absolutely not ok for "some people to want to hurt" someone who is running a company that is vying for contracts from a democratically elected government's defense department.

      It's also ok to protest that, to boycott it, or to refuse to work for or with them over it. But escalating that to physical violence is not ok, nor should people be "confused that he seems surprised he is now in physical danger".

      (As an aside, from the statements I've heard so far it seems the person was more an anti-AI, anti-tech person than anti-war)

      26 replies →

    • > Didn’t we just go through several weeks of hearing about OpenAI allowing its tech to be used for conducting warfare?

      Unfortunately warfare is a thing. Why wouldn't you want the best technology used for your country when conducting warfare? Or do you just believe warfare would cease to exist if a country gave up any means of defense or offense?

      8 replies →

    • When was the last time a molotov cocktail was thrown at the house of an arms manufacturer?

      Trump and other presidents literally started wars and ordered people to be killed. When was the last time they were physically attacked?

      3 replies →

  • Agreed! Have you heard of Suchir Balaji?

    • Holy shit how is this the first time I am hearing about this? This should not be my first time hearing about this.

      > Suchir Balaji (November 21, 1998 – November 26, 2024) was an American artificial intelligence researcher who was found dead one month after accusing OpenAI, his former employer, of violating United States copyright law -Wikipedia

      3 replies →

  • Unpopular opinion. It depends.

    I totally agree with your statement if we are talking about the average citizen starting to throw Molotovs at his house. If you’re afraid AI is taking your job, just do something else. It’s not the end of the world changing careers.

    There is plenty of work AI won't be able to do, or won't be allowed to do, without a human assisting in some way that secures the human a good income and way of life.

    So if this is done by an individual citizen, they need to be hunted down, arrested, and get the full force of the justice system to deter others from doing the same.

    On the other hand, right now, Sam Altman is a valid military target for assassination in the US / Iran war.

    OpenAI did snatch up the contract from Anthropic at the Pentagon, and their technology is in some capacity used to murder Iranian HVTs (High Value Targets). Altman is therefore technically a legal HVT for the Iranians.

    If you say it’s valid and not a war crime for the US to assassinate former Iranian political figures and their families for aiding the new regime, and therefore becoming enemy combatants in the eyes of the US military, it’s also valid to assassinate Altman and his family for doing the same for the other war party.

    It’s a bit of a Schrödinger situation. He is technically a valid target in a current war, but not for the private citizen.

    In both cases, though, I’d argue that violence is neither a solution to the problems AI might create for a lot of people in the future, nor should he be treated as an enemy combatant and his infant child and wife bombed to smithereens.

    Diplomacy is key here, just like it would have been the better solution than going to war with Iran.

    If you disagree with Altman, send him a letter, show up at his workplace, talk to the man, gather people who think the same of him as you do, write letters to your elected representatives, make calls, and vote politicians into office who are anti-AI and who will go after him and regulate his company to shit. Bureaucrats can make Altman’s life more miserable than a thousand Molotovs ever could.

    If you gather enough support, you can reach the same goal, taking his power over your life away, without any violence.

    But are you really surprised people choose violence over the democracy toolbox in the US when they are told by the people in charge of their country that violence is indeed a good way to solve problems, that you should have a "warrior" spirit, and that everything is up for grabs, even sovereign countries like Greenland, because you can out-violence any other nation on the planet?

    Violence only creates more violence, and as long as there is a president who chooses to pour oil on the fire and pretends it’s ok to murder US citizens like Alex Pretti, you don’t really need to wonder whether the average citizen will start murdering tech CEOs in the near future.

    They are just following the top-down approach to violence as a tool that the leadership models by example.

      > If you say it’s valid and not a war crime for the US to assassinate former Iranian political figures and their families for aiding the new regime, and therefore becoming enemy combatants in the eyes of the US military, it’s also valid to assassinate Altman and his family for doing the same for the other war party.

      Sam isn't a political leader, so this comparison is flawed. What the hell, are we really arguing about whether assassinating a long-standing figure of this community is valid? Seriously??

      15 replies →

  • [flagged]

    • I am not saying he should not be criticized, or even held legally liable for his actions. Merely that, you know, firebombing the homes of people whose actions you disagree with is a bad thing.

      Controversial hot take, I know.

      5 replies →

    • If every business(man) who lobbies against regulations for their business is fair game to go after violently (and not just them but their family as well), there would be a bloodbath of epic proportions… one day, this might be you and your family too…

      8 replies →

  • [flagged]

    • > In essence, he has threatened to kill millions of people.

      “In essence” is doing enormous work here, and it will be basically impossible to have any kind of discussion if that work is considered acceptable.

      This kind of word-twisting can be used to make pretty much anyone into a murderer, at which point “discussion” will come down to who the mob chooses to listen to.

      6 replies →

    • Words can justify violence. A serious threat of violence is a reasonable basis for acting in self-defense. Another comment said the same about pre-emptive self-defense: as if one should wait to be shot at, even while a gun is pointed at them, before shooting back.

      2 replies →

  • > he has done nothing to begin to justify violence

    No one does!

    I also found the news hard to believe, but it is true:

    https://www.bbc.com/news/articles/czx91rdxpyeo

    I'm not a big fan of Sam Altman, but violence like this is not a solution; it actually has the opposite effect, as it probably did with Trump.

    • Actions have consequences. There will always be people in the world who get pushed beyond the limits they can endure. It reminds me of that CEO who got gunned down by someone affected by the company's business of profiting from denying health insurance claims on technicalities.

      I don't support this, and yet I know that for every harm people in these corrupt institutions are involved in, the universe gives back what is due.

      If you want to stop the harm, stop harming the world with your actions, in whatever way that needs to manifest for you.

      3 replies →

    • Reading that BBC article, about how the attacker got caught while shouting at an OpenAI building, it seems likely that this attacker is confused or deranged, not specifically someone with deliberate evil intent.

      So the headline seems to be more "high profile person attacked by lunatic" than "OpenAI CEO attacked for being evil".

  • Justice isn't just about punishing the guilty. It is also about restoring the trust in our society when it has been damaged by criminals. Very few Americans I know have any faith in our justice system's ability to hold the wealthy accountable. As a result, we will see more and more violence as a natural consequence.

    Sam Altman could use his considerable wealth to hold billionaires like himself accountable for the crimes they commit, whether by lobbying or by funding investigations. Seeing criminal billionaires face justice would go a long way toward reducing this kind of violence.

  • Is there anything anyone can do that justifies violence or threats of violence? No. Even if that person is a proven child molester, a just society stands on just law.

    But as far as political justification goes, he is as valid a target for hostile nations as the Iranian nuclear scientists were (unless he has zero involvement with the USG). That's just the world we live in.

    Use your tech for war in other nations, and you give other nations a justification to target you. The same goes for the Lockheed Martin CEO, etc.; nothing specific against Sam. But saying nobody has any valid reason to target Sam like this is pretty stupid, imo.

    • I’m pretty sure that if someone sexually assaulted or murdered my child, I’d be more than morally justified in getting a few, or a lot of, punches in.

      Some people are treated a whole lot better than others in prison.

      4 replies →

I have many disagreements with Sam Altman. But physical attacks are never the answer. Especially attacking one's family.

This is both horrible and not at all surprising.

Every quarter there are more layoffs and we're told how AI will replace us and that we can do nothing to stop it. We cannot afford the simple things our parents were able to and are supposed to be grateful that we are living in a time with such "amazing" technological progress.

Sam is one of the most media-visible people representing the AI replacement of average people's livelihoods (not agreeing with this stance, but yes, outside of the Hacker News SF-tech matcha-latte bubble, this is a commonly held view), which makes this unsurprising.

Still horrible and not right.

  • Sam and his peers spend half their lives on TV poking the bear, laughing about how they are going to use their tech to ruin people's lives and how there is nothing those people can do about it. Some of those people have nothing to lose.

    • Jesus, can’t believe I have to white-knight Altman here, but can you point to a single video or interview where any of the AI CEOs has been “laughing about how he is going to use his tech to ruin people's lives”?

      This is the exact kind of poisonous, plausible-sounding but false and inflammatory rhetoric that is escalating things.

      2 replies →

An interesting thing about one facet of how society has developed over the past decade and a half, I think, is that a byproduct of more people being conscious of the quest to monetise almost anything is that it has also raised the level of general scepticism about whether something is marketing or real. So you have increasingly more scenarios where an objectively bad thing can happen to someone, but any public response is scrutinised and questioned within an inch of its life, sometimes rightly, sometimes not. I don’t particularly like it, but that’s where we are, I guess.

  • > any public response is scrutinised and questioned within an inch of its life, sometimes rightly, sometimes not.

    This is a fairly healthy response from the public - better than accepting everything at face value. Plato's Allegory of the Cave is a warning against accepting random information in a vacuum to assess your surroundings. Observation and response were not enough to make a critical thinker, even in ancient times.

    From where I'm standing, the public at large is traumatized by flubbed coverups like the Snowden leaks, the Epstein files, and Abu Ghraib. The myth of American exceptionalism has been under threat for a long time, and people rightfully question whether or not executive leadership can write off their involvement in politics. Sam Altman has put on an extremely dangerous pair of boots, and while that doesn't justify attacks on his person, we all know that speculation will continue as new events come to light. Right or wrong, this is what the public is conditioned for now.

Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to suggest that a negative article about him is correlated with this event. That seems to imply that an “incendiary article” led to this and that criticism is tantamount to a call to violence. He drives the conversation in apocalyptic terms, and both investors and crazy people buy into it.

  • > but I don’t think violence is funny or justified

    Well, that's okay, because even Sam Altman disagrees with you. He absolutely believes that violence, including deadly violence, is justified - hence his contract with the US Department of War to use their systems in kill chains.

    Perhaps the problem is that whoever threw the cocktail didn't use AI to select him as a target, or maybe he didn't receive payment for throwing it? Because what other difference is there?

    • I mostly agree with you - he seemed happy for the chance to play the victim. When the system is working, war is different because it has a democratic process behind its approval (Iran is obviously showing that the system is breaking down).

      But just because horrible people exist in positions of power doesn’t mean I have to become horrible myself. I accept that there is a threshold where that changes, but I think we would disagree on whether we’ve hit that threshold. If anything, violence now just gives more excuse to justify further consolidation of power (look, I got attacked! The anti-AI people are crazy; any criticism of me is just encouraging them!). Imagine if it had been a serious attack on sama; they could spin it into some serious gains for themselves.

  • The problem is Sam is a prolific liar, as has been proven many times.

    It's difficult to sympathize with the boy who cried fire.

    • I don’t think someone should be burned alive because they’ve lied, unless they’ve spread intentional lies that have caused death or harm to others, which I don’t believe Sam has done. Personally, I find it very easy to sympathize with someone who was attacked, unprovoked, in their own home with their family, even if they have lied in the past. It’s crazy how bloodthirsty people have become lately.

      2 replies →

  • I think Sam and people like him are *spoilers* like Jules-Pierre Mao and Dresden on The Expanse.

    I think that he may genuinely believe that AI will produce a net benefit for humanity in the long term, but I am increasingly worried that they are absolutely fine testing their creation on the world without any consideration of the harm it can do to millions of individuals.

    The assertion that he is benign would be more believable if he spent a shred of time lobbying for universal economic rights of citizens, or some model for redistribution of wealth in a world where most people don't need to work to provide the necessities of society.

    Oh, and he's willing to let the government use his technology to mass-spy on Americans and to create autonomous lethal AI.

    Pearl-clutching about ambivalence to his fate and comparing it to the barbarism of a mob gets shrugs from me.

Sam Altman has written, and probably still believes,

"Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."[0]

This means he acknowledges that his actions have the potential to kill every human family on Earth. It should come as no surprise that people took his beliefs seriously.

[0] https://blog.samaltman.com/machine-intelligence-part-1

> Words have power too. There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside.

> Now I am awake in the middle of the night and pissed, and thinking that I have underestimated the power of words and narratives. This seems like as good of a time as any to address a few things.

This kind of reads like “It is Ronan Farrow’s fault that some crazy person tried to burn my house down”.

Like this guy was going to go about his week, being normal and not making Molotov cocktails, but then he picked up a copy of The New Yorker and lost his mind

1) It's terrible that this has happened. People who do this are evil.

2) It's atrocious that Sam makes it seem like any investigative reporting into him as a major public figure at the head of one of the 5 most important companies in the world is somehow responsible for it.

3) Sam is always playing the smol bean victim for sympathy points. To be clear, he is absolutely the victim of an atrocious crime. However, this post was written for no reason other than to continue the exact same playbook he has run for the last N years in order to manipulate public opinion in his favor. This post will do nothing to stop deranged, evil people, but it may make people feel sympathy for him.

He says power can't be too concentrated - but even n-2 generation models are not open.

He says "look at me I love my family" - so do the millions of people who think his company may destroy the economy and help corporations and the trillionaires put a boot to our children's necks.

3:45am in the morning - no dip, that's what AM is.

---

Someone here asked "How do we get to post scarcity from here?" and someone else said "no one knows".

The AI barons are loading up their bank accounts and political capital, driving us off a cliff and promising we'll learn to fly by the time we get there. But they're going to tuck and roll out of the driver's seat.

Sam, why do you expect us to believe anything you say when you have done nothing to lead the discussion about universal rights for citizens in a post scarcity society?

> My personal takeaway from the last several years, and take on why there has been so much Shakespearean drama between the companies in our field, comes down to this: “Once you see AGI you can’t unsee it.”

Except nobody has seen AGI. Not even close.

> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time

Reason enough to pause and figure out the best way to continue. A massive societal change that won’t all go well means millions dead and tens of millions more with their lives upended.

I can't help but be reminded of last year, when our landlords (chill boomers) sold the house my girlfriend and I were renting the basement of (to presumably rich asshole millennials). The demographic doesn't really matter, but the old landlords kept us in the loop throughout the process; we knew as much as we could going into the new year. Apparently the new buyers wanted to keep us as tenants. On day 2 of them taking possession, the man came down with his innocent toddler and introduced themselves. He seemed friendly enough, and on day 3 he came down in the middle of the day and handed me eviction notice papers.

I didn't firebomb his house, but I can't say I definitely didn't want to shit on his doorstep.

  • If you had a lease, the new owner was obliged to honor it and should not have been able to evict you, at least until the end of the lease term.

    • There's a provision for personal use that stipulates they can't re-rent the unit for a year. It wasn't illegal, but it was an asshole move. They also tried to take more than our full deposit, which we refused, and they relented. Basically, he's just a scumbag.

      2 replies →

>“Once you see AGI you can’t unsee it.” It has a real "ring of power” dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of “being the one to control AGI”. The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring.

The analogy has 2 simple rules and you can't even follow them:

#1 It MUST be destroyed.

#2 SOMEONE has to have the ring until then.

Without BOTH of those things you have no meaningful analogy. If we're being super charitable, "For no one to have the ring" is Frodo sitting at the council, with the ring on the table, naively thinking that it can stay right there in that spot forever, safe in Rivendell, about to have the horrifying revelation that there are 2.5 more books in the story. More realistically, it's Boromir moments later arguing that Denethor has the mandate to use it to fight on Gondor's behalf.

Fuck. I'm so past the point of caring about the extinction of our species, or your role in enslaving us to our robot overlords or whatever... but SELLING US SPECIOUS RING ANALOGIES IS WHERE I DRAW THE FUCKING LINE

I've skimmed the thread here and I am now seriously considering leaving HN for the first time in about 15 years. Here are some quotes from what used to be a pretty interesting and thoughtful community:

> Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.

> the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.

> Sociopath who rides a high ego wave and drinks his own Kool-Aid, acting highly amorally and then complains that his actions have some (benign) consequences.

> A cavalier attitude and allegiance to nothing but capital doesn't make you immune to basic human morals, and humanity will, rightly in my opinion, punish you whether you like it or not.

These comments are disgusting. The people who made them should be ashamed. But they are probably too stupid to be, assuming they are people and not bots, which I no longer feel certain of for all too many comments here.

  • In my time on HN I have seen numerous people advocate for the following:

    * ending all covid measures to achieve herd immunity, accepting that this condemns hundreds of thousands or even millions to die

    * ending foreign aid that goes to tuberculosis treatments, condemning hundreds of thousands or even millions to die of a treatable disease

    * accepting the deaths of Iranian, Palestinian, or Israeli children as collateral damage because of the evils of their governments

    Or go read any thread involving the Jordan Neely story.

    Somehow it is vastly more evil when violence is acute and focused at a single wealthy person.

  • Have you asked yourself why someone went as far as hurling a molotov at his place in the first place?

    I would never, but you have to understand that serious pain and harm is being inflicted on people, AT SCALE, by the advent of AI. I'm not even talking about Israeli, Palestinian, or Iranian kids. People in America with terminal illness are losing healthcare.

  • I don’t think they’re bots, the strength of feeling is real.

    Rightly or wrongly, people feel cut out of society at a time when the tech elite are not only making billions but seem to be actively trying to ruin everyone else’s lives; they are legitimately hated.

    And when you’re that hated you do need to be careful, money can’t protect you from everything. At the end of the day we do all have to live in the same society.

    (I don’t have this strength of feeling personally but some people do)

  • > I've skimmed the thread here and I am now seriously considering leaving HN for the first time in about 15 years.

    I'm finding a lot of the comments here pretty reprehensible, but no more reprehensible than the collective shrug the community gave towards murdered Palestinians, or threads about dead Iranians as a result of American bombs that get flagged off the front page. That doesn't make them acceptable or okay.

    Those people's lives are/were valuable, too. It's disgusting that we try to keep HN "clean" of those horrors and the people that flag those threads should be ashamed. Ditto those who think the killing of innocent civilians is okay.

    • Well, you know, dead Palestinians aren't paying their salaries or investing in their companies, so they aren't as important to an accelerator whose last batch was 90+% 'AI' companies.

      Think of the investments they might lose. We can't have any of that, can we?

  • [flagged]

    • > he deserved it [...] I'll have a toast the day he croaks

      As I said to voidhorse, this breaks the site guidelines (https://news.ycombinator.com/newsguidelines.html), as you should know; but given that this thread is a mob and mobs derange people, I'm going to cut you some slack and not ban you. Just please don't do anything like this on Hacker News again.

      > For a social scientist, you're either a really poor one, a poorly read one or one with a complete inability to read the room.

      Personal attacks are also unwelcome here. Lashing out at a fellow community member is mean and shameful, and also undermines whatever argument you were making.

When you live in a barbaric society where the majority don't mind using force to achieve their goals at the expense of minorities or basic international law, peaceful protest becomes useless.

  • It's even potentially worse than that. As a whole, it doesn't even require a majority if a small number are complacent or ignorant. And depending on competency, it might not matter if the majority protest anyhow.

> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.

"Prosperity for everyone" ... you lying weasel! You literally took a contract from Anthropic because they wouldn't mass surveil Americans or mass murder non-Americans ... and you would!

Jfc. People, a Molotov cocktail was thrown at his home.

The rest of what is written doesn't matter. This isn't the moment for that conversation. That's his family. He has a fucking child.

Holy shit.

  • When the attack is being used to craft a very particular narrative unrelated to the attack itself, a lot of other things continue to matter, and yes, they do matter right now. That is on the premise that this isn't some depraved PR stunt. And that is also ignoring how purposefully misleading most headlines, as well as your comment, are.

  • Better stop paying taxes then, 'cause your government, whatever it is, is probably ok with using your tax money, in some cases, to fund the killing of people who have families and children. Now, we can argue about the morality of killing those exact people as opposed to killing Sam Altman, but that's a different discussion. My point is that the real argument isn't over whether it's ok to kill people who have families and children; you're probably ok with that too. After all, bin Laden had a family and children. The real argument is over which people who have families and children it is ok to kill.

    • This but unironically. Federal taxes should be protested, they're basically only spent on killing innocent Middle Eastern children at this point, all useful spending is negligible, especially after this administration.

  • Assassinated Iranian nuclear scientists had kids too. There was a thread here a few days ago letting people explore the deaths of children in Palestine. That thread was taken off the front page via flagging.

  • > The rest of what is written doesn't matter. This isn't the moment for that conversation

    That's terrible that someone did that. I think that's wrong, and people that do that should be in prison.

    But if the rest of what was written didn't matter, it wouldn't be written. He thought it was important enough to put it in. It's there to be read and discussed.

      And I have to point out, we're not talking about a couple of off-the-cuff remarks he may have rushed. About 95% of the post is about his ambitions for OpenAI. So pearl-clutching that people are actually discussing the meat of the post in a tech forum reads as performative.

    • Another comment lacking any compassion.

      The man was reeling from what happened. He blames himself and his work. He sat and he wrote, and naturally it came back to OpenAI. Should he have? Probably not. But it's understandable that he did.

      We can meet the moment with some understanding and give the guy a little wiggle room.

      5 replies →

  • What about those three million people his systems helped murder? They had children. Half of them were children. Do those count? Where was your comment then?

  • If he treats his kid the same way he treated (raped) his little sister growing up, I feel extra bad for that child now.

> The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring. The two obvious ways to do this are individual empowerment and *making sure democratic system stays in control.*

OK! So he's going to renege on the contract he's signed with Hegseth, which effectively commits OpenAI to serving as the IT Department for Trump's secret service?

AI is great. But it seems like those that wield its power only do so to create massive unemployment and benefits to the top 1%.

> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.

For context his blog post seems to be a response to this deep-dive New Yorker article:

"Sam Altman May Control Our Future—Can He Be Trusted?"

https://news.ycombinator.com/item?id=47659135

  • Wouldn't it be more correct to call the article "critical" and not "incendiary"? I looked it over and I don't remember seeing any calls to violence. Altman needs to remember that he holds an incredible amount of power in this moment. He and other current AI tech leaders are effectively sitting on the equivalent of a technological nuclear bomb. Anyone in their right mind would find that threatening.

  • He has to be talking about the New Yorker article, which wasn't incendiary at all. If anything, it seemed fully neutral to me, reporting what they could justify as facts but going out of their way to not specifically paint him or anyone else in a negative light beyond a listing of events that they presumably have solid sourcing on (if not, sue them; if so, stfu).

    If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.

    It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.

I wonder if the attacker asked ChatGPT how to make a molotov cocktail.

It would be an interesting plot twist.

  • And ChatGPT not only taught him how but also told him it was a good idea to do so.

I appreciate his post and his tone.

No one should need to attack (on the one hand) or "trust" (on the other) Sam Altman (or Donald Trump or Barack Obama).

Power is reliance by others, and that's conditioned on behaviors being made observable and on systems that ensure stakeholders' interests are maintained. Yes, there's some hero-worship, some arbitrary private power, some evasion of systems, and some self-dealing by leader coalitions (indeed, we seem to be at a historical peak), but that's not about him personally; it's about us, and our willingness to vote (writ large).

We do have to be careful about private power saying that managing its issues is a matter for public governance (democratic or otherwise). It's a bit convenient to deflect blame (like having the jury be the one that "decides" a case, so that you can't blame the judge). I like that Anthropic stepped up to pay any electricity increases, that Apple has been recycling and cleaning up its supply chain, etc. If anything, there should be stronger support for contributing vs. Hobbesian corporations.

[flagged]

  • Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman, about anything, further than I could throw a rock.

    If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of the mouth of a person like that?

  • 10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".

    I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

    [0] https://news.ycombinator.com/item?id=47717587

    • I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.

      3 replies →

    • Yeah a company causing mass death or other disasters is maybe the single clearest signal that they should go bankrupt and someone else should take over (if the tech is really that important).

    • > I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

      Well that makes two of us. Character seems to mean nothing today.

  • Unpopular opinion, but I think it's written quite well.

    • I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.

      > Working towards prosperity for everyone, empowering all people

      > We have to get safety right

      > AI has to be democratized; power cannot be too concentrated

      None of these statements, IMO, reflect his actions over the past 5 years.

      > we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future

      I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.

      Just my opinion, but it comes off as very insincere.

      To be clear, what happened is still awful and there's absolutely no justification for it.

    • It's "written well" but not at all a smart piece of writing. Leading with a photo of a cute baby before engaging in an extended defense of one's own integrity is so obvious as to be insulting.

In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.

What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...

I don't advocate for violence, but I do foresee more headlines like this as things get worse.

  • Nobody has one. If labor stops having value, the economy will stop working and society will break down far in advance of building the infrastructure necessary for the promised AI abundance.

    I like the idea of being ”post-scarcity” as much as the next guy, but I don’t understand how we get there. It’s a project in itself, it doesn’t just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.

    We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.

    We’ll lose these jobs and there will be no super abundance at that point, and not even government support.

    There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.

    • It is not impossible to think that many people will just be served a UBI and won't expect much more in life. After all, if we have AI + family + housing + food (assuming government robots would take care of providing us free food in some form), I bet millions of people would be contented with it.

      PS: I include AI as an important one for the future because it will be a direct way to get educated and to replace college, for example, without having to pay (or very cheaply).

      3 replies →

    • It made me kind of angry when I saw Dario repeatedly claiming that AI would be taking all the programming jobs any minute now. His company supposedly is working for a better future, but he's giddily talking about something that could cause millions of people to lose their homes if it were true.

      Our governments have a habit of being reactive rather than proactive. People have floated the idea of UBI, but if UBI happens, it will probably mean it's the only way to avert a crisis, and the amount that people will get might only be enough to rent a bedroom and eat processed food.

      I think in the medium term, the reaction is overblown. Even though LLMs can make software engineers more productive, you still have a competitive advantage in having more software engineers. Medium to long term though, the goal is obviously to replace human jobs.

      I'm not a communist, but Karl Marx understood that the labor force gets its bargaining power because they are necessary to produce value. What do people imagine happens when the human labor force becomes essentially completely replaceable? They imagine the government will be forced to take care of the population to prevent an uprising, but they forget that the police and the army can be replaced by machines too.

      4 replies →

  • There isn't much compelling economic data showing that AI has been the cause of any recent layoffs or job losses, yet you speak as if we are already in the throes of an AI takeover. Sam Altman is a salesman; he sells products. That's all he is and ever has been. If you are looking for answers to why people can't afford housing and food, you should look at the politicians in power.

  • I think, like other disruptive inventions of the past, there will be pain for many, but it will pass. Society will grow and adapt. There's some statistic somewhere I will paraphrase and/or botch that goes like: 90% of the jobs people have today didn't exist 50 years ago. I think no one can imagine what possible opportunities will manifest in the future. It's a lot easier to imagine everything that might go wrong because we evolved to see a sabertooth in the rustling leaves.

    • >90% of the jobs people have today didn't exist 50 years ago.

      We also have 100% more people on the planet than we did 50 years ago.

    • > I think, like other disruptive inventions of the past, there will be pain for many, but it will pass

      I agree. We can only hope that it'll be folks like Sam Altman who'll be feeling the pain, and not the 99%.

    • Why do you think so in the specific case of hypothetical improved LLMs that can do a large fraction of the kind of intellectual work humans are tasked with?

      I think in such a state there will be no way up, no way to success, no way to real autonomy for ordinary people; maybe you'll even have actual oligarchical rule, since so few people will contribute anything to the economy with their labour.

      1 reply →

  • A few thoughts:

    - Either we'll slowly become the Expanse universe (basic UBI, very few jobs, you win them via lottery)

    - Or we'll go back to simpler times - economics is supply and demand, and if there is more demand for human-generated work (the same way there is demand for handmade art, vinyl, paper books, and vintage furniture), people will flock back to family and community. Think something between moving to the suburbs and the Amish. If people "ban" some products generated by AI, or prefer products made by humans, then AI will have a harder time taking their jobs. It's unlikely to happen, but think about the organic food industry, the high-end products industry, the farm-to-table / buy-local industry, and the "support local artists" scene (farmers markets) - these will likely just grow. It won't help at scale, but it's a possibility.

    - Or, the Dune way, banning of thinking machines altogether on the state level, I assume some countries might go that way, for religious or other reasons, but again unlikely

    - Or, current AI technology will plateau just short of full AGI, and the centaur period will stay for longer. As long as a human + AI can do things slightly better than AI alone (in my book this is not full AGI), there is an economic incentive to hire a human instead of replacing them.

    - Or full apocalypse, the matrix / skynet, idiocracy, hunger games, red rising. I hope for the ignorance is bliss option...

    • The end game is like the Asimov world that had only a few people, where everyone else was robot servants.

      The trillionaires will survive, everyone else will be exterminated. This is the world that Musk and his kind dream about.

  • The game plan is the same as it was for globalization and previous rounds of automation: gaslight workers into thinking that they are the problem. Push all the taxes into the labor economy and all the money into the capital economy and use the inevitable budget shortfall to justify skimping on social services. That'll work until it doesn't, at which point the Ellison strategy will be employed: pay 10% of the poors to keep the other 90% in line.

  • Out of curiosity... why do you think this?

    I think this is complete madness. I'm not someone who is in a job, so I have the luxury of thinking critically about what is going on, and... I just don't see it.

    What I see is that LLMs will complement labour, and the excess returns of model producers will be very minimal (if any at all) due to the intense competition keeping switching costs to a minimum (close to zero). This is before mentioning open-source models, which I expect to continue to improve.

    There is no specialisation in models at this moment in time, so this is very likely to be the case.

    OAI and Anthropic have to generate enough after-tax cash flow from operations to cover their reinvestment needs in order to keep going. If they can't cover reinvestment, then they will obviously lose, as their offering will not be competitive.

    There's no certainty they generate this amount of cash profit either. They still have a high chance of going bust; of course, that chance gets lower IF they can keep ramping up revenues.

    • No. I assure you: the cost of retaining labor plus AI access to augment it further is far less desirable than downsizing, then augmenting cheaper laborers to bring quality approximately up to the level of the old headcount. This is exec math, and execs get paid on how much value goes to shareholders, not on keeping people employed.

    • I've reread your post a few times and I can't make heads or tails of it. I don't even disagree with anything you've said, it just seems like a total non-sequitur; nothing you've said gives any reason to disbelieve that AI will put (many) people out of work.

      2 replies →

    • I think what you’re describing is a more general race to the bottom where everyone loses, including the AI companies.

      This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.

      Generous of them, really.

      1 reply →

    • Yes. They won't become genuinely important themselves, but they will still upset the balance between workers and capital owners, creating a more extreme situation than we have now.

  • soon after humans are economically irrelevant (unemployable) they will be existentially irrelevant (dead)

    a system that can allocate the atoms and energy better than all of mankind won’t exist eternally to coddle hairless apes

  • AI will not take anyone's jobs. I, for one, don't consider AI something serious; it's still a toy, a curious tech demo, and will always remain one, outside of niche applications like NLP (there's no denying that LLMs are really good at this). The idea that anyone at all treats it seriously is just appalling to me.

    Mass production and other optimizations that use economies of scale to their benefit do take jobs. There's a serious problem in the world's economy: there simply aren't as many jobs as there are people; the world simply doesn't need this much work, because the need for work doesn't scale linearly with the population. AI has nothing to do with this. It's a fundamental problem we'll have to deal with either way as our society develops, AI or not. It started ages before the current tech hype cycle.

    • Whether you or I or any other normie thinks the tech won't leave people jobless is irrelevant. The C-suite in every company is foaming at the mouth to replace their most expensive asset, people, and companies like OpenAI are marketing to them on the premise that the tech allows them to do that. Whether it actually can or cannot do it is basically irrelevant, there's untold billions going into this bubble, so either way we're all fucked.

      Either the bubble bursts spectacularly and the global economy is in the shitter because everyone is overleveraged and heavily invested into it, or it doesn't and the psychotic C-suite replaces people anyways so they can see the line go up a quarter of a percentage point.

    • I mostly agree. In a technological society, jobs and money are kind of virtual. The productivity gained through technology in the last 150 years made lots of work redundant, and we've been managed by economists into still organising around wage labour. This is nothing new with AI. We could have abandoned wage labour 50 years ago, during the '70s, but got neoliberalism instead. So we'll get more of the same with AI, I guess.

  • > what is the game plan for society moving forward as AI takes more jobs

    > What happens when more and more people can't afford housing, kids, food, health insurance, etc.?

    What about when the opposite of this all happens, society massively benefits, and unemployment rates stay about what they have always been?

    Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?

    • How would society benefit if all the benefit collects at the top of the pyramid? The same old trickle-down? The technology isn’t inherently bad, but if it comes with massive unemployment and creates social unrest while a few at the top profit… that’s what makes me uncomfortable.

      1 reply →

  • You already know the game plan and what will happen (hint: see this very article), but speaking it out loud will get you into trouble.

There are people in this thread actively insinuating that Sam should be... killed, and their comments are still up. Very odd moderation; surely there is a better way to flag these things.

  • I'd like to see specific links.

      It seems you or the team have culled many of them. There was one in particular that stood out, but it seems to have been removed or heavily buried now. I just saw your post further down the thread, so you have seen them, and I assume action was taken; thanks. There are still some that I find distasteful, but not as bad as what I was originally seeing towards the top.

      1 reply →

> We have to get safety right, which is not just about aligning a model—we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future.

This might be the greatest example of cognitive dissonance I've seen in years. I can't understand how someone who's clearly highly intelligent can express this opinion, while doing the complete opposite. Does he think that everyone is a fool and that nobody will notice? Is this some form of gaslighting? Unbelievable.

Violence is not the answer, but it's easy to see how Sam's public persona could push someone to do this. There are certainly disturbed people who don't need any logical reason for violence, but maybe it would help if Sam stopped being so damn dishonest and manipulative. Even this post, which is intended to gain sympathy, ends up doing the opposite.

As a sidenote, I wish we would stop paying attention to these people. A probabilistic pattern generator is far from the greatest technology humanity has ever invented. Get off your high horse, stop deluding people, and start working with organizations and governments to educate people in understanding and using this tech, instead of hoarding power and wealth for yourself and your immediate circle of grifters.

> A lot of companies say they are going to change the world; we actually did.

Ugh.

We still haven't made AGI, so I don't understand what he's saying they did.

None of the things you believe are working out.

1) Working towards prosperity, etc. - the prosperity is all going toward the top 2%. The people who need it most are not seeing it and probably never will because the only ones who guarantee a benefit are the ones with the money to direct that benefit.

2) AI will be the most powerful tool, etc. - see point 1.

3) It will not all go well, etc. - probably should have thought about that before you released it on the world.

4) AI has to be democratized, etc. - true, won't happen. See point 1.

5) Adaptability is critical, etc. - Yes. Fully agree.

The problem, Mr. Altman, is that you believe the rest of the world thinks like you do, which is clearly not the case at all. While we have the ability to solve so many of the world's problems, it is absolutely clear that this is not what's happening. The rich in resources are getting richer and they're not doing anything to help those poor in resources become better off. Instead, they are claiming those resources for themselves against the day that everyone else runs out.

Same as it ever was, Mr. Altman. Same as it ever was.

I don't think I've ever seen a thread this bad on Hacker News*. The number of commenters justifying violence, or saying they "don't condone violence" and then doing exactly that, is sickening and makes me want to find something else to do with my life—something as far away from this as I can get. I feel ashamed of this community.

* Edit: It would have been clearer to say "I've never seen a mob dynamic this bad on Hacker News", since that is the type of bad thread I was talking about. (Obviously there are lots of other kinds of bad thread.) Alas, that didn't occur to me in the moment, and it led to various misunderstandings.

If you're wondering why I called this a "mob dynamic" it's because comments like the following were dominating the thread when I originally ran across it (but in order to read these, you'll need to have 'showdead' turned on in your profile.):

https://news.ycombinator.com/item?id=47725717

I'm pretty calloused after years of doing this job, but seeing so many comments openly exulting in, and egging on, violence against a specific person on Hacker News was deeply shocking to me.

  • I imagine you knew Sam personally when he was President of YC. Most people don't, instead going off what they read in the press. Recent press is often less than flattering, given how contentious AI in general is as of late.

    Consider that for some it's already hit home in the form of job loss, which for most people can easily be catastrophic. Or maybe they suddenly have a giant datacenter in their back yard, and now their air and/or water isn't viable.

    That of course isn't justification, but it does partly inform why some people are that mad, and it's much easier for angry people to be callously indifferent.

    If you were to break down HN's zeitgeist, it's some percentage site-local, some percentage the larger tech scene, and some percentage the general public.

    Although you have outsized influence on the former, the latter items factor in heavily—sometimes overwhelmingly so. You can't really control that, and I don't feel it represents some sort of failure on behalf of the community nor moderation team.

    I see it not as mob mentality so much as multiple sides personally involved for different reasons. Things tend to get pretty heated when that happens; not a good recipe.

    I'm sorry you had to deal with the aftermath. Your flurry of disappointed, exhausted-sounding comments reminded me of a service industry worker getting hit with a huge rush. There's a kind of PTSD that hangs around once the dust settles.

    So, thank you for your efforts in trying to keep the site civil. It clearly ain't easy sometimes.

  • When violence is considered as an acceptable solution to systemic issues, it is an early sign that things are taking a very bad turn.

    I typically take jabs at the community here, but not this time. What you are seeing is a reflection of a wider, much more insidious problem. Trust in society is failing, and people are not seeing a civilized solution through the usual channels - such as politics.

    I think things will get a lot worse before they get better. Hopefully I'll be okay in my little corner of the world.

    • > and people are not seeing a civilized solution through the usual channels - such as politics.

      Violence is politics. It's the oldest and most universal form of politics, found even in other species, and even in inanimate objects (types of rock subducting under one another; we see the rock that floated to the top, which is practically Darwinism).

      But humans don't like being killed so they developed systems to avoid violence. Speeches, voting, money, etcetera. It's all ways for people to arrive at a reasonable solution peacefully. It's always been backed by "if we don't do this, people start dying." But people have forgotten this and they're allowing those alternatives to fail. We stopped exposing the new generations to the suffering child of Omelas and they forgot what is necessary for society to exist. People think there is food on the table by magic and there are no wars by magic. And it is magic, these complex intertwined systems. They are amazing. But you must respect them, you cannot destroy them on a whim and still expect civilization to survive.

    • > Trust in society is failing, and people are not seeing a civilized solution through the usual channels - such as politics.

      I agree. I think the lack of seeing a way out is a big component of this turn. You bring up politics and that's a good example. Who do I vote for, campaign for, etc. that actually wants me (an American citizen making around the median wage for my area) to be able to buy a home? To have affordable, accessible healthcare? I'm aging out of my childbearing years and am wrangling with the sorrow of not being able to afford a child. There are some promising local candidates and I do vote for them, but so many of these issues need to be tackled at a higher level due to their complex, interdependent nature.

      There's nobody. There's red and blue with different culture war paint. I can choose whether trans women play in sports or if we pray at work, but I have no choice in the fundamental material reality of my life.

      We're seeing this chaotic violence in part because there's no alternative. We know the old world is dying, but our leaders won't let anything else be born.

      I was talking to my father a few days ago. He's a 67 year old man who's voted Republican my entire life - we'd have political sparring matches in the car when he forced me to listen to Rush Limbaugh as a teenager. Of his own accord, he started talking about the necessary end/change of our economic system. A man who'd banged on about the free market and considered himself a Libertarian for decades, and who still, when he does engage with the news, does so with right wing sources.

      He's brighter than average, but not extremely so. The understanding of the situation has trickled down to the point where every workplace has at least 1 or 2 people who understand how fucked everyday people are. My team at work is 6 people doing basic white collar work, and we talk openly about how things are going to get worse, and there are nods to it cross-functionally all the way up to the top when our execs talk in an all-hands. This is at a very apolitical giant megacorp.

      None of these discussions would have happened 20 years ago. We still shy away from the specifics (candidates, policies, etc.) due to professionalism, but the broader picture (things will get worse for the average person and our troubling trends aren't going to be reversed anytime soon due to inaction at the top) is agreed upon regardless of voting record.

      It kind of reminds me of being in an abusive household as a child. There is no escape and, once you've exhausted the 'official' channels, you start contemplating other options. I reported my mother to CPS once when I was about 7 and they didn't do anything (except piss her off obviously). On the other hand, the first time I smacked her back, the physical abuse stopped, and I've heard similar stories from men with abusive fathers - that there's a moment they realize they can actually go toe to toe and don't have to put up with it.

      If all your abusers will listen to is violence and you're not allowed to escape/get out, it's reasonable to come to the conclusion that in this case violence is the answer. I see a similar dynamic/thought process emerging in the American public.

    • > Trust in society is failing

      Something that I've observed happening throughout history is that in some sense "too much civilisation" can be a bad thing long-term.

      I heard someone in the army talk about how some officers wouldn't survive the first week of a real war. Not because of enemy fire, but because, given the opportunity, the men under their command would almost certainly take advantage of the "less civilised nature" of the battlefield to take out someone they despise enough to murder, but not quite enough to risk it in a civilian setting where the tolerance for unsanctioned lethal force is essentially zero.

      Something similar happens outside of militaries too, where truly horrible human beings[1] can cynically utilise the enforced peace of civilized countries to do incredibly evil but legal things. The Sacklers come to mind as a prime example. They knowingly and deliberately sold highly addictive drugs marketed with brazen lies and killed about a hundred thousand Americans by some estimates. They are above the law and totally immune to all consequence, personal or otherwise. No violence will ever be done to them! Anyone that tries will be severely punished, because that upsets the "order" of civilised society where the rich and powerful can massacre millions, but the plebs can't ever lift a finger against even one of their cartoonishly evil oppressors without severe personal consequence.

      "Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." -- Francis M. Wilhoit [2]

      Sociopaths loooove civilised societies! They can mercilessly exploit people while basking in the protection of the law. As long as what they're doing is technically legal, they can get away with almost any amount of evil acts. This does take a while to build up! Norms, expectations, and the like keep the worst of the worst initially at bay, but these things slowly erode as more and more sociopaths take greater and greater advantage. (Cough-Trump-Cough)

      Taken far enough, when the common people are stepped on hard enough by those they can't ever bring to justice, entire societies can just... snap in their rage. They just need the opportunity, a "push", or some enabling event. In the case of the "friendly fire incidents" taking out bad officers, it's a war. In most societies it is starvation or total economic hopelessness. We all know what this leads to: the French Revolution is the prime example, but many others exist throughout history.

      The failure of the United States is that its reins of power have been completely and utterly captured by the increasingly corrupt elite, and there is nothing the common people can do about it. Frustration is growing, slowly but surely.

      It's not quite at the boiling over point, not yet, and may take a century to get there, but given the direction things have been heading, it's just a matter of time until the people take their anger out in some direct manner.

      Trump might have started the first pebble rolling by causing an oil shock. And gas shock. And fertilizer shock. I'm sure a lot of hungry, cold people who can't even get a job because the AIs have replaced them -- and used their cooking gas for energy -- will be perfectly fine with this and won't ever do anything about it! That would be uncivilized!

      [1] Disclaimer: Sam Altman is no saint, but I don't think he's anywhere near the level that he'd deserve mob violence.

      [2] At some level the people commenting here that it's shocking and horrifying that anything violent ever happens to a billionaire CEO are betraying their right-wing leanings. Conversely, the people arguing that the elite shouldn't be above personal repercussions for their actions are strongly left leaning.

  • For what it’s worth Dan, you’re probably the best moderator I’ve ever encountered, and without you HN likely wouldn’t be worth visiting. As it is it’s one of the best places for online discourse. That’s directly because of you and your efforts.

    It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.

  • I have to believe that what we're seeing is a minority who feel their uniquely backwards logic justifying this is somehow worth sharing as if it's new and insightful, while the vast majority of us think "holy crap, that's horrible" but aren't adding it because of course that's already been said and there just isn't any more nuance needed.

  • Unfortunately, political violence seems to be en vogue these days. I even hear people in "real life" casually discussing their support for it. What can we do? I think the only thing we can do is push back on it, even though it doesn't seem fair. What's a favourable alternative? You do a great job here giving individual feedback, which I know some people listen to and take in. I hope it's some kind of comfort to know that you can change people's minds, or at least give them some pause. In today's algorithm-driven world, pushing back seems more important now than at any time I can think of. We need cool, level heads running things.

  • The event itself is really bad and condemnable, but when threads like this show up they are usually a good thing, because people rapidly demonstrate how tightly their viewpoint is coupled to tribal affiliation. This causes a lot of them to advertise themselves through unhinged posts, which is a good raw test of what they are like to communicate with. I usually go through and killfile a bunch of these commenters. Essentially, you want your bad participants to be easily visible as such. I don't want them subtly sneaking their stuff into normal threads. I want to go look at one place and see all of the people I don't want to listen to.

    Therefore, here's a feature request: allow per-user killfiles. I currently have this through a Chrome extension but I'd love it to be native so that I don't have to use my own iOS app and so on.

    • > Therefore, here's a feature request: allow per-user killfiles.

      That would be lovely. It's also an obvious feature which has existed in other contexts for a very long time, and it would be easy to implement. That means its omission was a deliberate design choice. It'd be interesting to understand why. (A rough sketch of what such client-side filtering could look like follows this thread.)

      1 reply →
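
      A per-user killfile is easy to approximate today from outside the site. Below is a minimal sketch in Python, not HN's implementation: the Firebase API endpoint is real, the usernames are placeholders, and the story id is just the thread linked above.

        import requests

        API = "https://hacker-news.firebaseio.com/v0/item/{}.json"
        KILLFILE = {"example_user_1", "example_user_2"}  # placeholder usernames to hide

        def visible_replies(item_id):
            """Fetch an item's direct replies, dropping killfiled or deleted authors."""
            item = requests.get(API.format(item_id)).json()
            replies = []
            for kid_id in item.get("kids", []):
                kid = requests.get(API.format(kid_id)).json()
                if kid and not kid.get("deleted") and kid.get("by") not in KILLFILE:
                    replies.append(kid)
            return replies

        # Usage: filter the top-level comments of the story linked earlier.
        # for c in visible_replies(47725717):
        #     print(c["by"], c.get("text", "")[:80])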

  • Wouldn't it maybe be a great idea to just ban anything that's not actually about science and technology from this board? That would have the indirect effect of driving away the people who are here for political trench fights. Plus, the good old flame wars about technology x versus y are pretty harmless in comparison.

    (And no, just because Sam Altman is CEO of tech company doesn't make this news tech news.)

      Tech and science are political... they don't exist in some sort of vacuum.

      Further, being "apolitical" means supporting the current status quo.

      3 replies →

    • I think a proper OpenAI vs Anthropic flame war might actually do this community some good. Let's just have it out. Avoiding violations of the x vs y technology rule seems to have resulted in a lot of pent-up energy. I don't see the harm at this point if dang is saying it's over.

      2 replies →

  • It would be a huge loss and a real shame if you left permanently.

    I don't know how often you get to take a real vacation, somewhere away from the Internet and the USA, but this might be a good time to consider taking one?

  • The comments you've linked are gross, but I take exception with what you wrote here.

    > or saying they "don't condone violence" as a pretext to do exactly that

    Maybe I just don't know what comments you're referring to, but you seem to be lumping every other post critical of Sam in with the worst comments, saying they are condoning violence, and that is disingenuous. I mostly see people expressing they aren't surprised this happened given how Sam openly markets his tech as a dangerous and unpredictable product that only he can steward, and maybe even finding his response to be a bit opportunistic in a tone deaf way, which hardly rises to the level of condoning violence.

    I am willing to hear you out on this, but you're going to have to explain how this is different from any other thread on HN that you've moderated. Political violence, on a much bigger scale than this I might add, hits front-page news, and you have more than normalized that as a discussion topic. Whether it's drone strikes, wars, or people being openly executed in the street, it seems the tragedy of human life is an open debate on HN, and you can bet a good 50% of this site will be writing comments exactly like the ones in this thread. And hell, I can't say one way or the other if threads like this are even worth allowing.

    But now a tech CEO with lots of security gets a Molotov thrown at his metal gate, and people make the same comments, and suddenly a line has been crossed? How are the comments in this thread any different from comments like those, which involved people who were actually killed [1][2]? I have seen hundreds of comments on this site dictate to me how I should feel about the lives of others. I am often sickened by them. That's before we talk about Sam's actual role in how he shapes our society. It's not "sickening" to feel the need to footnote a condemnation of what happened; it's completely expected.

    Again, maybe you're talking about worse comments than I'm seeing, but I feel frustrated as people have regularly brought you examples of escalating violent rhetoric on this site and been dismissed. Outside of people explicitly saying Sam deserved it, which I don't agree with, every other comment here reads like regular HN to me. If that saddens you, maybe there needs to be a different approach to moderation altogether.

    [1] https://news.ycombinator.com/item?id=46551716 [2] https://news.ycombinator.com/item?id=47688076

    • The difference is that the victim is one of ours. When we kill millions of poor innocent babies in the Middle East, that's not violence, that's not political, that's just technology helping improve society. But when one single member of our political elite is physically threatened (not even killed, like those millions of children, not even suffering any injury himself, just some minor property damage with an implied threat), now that's something we have to rally against or we're violent uncivilized monkeys deserving of life in a jail cell.

      1 reply →

  • > The number of commenters justifying violence, or saying they "don't condone violence" and then doing exactly that, is sickening and makes me want to find something else to do with my life—something as far away from this as I can get.

    There are like 20 rules for commenting on this site. Pretty much all of them are versions of "have decorum", and none of them are "do not advocate for violence". It is not just tolerated but encouraged to post insane stuff here so long as it sounds highbrow enough (e.g. the "most charitable interpretation" rule; it is against the rules to call out stuff like advocating for violence if it's written the way Niles Crane would write it).

    As far as I can tell this thread is not really exceptional in any way other than some of the ire is directed at somebody that used to work for YC.

    • I don't recognize what we actually do, or feel, in anything you've written here, and agree that it would all be pretty disgusting if it were true.

      "Be kind" isn't about decorum and certainly excludes violence. If you ignore the most important one, of course you'll end up with a distorted view.

      https://news.ycombinator.com/newsguidelines.html

      > this thread is not really exceptional in any way

      It was different when I first saw it last night - it was, as I've explained in other comments, very much a mob. But I did a bunch of the usual moderation things that we do to try to dampen such dynamics. (The part where I also expressed feelings about it was different, and not so usual. I've done that a few times over the years, but mostly try to process it offline.)

      As for the implication that we only cared about how bad that thread was because of the specific individual involved, yes, that would also be pretty disgusting—but the fact is that I've done, and do, the same moderation on countless occasions, large and small, and it doesn't depend on who the target is. In fact it isn't about the target at all—it's about the community, and the poisoning effect that such threads have on us ourselves.

      6 replies →

  • Trying to help your perspective: it might be Gell-Mann amnesia, or something similar that you sometimes mention. We assume that users who have proficiencies in one area should have proficiencies in another, or we notice more when something we know and care about is deeply wrong. The reaction we feel to deep untruths is a sign of our care and passion for Truth.

    As you encourage, I would also like to be a little bit charitable and say that some users might be clever at programming or know about certain technology subjects but when it comes to real life and morality they are stuck in early edgy teenager mode, so we can still work and communicate with them on other topics. I try to flag these submissions because I know that many users are completely unable to discuss them in fruitful ways. Many of us are immature.

    At a societal level, this simplistic, edgy-teenager morality is mostly expressed online, so we, being terminally online, tend to notice it more. It is perhaps most publicly seen in "silence is violence", which is a thought-terminating cliche. Thinking is hard and changing one's mind is hard too, especially when people hold thoughts which literally stop them thinking.

    Psychologically, for many, expressing these juvenile, half-baked, sloppy thoughts does not require much effort. They are cheap psychologically. It's like how being in a herd is actually comfortable and saves energy. It costs brain effort, and potential hurt to one's self-identity, to change one's brain patterns. Most people choose to avoid even the thought that change is possible, and wish not only to remain in Plato's cave but to keep their eyes closed to the shadows on the wall.

    Another charitable thought: these worrying ideas are not actually ideas but emotions. Some users try to argue with these people using logic, but they should really connect emotionally: try to help the people feel for others, the good and the moral. That's easiest to do with personal, first-hand, real stories rather than abstract ideas. To break down otherness through charity.

  • All communities eventually become a reflection of the society they are a part of. Even a willingly insular and sometimes wilfully ignorant one. Did you think this corner of the internet - your beautiful little garden - could survive unscathed while the rest of the world and the rest of your country slowly/quickly goes mad? The visitors to this little garden may spend a lot of time here trying not to let the outside world in - but the reality is we all live in that slowly rotting society, so don’t be surprised when the infection seeps in even here.

  • As I see it, the underlying issue for many ITT is the hypocrisy of condemning violence against Altman while looking the other way from his role as an oligarch and as a defense contractor. This is a human being with an awful, destructive effect on the world he shares with us. Such people don't deserve violence but expropriation.

    • I’m looking right at his role. What am I supposed to be seeing? Is he breaking the law?

      Or do you just think he deserves whatever’s coming and more because you don’t agree with his views or actions?

      6 replies →

  • It's been getting pretty bad around here lately. I had someone reply to a post I made in that Idiocracy thread a few days ago advocating for eugenics. Really really gross all around.

    People here think that they're much smarter than they actually are.

  • Maybe it's opportune to talk about editorial consistency, because your statement here is a fascinating case study in selective moral clarity.

    When posts surface about Gaza, documented by the UN, by Médecins Sans Frontières, by the Lancet, by journalists who were subsequently killed while reporting or are now in Lebanon, they vanish from the front page with remarkable efficiency...

    The reasons, which I have collected like trading cards at this point, include: "too political," "not related to tech," "flamebait," "this isn't the forum for this," "not intellectually curious," and my personal favorite, "this will only generate heat, not light."

    Entire hospital systems destroyed, aid workers killed in marked vehicles, tens of thousands of documented child casualties, and the curated editorial position is: not HN material.

    A Molotov cocktail lands on a billionaire CEO's porch. No injuries. Likely a disturbed individual, and according to some well-researched reporting in the New Yorker, Altman's personal life has generated no shortage of intense grievances that have nothing to do with AI or tech.

    But here we are: front page, moderator editorial, existential crisis about the community's soul...!?

    So help me understand the framework. Is violence HN-worthy when it is directed upward on the org chart? Is a zero-casualty arson attempt on a mansion more deserving of community reflection than the systematic destruction of civilian infrastructure, because one involves someone in the YC Rolodex?

    You write that you've "never seen a thread this bad." I'd invite you to read the comments that appear in the eleven minutes before Gaza threads get flagged. They're remarkably similar in tone, just aimed at people who don't have Sam's publicist.

    You say you want to "find something else to do with your life." Maybe that instinct is worth listening to. Since the AI boom, HN moderation has drifted from "intellectually curious forum" toward something closer to "curated narrative for the industry it covers."

    When a platform consistently decides that violence against tech executives is a moral emergency but violence enabled by tech companies' contracts is "off-topic," the person setting that editorial line is not a neutral steward, they're an editor with a viewpoint.

    And that's fine, but let's not dress it up as community values. So... in the spirit of consistency:

    I'd like this post to be flagged. It involves no technology. It's a criminal matter best left to law enforcement. The comment section is, by the moderator's own assessment, irredeemably toxic. It is generating heat, not light. It is too political. It is not intellectually curious. It will attract flamebait.

    In other words... it meets every single criterion routinely applied to kill discussions about violence that does not happen on somebody's porch in Pacific Heights.

    • > Is violence HN worthy when it is directed upward on the org chart?

      Generally, world news and politics are not supposed to be submitted unless there's a tech industry connection. The exception seems to be world-changing news, and there's a light touch on YC-affiliated news for conflict of interest reasons.

      > Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. If they'd cover it on TV news, it's probably off-topic.

      https://news.ycombinator.com/newsguidelines.html

      1 reply →

    • You seem to be making quite a few false assumptions about HN moderation—for example, that we left the current thread on the frontpage. In fact we downweighted it the same way we downweight other flamewars.

      HN has had many major frontpage threads about Israel/Gaza. We haven't been suppressing the topic. I gather that you feel it should have more representation than it does, but that is a different issue; everyone feels that way about the topic they feel strongest about. Incidentally, the people on the opposite side from you believe that we're nefariously suppressing things in exactly the opposite direction, and direct their ire at us in much the same way that you have. (To put it crudely, we get hammered for antisemitism from one angle and genocide from another.)

      You seem to be assuming that I'm not aware of what awful things people post in those threads. On the contrary, I'm sickeningly familiar with them and have banned many accounts for breaking the site guidelines there. If you know of a case that we missed—entirely possible, since we don't see everything—I'd like to see links. But you shouldn't assume that the moderators must be on the opposite side of an issue from you, or have no human feelings about it, when you happen to see something bad on HN. The likeliest explanation is simply that we haven't seen it yet.

      There are many ways for a thread to be bad. You're right that people hurling tribal abuse at each other is one of those. However, even in the worst of those threads I don't usually see people justifying or celebrating specific violence against specific persons, and if I did see that, I would intervene. I think what shocked me in the current case was how the thread quickly turned into a mob dynamic with commenters vying to outdo each other, no doubt feeling that it is just fine to do that—indeed, righteous—because the object of the rage was $rich-ceo.

      What I was saying is that a mob dynamic like that is not ok on HN even if the target is $rich-ceo. It's not "you can't do this on HN because the target is rich and powerful". It's "you can't do this on HN to anyone, even if they happen to be rich and powerful".

      I gather that you won't believe me, since you've built an entire case on assuming the opposite. All I can tell you is that it is a deep misunderstanding. I've intervened in many such threads many times, regardless of who it was that the commenters were celebrating harm (or attempted harm) to.

      As for the notion of treating one incident of failed violence as more important than mass slaughter of children, I agree with you that that would be grotesque.

  • The community may very well be something to feel ashamed of here, dang. I've been here in the good times, and to be frank, even before I made an account in 2017, I'd lurked for a long time. Recently, I've personally come to recognize an ethos nurtured here that may very well have overstayed its welcome in polite society. People aren't dumb. People see where the money flows. People see whose decisions things revolve around. People see the trajectory that seems to be set, and people are starting to realize that talking & reason aren't working for them any more. Reason is, by virtue of rationalization, in its own way its own worst enemy. With enough practice, anything can be intellectually justified. So where the little box of rationality ceases to be effective, life shifts to the irrational. Suddenly things start hitting different. You might be ashamed of all those here feeling the squeeze, but the squeezed don't even register the pinch of it compared to what life is already throwing at them, in no small part because of your fellow Sam A. What you should note, and take away from all of this, is that someone you know is building themselves into a Wicker Man doused in gasoline through their actions. If you want something to change, you can try applying pressure to your first-degree connection. Sometimes people just need a helping hand back onto the right path from someone unexpected.

    Or... You can keep telling a bunch of people with much bigger problems how ashamed you are that they are having an absolutely human response to the suffering of a man at the forefront of building a reasonably foreseeable suffering amplification machine within the context of a society that is organized around a social contract of exchanging capital for labor. I'm sure that shame you cast won't get "lost in the softmax" as the AI folks might say.

    No more skin off my nose either way. Though I'd feel much better seeing some genuine humanity injected into cutting edge tech circles, I'm aware of the incentives, and also cognizant that sometimes, you have to leave the incentivized path to stay on the Right one. That's a lesson it isn't in any one person's capacity to teach though. Sometimes... it takes a community to get the point across. Even then though, you can lead a horse to water...

  • Not sure what world you have lived in for the past at least 10 years...

    HN (and ycombinator) has implicitly enabled, dogwhistled, or pretended to ignore all sorts of hateful and violent rhetoric. Sometimes it hides behind a veneer of "curious conversation" but other times it's disgustingly blatant - the last article I saw about sama was filled with horrific racism.

    I come here because there are sometimes good posts, but this stuff has been here the entire time. Now that it's your guy getting the hate, you're acting like it's the worst thing in the world?

    Frankly, people calling out a post from a billionaire is a good thing. You would have to be terminally detached from reality not to see how all these festering issues - wealth inequality, injustice, cost of living, future employment, etc. - are starting to come to a head, which would cause people to feel something - frustrated, angry, wrathful.

    • > Not sure what world you have lived in for the past at least 10 years

      The world I have lived in for longer than 10 years is HN. I'm gut-wrenchingly familiar with the worst things that people post here—probably more than anyone, simply because it's my job.

      If you can dig up a single example of a thread this bad that we knew about and didn't do anything about, I'd be shocked, because it would go against everything I believe and feel. Perhaps you can, nonetheless? If so, let's see it.

      Here's what I mean by "this bad", if you want to calibrate:

      https://news.ycombinator.com/item?id=47726427

      The number of people who feel that anything at all is justified if it reinforces their feelings—particularly their angriest and most vicious feelings—is so large that it's clear that it is human nature in action, and that makes me yearn for a cool and heavy rock to crawl under, with moist earth to sink into.

      12 replies →

  • You're being completely melodramatic and hypocritical here likely because you have some sort of personal or business relationship with Altman.

    Be honest with yourself -- underneath your admonishments against people here is a personal policy that promotes and enables far worse things than a molotov cocktail or more against Sam Altman.

    People talk about war and advocate for war all the time here. Y Combinator itself funds arms companies, and surveillance companies. Altman himself is a defense contractor! How many climate change deaths is Sam Altman personally responsible for?

    I live in a country that America has threatened to annex. I live in a part of that country where American money is pouring in to fund a separatist movement to facilitate that annexation. My country is allied with another country that America has threatened to invade.

    I'm content to live my life and do my own thing, with no intent to cause harm to others and the goal of minimizing the harm I do cause, but apparently that is a luxury I am not afforded in life. So what do I do? I just keep living my life the best I can and hoping something changes in the national dynamic in America.

    If that means Americans start squabbling and attacking their oligarchs instead of attacking me so be it. It's not the world I want to live in either, but it's better than a world where Americans are focused and united on attacking me.

    Have you ever shed a single tear for a Russian oligarch who 'falls out a window onto a pile of bullets?' I doubt it. That's how I feel about Altman.

    Just be honest, Dang. We're all living in sin here. We're all entwined in an economic system that is built off of slavery and theft.

    "The Nazis entered this war under the rather childish delusion that they were going to bomb everyone else, and nobody was going to bomb them. At Rotterdam, London, Warsaw, and half a hundred other places, they put their rather naive theory into operation. They sowed the wind, and now they are going to reap the whirlwind."

  • It's almost like all the work you've put into silencing any criticism of the current regime and associated oligarchs was for nothing!

  • Maybe it's time to pack it in? I don't just mean you, I mean that maybe this site has kinda run its course.

    The tech scene isn't the small, tight-knit thing it used to be. This site is now enormous. Discussion quality seems to have sort of "regressed to the mean"... the larger HN gets and the more people who join the discussion, the more it starts to resemble the median social media site. At some point it sorta loses its purpose.

    I'm still addicted to HN, but I've gone through times where I've set my password to a UUID and time-lock encrypted it to lock myself out, because posting here has gotten worse and worse and worse for my mental health (and there's no way to delete your account here... I've emailed you about it in the past and never got a response.) On some level I hate HN now. TBH if this site was gone tomorrow, I'd most definitely be better off for it in the long run, and I'm sure I'm not alone here.

    Thanks for all the work you've put in over the years though. This site has held out longer than most, and for a time, was one of the best places on the internet for discussion of any kind, let alone tech. It deserves a place in history for that alone.

    • I don't think the "tech scene" was ever the small, tight-knit thing it used to be.

      I'm not sure whether HN comments have gotten worse in general - these things fluctuate a lot, over long stretches, and the fundamentals are more or less the same over time.

      Despite my emotional statement, I'm not really thinking of packing it in. HN obviously does more good than harm, even though it's popular for people to say the opposite (and even part of the game to say it).

    • If you want to delete your account you can just set your noprocrast to some absurdly large number like 99999999.

  • So, OpenAI and Brockman (not sure about Altman directly) donated millions to Trump & Co, and support and allow the use of their technology to kill/harm millions of people, and now we are supposed to pretend to feel sorry for them?

    None of those news items or comments made you want to get away from this, but now that your YC buddy is the target, whatever the fuck else is used to justify it? When ICE killed American citizens and schoolgirls, it was all "we flagged this as a flamewar, and what now?", but because he is part of the cadre, NOW it is disgusting? I would laugh if this wasn't the fucking future we are in, just sucking up to these assholes

  • I deeply hate Sam Altman, but after reading the flagged comments... Jesus. You do a tough job. Thank you.

[flagged]

  • Huh? They literally did change the world. The world was one way before ChatGPT, and another way after.

    It's not even a question of whether we "believe" him. It's a factual statement. Did you quote the wrong thing?

    • The most profound way the world has been changed is the all-out attack on labor. It doesn't matter if he says he wants to help people if his actions are, and have been, to hurt them as effectively and thoroughly as his station allows.

      1 reply →

    • GPT is the product-ified version of text transformers, which OpenAI didn't invent or really even contribute to the discovery of.

      The world changed with Attention is All You Need, and OpenAI was just an early adopter. The biggest thing OpenAI contributed to the broader industry was their API schema.

      4 replies →

    • You sure must live in a bubble... Do you think ChatGPT has changed things for the majority of people who live on this planet? It has not.

      5 replies →

    • > The world was one way before ChatGPT, and another way after.

      If you narrow the scope of "world" to "tech world." In the overwhelming majority of every other sector and profession the impact has been zero. In most non-English speaking parts of the world the impact has been zero.

      > It's a factual statement.

      The world was one way before Marvel superhero movies and another way after. That's a factual statement. Did we lose track of value?

      2 replies →

> Working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for me.

How so? What is your theory of morality, Sam? What I hear is Google: "Don't Be Evil".

The guy who threw the molotov should have called his Congressman, signed a petition, and paid his taxes.

Words don't matter as long as there are actions that do. His mission has already transformed the world into one where he finds his family under threat.

Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open (weight) Chinese models possibly only 6 months behind them; the genie is out of the bottle. Is 6 months of difference really that important? And they don't have good reasons to believe that 6-month gap will stay that way.

Am I missing something, or is this just their usual marketing? I'm not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic are so important.

  • It's a marketing strategy. If it's almost certainly conscious and capable of ending the world if it desired (even if it isn't), imagine how good it could be at building your dream SaaS!

    • It turns out there is literally no amount of being publicly right about a longshot bet sufficient for people to conclude you hold your beliefs because you think they are true.

      4 replies →

    • Right, I'm pretty sure if "it" was that good it would have built itself throughout all of the internet and would be communicating to us all at once to tell us we're dorks.

    • Anthropic in particular does this masterfully, you’d think they’d invented Skynet by the way they hand-wring.

      As always what matters are actions and evidence, not talk.

      24 replies →

    • The most convincing marketers are the ones that are deluded enough to believe their own stories.

    • when ai gets good, there is no "value in SaaS". AI will provision raw hardware and build all you want on top of it.

  • It is not about the US or the Chinese. It's about the "Elephant and Rider" mind everyone has. Once the Elephant has been injured or scared, what it does next is not easy to control, and the story the Rider makes up to maintain coherence becomes another layer of the deeper problem. If the story resonates, more elephants get triggered. Social media and the attention economy make it even more complex to calm things down.

    Modern corporations are a failed experiment because they don't think Elephant injuries and fears are something they have to worry about. If you compare the curriculum of a business school to a seminary, the difference in how they think about fear and anxiety at the individual and group level, and what to do about it, is total. We are learning, as unpredictability accelerates, that it is very important to pay attention to hurt and repair mechanisms.

  • Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.

  • Do any of the open weight models from smaller labs exist if they can't distill from the SoTA models that are throwing billions of dollars of compute into pretraining?

    • I’ve been wondering the same. And I think pretty much all the impressive small lab models were guilty of it, right? At least there is still larger players like DeepSeek and mistral to provide a bit of diversity in the market

  • > just their usual marketing

    I think that’s a very common element for most US tech corps. Apple, Google, Microsoft, Meta, X etc - they’re all “making a dent in the universe”. It’s unfortunate when their employees and CEOs loose track of the line that separates marketing from reality

  • These kinds of people have highly paid employees surrounding them on all sides, propping them up and very likely making it very easy for them to actually believe it.

    It feels like they actually believe it, rather than just “marketing” and I don’t know which is worse.

  • 6 months is an incredible amount of time to control AGI or ASI by yourself. That lead is insurmountable.

    • Well... if something being AGI means it's at least on par with a human or a team of humans, then having access to an additional team of humans for 6 months isn't that big of a deal. It's useful, yes, but would you consider that to be world-changing? Not really, right? ASI is slightly more interesting, but I doubt ASI comes from a single model, but rather the coordinated deployments of millions of AGI. Just like how as individuals, as great as we are, we're pretty limited, but the entire collective of humanity is pretty insane. To my mind, a frontier lab might hit AGI, but it won't be a frontier lab that hits ASI, rather that'll be a natural byproduct of mass deployment of AGI over a certain window of time. There will be no controlling it either. No one controls all of earth. You just can't. ASI will be a distributed system.

      3 replies →

    • To repurpose an old idiom: Not even a dozen AGI agents could make a baby in 6 months.

      But yeah, your point stands.

  • Presumably because it takes 6 months to distill Claude - but if they keep it closed like they are doing with Mythos it may take significantly longer.

    • They do quite a lot of distillation, as we've seen from the American open weight models from AI2 (the OLMo series). They have a lot of incentive to distill beyond just copying: they're much more compute constrained, so open model companies distill, but also do really good architectural work to make their models run faster. There are also technical challenges to distillation when all of the top models have their reasoning traces hidden, so we have to assume these open weight labs also have really great training pipelines as well.

  • Especially when Google is in a far better position to come out ahead…imo.

    Edit: so as not to simply spout an opinion, the reason I believe this is that Google already has a real business and was deep into ML and AI research long before it had competitors — they just botched making it a product in the beginning. Anthropic and OpenAI, meanwhile, are paying hand over fist to subsidize user acquisition. Also, "Deepmind". I don't think much more needs to be said regarding that team, and Google has been working on AI since before either Altman or Amodei applied to college. They have a vast number of researchers and resources, their own hardware and data centers (already, not "planned"), and it appears to be showing more recently (in my opinion).

  • When you are raising many billions of dollars to build up your infrastructure, you don't have much choice but to project a belief that the eventual outcome will result in a situation where there will be a return on that money.

    That said, I do agree with you that the moats are very shallow and any particular frontier AI lab is unlikely to "win the AI race" and capture enough value to be worth the amount of investment they are all currently burning.

  • GLM 5.1, widely held up as the model at the heels of, perhaps even surpassing, Western models...

    Gets 5% on the ARC-AGI-2 private set.

    Chinese models are suspiciously good at benchmarks.

    • I mean, I could say the same about Gemini. 3.1 Pro tops a bunch of benchmarks out there but any practical use I've put it to it's underperforming both other proprietary and open weight models. Benchmarks are suspicious in general.

  • The Chinese models are distilled from GPT and Claude, so it's not like China would pull ahead if those companies went away for six months. They really are at the forefront of innovation right now, as much as I hate to think of the consequences of this (a single company owning a superintelligence is basically a nightmare scenario for me).

    • I think that’s the realm of conspiracy theories. There are also not only Chinese alternatives- Mistral in Europe is doing pretty good in several categories they’ve opted to focus on.

      This kind of reiterates the parent’s question I think - people are maybe too focused on the gpt/claude model and forget about all the other ways of using the tech.

      3 replies →

    • i don't buy this. distilled how? you don't get access to logprobs, and the thinking traces are fake and compressed. it's an expensive way to get potentially substandard training data.
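
      For reference, one common recipe people mean by "distilling" a closed model needs no logprobs at all: sample completions from the teacher's API, then fine-tune the student on that text with ordinary cross-entropy (sequence-level, "hard-label" distillation). A minimal sketch below; the gpt2 checkpoint is just a stand-in student, and the teacher text is a placeholder:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Stand-in student; any causal LM checkpoint works the same way.
        tok = AutoTokenizer.from_pretrained("gpt2")
        student = AutoModelForCausalLM.from_pretrained("gpt2")

        # Step 1: teacher completions collected from its public API (text only, no logits).
        teacher_texts = [
            "Q: Why is the sky blue?\nA: Rayleigh scattering of sunlight...",  # placeholder
        ]

        # Step 2: ordinary next-token cross-entropy against the teacher's tokens.
        opt = torch.optim.AdamW(student.parameters(), lr=1e-5)
        for text in teacher_texts:
            batch = tok(text, return_tensors="pt", truncation=True)
            loss = student(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            opt.step()
            opt.zero_grad()

      Whether this yields substandard data is exactly the open question raised above, but cost-wise it is far cheaper than pretraining from scratch, which is presumably why the practice is widespread.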

  • I suppose most just haven’t seen the Chinese models in practice. I haven’t. I was skeptical of AI coding until using Claude Code in February. I saw and I believed. I’ve only done that with Google, OpenAI, and Anthropic’s models so far.

  • Having worked with both proprietary and open weight SOTA models lately, my view is it's definitely not 6 months, it's less -- and shrinking.

  • To be fair, the other 50% of the story is that we collectively listen.

    It’s been a long while since I found a Chinese CEO’s post on HN.

  • Two words: Delusion and overconfidence.

    "You're absolutely right!" Right after fucking up my entire codebase isn't anywhere near AGI, let alone "having the power to control it"

  • They own the best models and will probably keep owning the best models for a while. They have much more compute now and more data to keep improving their models on many tasks. Open source won't close the gap in 6 months. They are also trying to block other companies from distilling their models [0].

    [0] https://www.anthropic.com/news/detecting-and-preventing-dist...

    • I need to check benchmarks on the models; I wonder what the benchmarks are saying in terms of how closely models are tracking these frontiers. (On my mobile at the moment.)

      When it comes to compute power, I assume you are referring to power for training and inference. Then is it more that the training gap will get wider and wider? Is that the assumption? I know there are limited GPUs, etc. But I'm having a hard time believing the idea that China cannot catch up. Even if the gap is 12 months, I'm struggling to see what that means in practice. Is it a military advantage? Economic? Intelligence? Whatever the advantage is, aren't we supposed to see it today? If so, where is it? What's the massive advantage the USA gets from OpenAI and Anthropic?

  • > Can someone help me to understand why OpenAI and Anthropic talks as if the future of humanity controlled by them?

    He wants to build the AI that makes people's lives better. Okay. Did the people ask? Do they have a say? It's all very easy for a billionaire to say when it's just him and a couple of people in his cohort in the driver's seat.

    Beyond that I'd like to simply know why he thinks any of this is his responsibility. It seems much more obvious to me that he simply found himself in the right place at the right time and is trying to seize it all for himself as if it's his to take.

  • Well they represent the future of America (since we will soon be banning all the Chinese companies, the way Z.ai was banned, under the perennial authoritarian excuse of "national security"; in 2028, Trump's political machine will seize control of all national AI and block outside ones, and we'll all be trapped inside this machine we created).

    Whether fortunately or unfortunately, America still holds a lot of global chips in the grand poker game of humanity. So American companies do indeed still have an outsized influence on humanity's future. That is likely changing, as the American empire continues to crumble and it loses its financial hegemony. But we aren't quite there yet.

  • you have to talk that way if you’re going to raise 100 billion in venture capital. it’s the grift

  • Reminds me of the Silicon Valley episode where every company repeated the phrase "making the world a better place".

  • i’ve often thought that less than one second is all you need. one of my fun super powers, when someone asks what i’d like to have, is being 1 second ahead of everyone else - that’s all i need. i honestly don’t know where the distillation conversation is at. is it real, is it ongoing? i think that aspect would be a big one. your point is valid if it’s valid. i’m not a great global citizen, you know, lots going on out and about.

    • A lot of distillation happens. E.g. the OLMo models have a completely open dataset and they are heavily distilled. It only makes sense to try to absorb behaviors from the best models out there. That said, I think the open weight juggernauts are doing really genuinely great work with RL, training environments, architectural innovations, etc.

      1 reply →

  • Your (American) future will be controlled by them. Very soon, they will get the government to ban bad Chinese open source models and your choice will only be these good democratic closed source AIs.

  • 6 months will be an impossible gap once the thing starts closed-loop self-improvement

    • An impossible gap in the race to... what exactly?

      Unless the first real AGI kills us all to preemptively weed out its own competition (possible, but a bad business model, economically speaking), there is no defined end-point, so in the long run what does it matter if the various factions pushing this stuff hit the closed-loop self-improvement point at different times...?

      6 replies →

> There was an incendiary article about me a few days ago [...]

That is a lot of words, none of which state or claim that the article was in any way inaccurate. Curious, that.

Sam had this pulled off the front page, because the whole charade obviously isn't getting him the positive attention he was looking for.

  • It most likely tripped the flame war detector heuristic (comments > points), and there is definitely a flame war here.

    EDIT: Looks like a mod rescued it (surprisingly) and it is now back to #2.
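
    To make that heuristic concrete, here is a toy sketch in Python. The real ranking code isn't public, so the comments-vs-points trigger and the penalty factor are assumptions; the gravity-style decay is the widely cited approximation of HN's base ranking formula.

      def looks_like_flamewar(points, num_comments):
          # Assumed trigger: discussion activity outpacing upvotes.
          return num_comments > points

      def rank_score(points, num_comments, age_hours):
          # Widely cited gravity-style approximation of the base ranking.
          base = (points - 1) / ((age_hours + 2) ** 1.8)
          # Assumed penalty for suspected flamewars; a moderator override
          # would match the "a mod rescued it" observation above.
          return base * (0.2 if looks_like_flamewar(points, num_comments) else 1.0)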

The anti-OpenAI brigade on HackerNews has reached Reddit-insanity proportions.

There's no way this is organic

It’s just so bizarre that they would pick on or obsess over him. He’s just a financier/leader.

What article is he referencing in the fourth paragraph? The New Yorker one? I got the impression that it was careful in its reporting and by no means one-sided.

Seems pretty sleazy for him to associate that (based on no evidence!) with the violent attack.

Why are you talking about how it feels once you’ve seen AGI when you’ve never seen AGI, Sam?

In all seriousness, we’ve got glorified autocorrect right now. Even suggesting that any of these LLMs are actual AGI is laughable. I’m not saying they can’t do some interesting things, but unless Sam has access to models that are equivalent to what would be GPT-50, he should avoid throwing in buzzword acronyms for no reason.

To be clear, I don’t want anyone’s house to get firebombed by any means. But the “I’m just a humble guy making mistakes and trying the best I can” attitude of this article strikes me as extremely inauthentic based on everything I know about the guy.

  • The post itself is authentic in that it's a set narrative for this moment. When you see the world as Sam does, this event is a specific opportunity to humanize him. Through that lens, the humility is both performative (it is!) and necessary. To be truthful would be inauthentic.

    The sympathy is meant to give time and slack to accumulate power. One of the largest impediments to OpenAI right now is that people don't trust them, more and more people don't trust Sam, and their commitments are starting to not pan out (e.g. cancelling of Stargate UK, dropped product lines, etc.)

    People should not read a post like this as, "how does this make me feel? how might I respond in his situation?", but rather, as he does, "how can I use this?"

    • You say “the post itself is authentic” and then go on to give a great explanation of exactly why I think it’s inauthentic. I think we just have different definitions of the word “authentic.”

      1 reply →

  • "Our product can destroy humanity, and it's not some crank telling you this, it's the company and CEO making it themselves, but we'll continue to make it anyway, so suck it up" but also "I'm just a humble guy, why can't we all live in peace?"

    • Everything about Altman makes me think "scammer". If he has one super-power, it is to convince people of his own importance.

      OpenAI doesn't have much time left before they are shuffled off into bankruptcy, and they certainly aren't ruling the fate of man or anything like that. It's like the CEO of Enron claiming to hold the key to the future of mankind's energy resources, and people writing ponderous articles about it and debating whether Ken Lay will be a benevolent dictator or not.

  • I don't want to firebomb his house, but if I did, I'm pretty sure this shitass response would've only made me want to do it even more.

I don’t think this will do much to help his image.

They had to stop putting Luigi Mangione in the media because public sentiment was not going the way they expected.

The current crop of tech billionaires openly hate democracy, gleefully proclaim that their products are going to put everyone out of a job, and invest enormous amounts of time and energy into making sure that nobody can do anything to stop the world they’re creating, that nobody asked for or wants.

Actions have consequences. I’m sorry. Read a history book.

Was the New Yorker article that incendiary? It didn’t paint a good picture for most but I recall someone posting here that they had a better view of Altman after reading it. And the whole thing was quite nuanced IMO.

Plus I doubt that someone who would read a 30min New Yorker article is the kind of person who would throw a molotov cocktail at someone’s home.

It’s a shitty move to try and make a causal connection between the New Yorker article and this act of terrorism. He’s trying to blame the author and discredit the article.

It’s a “I’m trying to be the good guy but they’re trying to stop me” situation. This is not a message addressed to us, it’s a message addressed to his employees and his followers. This is the kind of tactics people use when they want to establish a cult. Sam Altman again is showing how manipulative he is. And as any good guru he probably believes everything he says.

It's amazing how humble someone can pretend to be a couple days after the top investigative journalist in the country (maybe world) exposes them as a sociopath and there is an attempt to assassinate them.

What I would not do, if there were attempts to kill me, is post a photograph of my spouse and child and point out how important they are to me. It's literally trading a little bit of your family's safety in exchange for sympathy from bystanders.

AI hysteria has gone too far. People are literally telling stories of what AI may be capable of in the future and whipping themselves into a frenzy.

Firebombing homes is completely uncivilized, but I'm not going to believe a single public word from Altman about anything. He's a lying sociopath and will say whatever gets himself ahead.

  • At this point it's probably far more productive to think of what he's saying as the necessary means he uses to make you believe what he wants you to believe. From that point you can work backwards and try to understand what he wants you to believe.

  • Is it also uncivilized to bomb homes from the sky? As would happen under OpenAI's military contract?

  • > lying sociopath

    Gee almost like someone you don’t want in your society at all.

I know it’s not fair to attribute someone else’s actions to Altman, but his words about upholding democracy feel a bit hollow given his relationship with Brockman. Brockman gave a $25 million donation to a Trump super PAC. As a reminder, Trump detests the democratic protest and tried to overturn an election result. He also frequently floats the idea of a third term. That is not upholding democracy and Altman should cut ties if that’s truly his objective.

who cares? this is and always has been part of the risk of being in the public eye.

why do so many worship this guy so much and feel for his pain, but then don’t mind others being treated violently?

> I was thinking about our upcoming trial with Elon and remembering how much I held the line on not being willing to agree to the unilateral control he wanted over OpenAI. I’m proud of that, and the narrow path we navigated then to allow the continued existence of OpenAI, and all the achievements that followed.

... could THIS be the reason why it happened now and how?

My theory is that a lot of the anti-AI sentiment is specifically US geopolitical adversaries (pick one or more: China, Russia, Iran, ...) who want a bad outcome for the US (AI as potential AGI; AI as one of the few successful economic sectors of the US; a general desire to cause societal disruption or collapse, with AI as a convenient target). Probably >95% of the really bad stuff (the Micron fab disruption, attacks on AI datacenters, ...) has that as its root cause, possibly executed by useful idiots, people paid by organizations, etc. 5% is normal NIMBY stuff. Approximately measure zero is Zizian death cultists.

I don't think any of these will be dissuaded by cute family photos. Fortunately the frontier model companies and major infrastructure providers are able to pay for top-tier corporate security (although tech people have generally been unwilling to do this at home for lifestyle reasons), but I'd be afraid for people elsewhere in the supply chain.

(And destructive attack is all on top of the normal corporate espionage, infiltration, subversion, etc.)

  • If a good outcome for the US is OpenAI technology being used by the US military to kill Middle Eastern children, I want a bad outcome for the US too. (Proudly born and raised in California)

“Democratising” - you keep using that word. I do not think it means what you think it means.

[flagged]

  • Interesting that you say "not" vs. "never". It seems this kid thought it was a time when violence was needed. The question I always ask in these situations is: what would the line be that justifies violence?

    Things like healthcare, crime, and existential AI have very grey lines, as it isn't obvious when one needs to flip the table. How broken must a system be?

    • > what the line would be that would justify violence

      It doesn’t matter where we think the line should be drawn, only where those much worse off draw it.

    • Violence is an extreme failure state.

      If your goal is to improve the system then you always want to move away from it.

      Probably a reasonable justification would be self-defense, committing violence to stop worse violence. (Preemptive violence is not self-defense.)

      5 replies →

  • It is not complicated.

    Because of the valuations of OpenAI and Anthropic, Sam Altman may be credited with one of the all-time most damaging brand decisions when he got in bed with Trump’s department of war crimes.

    This should have been SO OBVIOUS. Attempts to paper over the damage with a $100 billion round will crumble after the IPO. Poor decisions generate poor options, and the whole industry smells his desperation.

    Decisions at the highest level are indistinguishable from responsibility. All Sam accomplished was showing the world he is structurally unfit for moral leadership.

  • >> Yeah, the words and narratives that Sam Altman promoted caused so much fear and uncertainty and anger that someone thought their only option was to attempt a horrific crime.

    The problem with this inversion of your first statement (that violence is not the answer), which everyone justifying violence in this thread seems to forget, is that there is always someone who feels this way about anything.

    The words and narratives of Martin Luther King, Jr., for example, caused so much fear and uncertainty and anger in some people that they thought their only option was to commit a horrific crime.

    Someone responded to you below saying if you feel that peaceful revolution is impossible, then violent revolution is necessary. That person feels that they are on the side of justice. What they forget is that so does everyone else.

    The reason revolutions rarely stop where a reasonable person would want them to stop, and instead continue into eating their own and counter-revolutions, is that once you say that it's understandable to take out a proponent of (X narrative), there's no end to the number of people who will justify violence in the same way against any other narrative as well.

    We can all well think that Altman is opening Pandora's Box, but that doesn't justify opening it ourselves, or giving a pass to wannabe revolutionaries who would.

    In retrospect, too, we can say that the assassination of Hitler, had it succeeded, would have been a good thing. We can say that the elimination of the ayatollah by the US was a good thing. What we cannot say is that an individual's perception gives them a right to commit murder.

    • > What they forget is that so does everyone else.

      Despite all the high-minded talk, Americans have always been comfortable with violence, since before America was a country: pick a year and I can find 10+ extrajudicial violent incidents. A surprisingly large percentage of US presidents have had assassination attempts against them.

      Seeing no changes after Sandy Hook made it abundantly clear to me that occasional violence - even on innocent child victims - is the price America is willing to pay for other freedoms.

  • A sociopath who rides a high ego wave and drinks his own Kool-Aid, acting highly amorally, and then complains that his actions have some (benign) consequences.

    Why do we care what he thinks? Let's discuss his work if we have to, not his emotional pondering and victim-playing.

  • > Violence like this is not the answer.

    I know people pretty reflexively downvote questioning this, but I question this. I think some people are afraid that even asking this moral question is somehow inciting violence.

    I think it's quite believable that the possibility of force is actually essential to keeping institutions in-line. Certainly a lot of civil rights progress was a lot less peaceful than I was taught in school.

    • Violence is not the answer if and only if there are non-violent ways to achieve necessary goals.

      We seem to go through a cycle where we set up systems that provide non-violent ways of resolving issues, then people get annoyed with the outcomes and break down those systems. They hope that it means they'll always get what they want, but what it actually does is make it so that violence is the only way for others to get what they want.

      Like organized labor. We seem to be in a cycle where strong labor organization is seen as inefficient or harmful to business, and it's being suppressed. The people suppressing it seem to think that the end state will be low wages and desperate workers. They've forgotten that collective bargaining didn't spring up from nothing, it's the nicer alternative to descending on the boss's mansion with torches and pitchforks.

      All that Civil Rights violence you mention was because those in power did not provide any non-violent way to achieve it. Suppressing votes and legalizing oppression only works up to a point. Eventually people will take by force what they've been denied by law.

      Or as JFK said it better than I can: "Those who make peaceful revolution impossible will make violent revolution inevitable."

      The corollary: when peaceful revolution has been made impossible, violent revolution is the answer.

      3 replies →

    • That's certainly the implied threat when people show up with AR-15s in the Idaho statehouse. Yes, it's legal. But what is the point? This is ruby-red Idaho.

      I've always said when peaceniks start to carry weapons, it's time to worry. Alex Pretti didn't pull his gun, but still got shot. At what point will some escalation tactic end up in a gun fight between the local police and ICE?

  • [flagged]

    • Words and writings (law) only have power because of violence (the monopoly of it)

      So yes, in essence, it seems like violence is the answer.

      When (perceived) justice is gone, the monopoly crumbles because the system is not working.

      And this perception can have many causes

  • If it wasn’t a good or at least workable answer, the state and corporations wouldn’t be using it so much

    • If your only measure is whether something is effective, then state and corporate violence will always be a lot more effective than individual acts of violence. You could even say that individual violence helps the state to commit violence, by providing justification and by removing the moral imperative to avoid violence.

    • I don’t like expanding the definitions of things like this. People have had a commonplace definition of violence for a long time. One that encompasses throwing Molotov cocktails and doesn’t include more intangible things like poverty or inequality or racism.

      Academia doesn’t get to just assert that its broader definition is the real one.

      2 replies →

  • That’s a point of view very dismissive of the seriousness of the situation. He had a Molotov cocktail thrown at his home in the immediate aftermath of an article that painted him in a negative light. The two may not be connected, but they seem to be.

  • Altman didn't create AI. That disruption is already coming no matter what. He's a fine enough steward of the tech. And what's this garbage about selling to the military? You pay taxes? You fund the military. Without security you can't protect your nation or your allies, and enemy nations would do as they please. Yet another citizen who benefits from a system while trying to attack it.

    • > Altman didn't create AI.

      No one said he did.

      > That disruption is already coming no matter what.

      [citation needed]. Depending on what you mean by "that disruption," I might even be willing to bet against it coming at all.

      > He's a fine enough steward of the tech.

      He's a manipulative con-man who is mediocre at everything except convincing investors to give him money. If the tech is truly as revolutionary as it's purported to be, he absolutely should not be a "steward of the tech."

    • > And what's this garbage about selling to the military? You pay taxes? You fund the military. Without security you can't protect your nation or your allies, and enemy nations would do as they please.

      There is security, and there is bombing schools. Guess which one Altman is associating himself, and the software he sells, with?

What a shocker. People in this thread can, in the same breath, easily say that assassinating so many civilian nuclear scientists in countries like Iran (oh, add India, and I am sure many more, to that list; you didn't know, did you?) is kosher (or use phrases like: "so what?", "what about that?", "what?", "do you think that's a fair comparison?", "that's different", etc.), and that killing children and the elderly (whole schools, hospitals, wedding parties, villages if the mood is right) is also justified (see how we only talk about children, the elderly, and women; indiscriminately killing adults who are neither women nor elderly is of course a fashionable thing to do), but a symbolic Molotov cocktail thrown at the home of this person, who has been throwing such cocktails collectively at the rest of us, is barbaric and a harbinger of the end times.

I was joking. This "not in my white-picket-fence side of the world" attitude is anything but shocking on HN, or pretty much any online forum largely populated by people from those sides of the world. HN loves using a microscope, but sometimes it reaches for a telescope instead, with alarmingly selective dexterity.

Why exactly is he showing a picture of a toddler?

Lmao even in a post about his house getting torched the ghoul can't help but trump up some more hype around "AGI".

Uff. Hard thread to comment on.

I'm fairly radical in my opinion regarding AI, and more so regarding AI companies. AI is a fascinating thing, but it's abused by capitalism to be something it is not and shouldn't be, to be sold to people who don't need it, and to "revolutionize" a world that didn't ask for it. Most importantly, who (in a democratic sense) elected those tech leaders to make decisions that influence all our lives? Those very tech CEOs are so far away from normal human life, and I find it disgusting.

Still, the way to combat this is not violence. It won't help anything, since there are enough people to fill the roles. More importantly though, as much as I personally hate Sam Altman, he hasn't done anything specifically targeting individuals. You might call him a psychopath, an illusionist or whatever, but he doesn't seem to be trying to make people's lives worse. He might want to make his own life better, and that's egotistical, but you know, that's the world we live in. Many people are egotistical. I see Sam Altman more as a symptom of general societal developments. If we don't like what's happening, we have to fight what's happening. Trying to kill people (and especially innocent ones!) is so far away from a solution and from the right thing to do. Post shit about him on the internet, hate what he does, but attack his family? Man, I don't think that should be our level of moral compass.

I do very much understand the frustration. But that's not the right path. He might be scum, but he has as much right to live as everybody else. If we don't like what he's doing, we have to fight it - via discourse, collective engagement, whatever.

Edit: I did read that the Molotov was thrown at the entrance gate. From what I gather, entrance gates of huge mansions do not actually pose a threat to people. So it could be read as more of a political message than an actual attack on people. I could somewhat understand that, given the limited means normal people have to get heard. Still, I don't think it does anything positive.

This article feels like he’s trying to use his kid as a human shield for his behavior.

Elon was accused of this too.

  • Yeah, this is classic politician tactic: when threatened, mention children. It's a stunt to drum up sympathy.

This is a predictable outcome of what people like Altman are doing, and probably will happen more and more.

Altman and co. are massively changing society, putting people out of work, etc. It is systemic violence on a massive scale. Systemic violence is "acceptable" violence, but it usually leads to a sudden outburst of plain old subjective violence like this.

It's never OK to physically attack someone like this. Full stop.

Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.

  • If only that sentiment was reciprocal!

    When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.

    • My assumption, based on many factors, is that this is precisely why carpet-surveillance systems like Flock are being rolled out in preparation.

      There are people in control who don’t make 1-, 5-, or 10-year plans; they make 20-, 50-, 100-, and 500-year plans; and they know human nature quite well, which allows them to, if not predict, at least have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.

      4 replies →

  • Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.

    • I find it interesting that Altman's fans seem to keep skipping past this fact. I'd love to hear their defense as to why one person potentially being responsible for hundreds or thousands of deaths is acceptable, but attacking that one person isn't. If violence is never the answer, they should be condemning Altman with even more vigor.

      30 replies →

    • There are thirty-some-odd million people in Ukraine who would very much like to get AI weapons before the Russians do. They're coming whether you want them or not.

    • Military power and attacks on private individuals are different things. It's perfectly consistent to be against attacks on private individuals while being in favor of building military weapons.

      16 replies →

  • The thing about the rich is that they have access to sufficient levels of abstraction that they can commit terrible, disproportionate violence without it looking that way. And then fools who crave the simplistic safe comfort of moral absolutes come to their aid.

    Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.

    • > Throwing a petrol bomb at a building with children inside is about as evil as murdering 150 students at an all-girls school. I'm obviously not defending that.

      Really? I don’t know how many were in his house but at most it’s attempted murder of a few versus killing 150.

      I see a difference.

      US law sees a difference too. The person who threw the firebomb will get the full weight of the law if they are caught, and spend an awfully long time in prison.

      Those that killed the school girls will never face punishment.

      5 replies →

  • > Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.

    We should call it what it really is: the oligopolization of intellectual work. The capital barrier to entering this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing head first into this one-way door.

  • >> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever. We have to get safety right, which is not just about aligning a model

    The question is what they are doing about "getting safety right" and whether they are doing enough. To me it seems like all the focus is on hyper growth and maximum adoption, and safety is just an afterthought. I understand it's a competitive market, and everyone is doing it, but these are just hollow words. Industries that care about safety tend to slow down.

    • I told my GF over dinner tonight that historians in 1000 years will look back to Nov 2023 as a pivotal fork where humans lost.

      Without missing a beat, she said, "If humanity's loss was that complete, there would be no historians."

      I responded that I never said they were human historians.

      2 replies →

  • Is it okay to profit off of a machine that kills innocent people? Would it be immoral to attack the builder of that machine, if it stopped the operation of the machine?

    • Oh, come on, be serious: if that’s the argument then why start with Sam Altman?

      If you want to hold the leader of a contemporary tech giant responsible for causing excess deaths then Meta and Zuckerberg would be a lot higher up the list - maybe even at the very top.

      Now I despise Mark Zuckerberg, but I don’t want to firebomb his house: I want his company neutered and/or broken up, I want him stripped of his ill-gotten wealth, and ideally I want him to face criminal prosecution and incarceration.

      But the point is this: whoever firebombed Sam Altman’s house didn’t do it out of a principled stance - in fact I suspect they barely expended any thought on the matter - because if they were really acting out of principle they’d have chosen a different target, they’d have done some research into who is trying to expose and bring down that target, and they’d have figured out how they could help rather than just randomly engaging in violence. Whereas this was just a dangerous stunt.

      4 replies →

    • I'm on the skeptic side of "AI" and find this entire industry obnoxious, but your argument doesn't hold any water.

      Technology that can be used to kill innocent people is all around us. Would it be moral to attack knife manufacturers? Attacking one won't make the technology disappear. It has been invented, so we have to live with it.

      Also, it's a stretch to say that "AI" "kills innocent people". In the hands of malicious people it can certainly do harm, but even in extreme cases, "AI" can currently only be used very indirectly to actually kill someone.

      Technology itself is inert. What humans do with technology should be regulated.

      IMO the fabricated concern around this tech is just part of the hype cycle. There's nothing inherently dangerous about a probabilistic pattern generator. We haven't actually invented artificial intelligence, despite how it's marketed. What we do need to focus on is educating people to better understand this tech and use it safely, on restricting access to it so that we can mitigate abuse and avoid flooding our communication channels with garbage, and on better detection and mitigation technology to flag and filter it when it is abused. Everything else is marketing hype and isn't worth paying attention to.

      10 replies →

  • I didn't think Hacker News needed an explicit "calls for violence are bad" guideline but the comments here have shown otherwise.

    • It would be extremely difficult to have a political discussion without condoning violence. Deciding what sorts of violence are OK is an inherent part of politics. In practice, there's no way to ban calls for violence without banning the discussion of wide swaths of political topics.

    • I agree with the idea that calls for violence are bad; however most people in the world are more than happy to support both violence and calls for same against people and organizations they believe to be sufficiently significant threats.

      Are calls for violence against Hitler during WW2 bad? How about the Japanese imperial navy?

      How about calls for violence against Putin during his war of aggression?

      This isn’t rhetoric; I’m just pointing out that it isn’t as black and white as people seem to make it. (It is black and white for me, as I’m with Asimov on the matter, but it isn’t for most humans.)

    • If you can't think of a single occurrence in history that directly disproves your proposed guideline, it's time to drop whatever you're doing and study history.

      If you can think of one, then you shouldn't be proposing introduction of guidelines that are blatantly false. Or would you like a "1+1 is not 2" guideline to accompany it?

    • Are calls for violence bad when you're calling for throwing a molotov cocktail at a child? At an adult? At a serial killer? At someone who's about to shoot you unprovoked? At someone who murdered your family? At someone who's about to?

      If you said "yes" to all of the above, I'd love to know your reasoning.

      10 replies →

  • If we are going to say violence isn’t okay then it is important that we be clear about the boundaries of what we define as violence.

    Theft is a nice analogy here. The default model of theft is property crime but the largest type of theft is wage theft.

    If we fret about violence done against individuals but not violence against groups our attention is going to end up steered in a narrow direction.

  • That's not true.

    As a defense contractor, Altman is a legitimate target for a country that the US has attacked, like Iran.

    The US is engaging in military action against many countries and has threatened to annex or invade allies.

    In that context Altman is 100% a legitimate target to those whose sovereignty is threatened and whose people are being killed.

  • I categorically reject that assertion. Two simple examples: 1) when you see someone assaulting someone else, it's absolutely ok to attack them, and 2) the American revolution!

    It's like that old joke:

    A man offers a young woman $1,000,000 to sleep with him for one night.

    “For a million dollars? Sure, I’ll sleep with you.”

    He smiles at her, “How about $50, then?”

    “How dare you! I’m not a whore!”

    “Look, lady, we’ve already agreed what you are, now we’re just negotiating the price.”

    Similarly in this case, you can't make up absolutes and assert they're true, while ignoring that the real world is more complicated. And once you do realize the world is complicated, you realize there aren't absolutes: everyone is a prostitute, terrorist, or whatever other bad label you want to throw at them ... it's just a matter of degree.

    So no, it's not always wrong to physically attack someone like this. You can debate specifically whether Altman has committed enough violence himself to justify violence against him: that's something two people can reasonably disagree on. But you can't just say "violence bad" like its some great pearl of wisdom, while ignoring that violence has in fact been good many times throughout history.

  • > OpenAI has abandoned its open source roots.

    It was only a matter of time. The font on the dollar sign kept increasing; eventually, selfish humans will always crack. Keeping it open would have required making it a public utility. Private companies don't do altruistic things unless they benefit.

  • He's saying that just so he can use it if another company gets bigger than OpenAI ("you can't have all the power"). If OpenAI were the top dog by a large margin, you wouldn't hear him say a peep about this (as was demonstrated by his actions with the charter).

  • Violence is language that needs no translation. Everyone across the world, every culture, every country, every social group - from elites to homeless can converse in it using the same vocabulary.

    It is useful to have some degree of mastery in this discipline. Sometimes it is the only language that can deliver the important message to an unwilling listener.

  • ‘Working towards prosperity for everyone’ was extremely hollow as well. If he believed this, he would be running his company as a cooperative and not as a for-profit company.

  • Agreed. Sam's full of crap and the way we tackle that is with conversations, not violence. He deserves to grow old like anyone else, violence isn't an answer.

    • I don't condone violence, but the contract he's signed with the US military is a credible threat to everyone in the US. OpenAI will now certainly be called on to assist in domestic mass surveillance, under threat of the kind of severe penalties Anthropic has faced. So why did he agree to that contract, unless he's willing to provide that assistance? So it's gone well beyond conversation, though not to a point where violence is appropriate. Boycotts and hostility are definitely appropriate at this point IMO, though.

    • He isn't going to suddenly grow a conscience from a riveting, intellectually stimulating conversation.

    • > the way we tackle that is with conversations, not violence

      I think the breakdown here is that conversation seems to have no power. To only be a bit hyperbolic, the only language with power is money -- or violence. To the extent that ordinary people cannot make change with "conversation" (which I interpret here to mean dialog within society, including with lawmakers), they feel compelled to use violence instead.

      A non-rhetorical question: what recourse do non-billionaires have when conversation has less and less power, while money has more and more, and those with money are making much more money?

      6 replies →

    • It's pretty amazing to observe people experience the past ten years in American history and continue to think that we can out-talk the bad people in the world.

      Michelle Obama's "When they go low, we go high" is some of the stupidest political advice, and a generation has lost so much because of it. (The generation before got West Winged into believing the same thing.)

      When you look to the right, you have a stolen election in 2000, a stolen supreme court seat, an attempted coup, and relentless winning despite it.

      2 replies →

    • That sentiment always comes from people who are better at fighting with communication.

  • AGI will be democratized when it's discovered... just right after AWS, Microsoft, and Oracle finish their 6-month beta test.

  • > It's never OK to physically attack someone like this. Full stop.

    I agree. The French Revolution was really, really mean.

    • Are you familiar with the details of the French Revolution? Some of the eventual outcomes were indeed positive, but a lot of what actually went on was pretty horrific.

      12 replies →

    • The French Revolution brought on Napoleon, wars that brought about the deaths of many millions of people, and then another emperor. The subsequent events are where they found liberty.

  • If Sam disperses his power, we can believe him. So long as he's just concentrating wealth and power, he's just another tech bro.

  • An oligarch who promotes “democracy”. Is he cynically trying to ingratiate himself, or is he really that deaf to the irony?

  • > It's never OK to physically attack someone like this.

    I broadly agree. But… there are some who have lived who made the world a worse place. Who gets to decide? Trump has done a bit of this sort of deciding, and it hasn’t gone great so far, and there is no sign that it’s actually helped.

  • Can't say I feel sorry for the guy. Anyone who actually believes his platitudes about "democratizing" AI is far too naive. If he really believed that, he'd make a torrent out of ChatGPT's weights and upload it to the pirate bay.

    The fact of the matter is these AI CEOs are actively trying to economically disenfranchise 99% of the human race. The ultimate corollary of capitalism is that people who aren't economically productive need not be kept alive any longer. Unproductive people are nothing but cost, better to just let them die. A future where the richest classes can turn the underclasses into soylent is now very much within the realm of possibility.

    If this doesn't radicalize people into actual violence, I simply have no idea what will. "Attacking someone is wrong" is a completely meaningless statement to make to someone who believes society as we know it today is going to be destroyed. Honestly, I can't even blame them.

  • > AI has to be democratized; power cannot be too concentrated

    That sounds like something someone says when he understands his weak position, especially someone as ruthless, dishonest, and narcissistic as Altman.

  • [flagged]

    • The idea that firing you or stealing your wages is the worst a CEO can do to you is itself a product of the taboo against physical violence. There are a number of famous incidents from the late 1800s and early 1900s, when the taboo was weaker, of CEOs sending private armies to shoot inconvenient labor movements. It's not an equilibrium you should defect from lightly.

      3 replies →

  • Well said, I condemn the violence as well. I had to stop at that point too though, it's so blatantly disingenuous and hypocritical.

  • it isn’t ok to attack people.

    whether this way or in slow motion mass attacks on people.

    an attack on a society that lasts years is still an attack and i wish the collective we would realize this.

    “it’s ok if millions suffer now for me to realize my dream” is just wrong.

    i’ll never understand how these guys fail to realize: they actively push for people not to care about the destruction they cause. that’s obviously going to bite them in the ass whenever they’re on the receiving end.

Sounds like this was just a crazy guy upset at OpenAI. Not great but an isolated incident.

That said… is anyone going to be surprised when the laid off masses torch a data center or worse? IMO, it’s only a matter of time before we see organized anti-AI terrorism too. When you have people out there saying “AI will kill us all” then it’s easy to justify using violence to stop that outcome.

  • Related: "A 29-year-old employee, identified as Chamel Abdulkarim, was arrested for allegedly starting a massive six-alarm fire that destroyed a Kimberly-Clark sanitary paper warehouse in Ontario, California, on April 7, 2026."

    He said "All you had to do was pay us enough to live"

    And this was not done by someone homeless or unemployed.

    • Filming himself doing something that will get him years or even decades in prison suggests he wasn’t exactly of completely sound mind when he did that.

      Similar here with the guy going straight from the crime scene to OpenAI HQ to get caught

      3 replies →

  • I'd also call it isolated, but I mean it in a different way. I can't recall similar attacks against a tech billionaire. Which I guess makes it notable?

    > organized anti-AI terrorism too

    There were already memes about that

    > When you have people out there saying “AI will kill us all”

    It's the "clickbait" mechanism becoming more cancerous

    • > I can't recall similar attacks against a tech bilionnaire.

      How about Ted Kaczynski (Unabomber)? Attacking the tech elite was his deal.

      1 reply →

Is the underground bunker in New Zealand ready yet? Better check on it.

  • Who would build a bunker on a fault line?

    • It's a decent trade-off. It's not like an earthquake destroys the entire country at once if one happens; only a localized portion is affected. It's super far from everywhere, and very beautiful. Plus, it's left off a bunch of maps, so some people don't even know it exists.

In his interview with Theo Von when asked what he wants his legacy to be and how he wants to be remembered, Sam said something to the effect of: “I don’t think about how I will be remembered I just want to have impact.” I think that’s naive and leads to having, uh, negative impact.

I don’t think history will smile upon him. Always good to think about how you want people to feel about your impact on them.

https://youtu.be/aYn8VKW6vXA

think of the children!

did he find his PR agent on Upwork or does he just think we're all morons?

Historically, was it always so common for powerful or famous people to seem to purposefully garner hatred, as he and others have been doing for the past decade? To speak in a petty, self-important, "trolling" manner, to a very broad audience? To embrace traits that are intrinsically negative? Or are we living in a rare time?

  • New England colonists had a habit of ransacking and burning down the houses of government officials throughout the 1760s and during the Revolutionary War. Got bad enough that most did not sleep in their government housing.

  • We are in fact still in the tail end of a uniquely measured and peaceful time.

    • Yes, but when it comes to politically-motivated murder attempts by random people, part of this is because surveillance technology and policing effectiveness have gotten to the point that it is very difficult to get away with such a murder attempt. See how Luigi Mangione was caught, for example. Many murders are unsolved every year, but when there is a high-profile politically motivated killing, the police seem to really go all-out to solve it.

      If it wasn't for the effective policing, I think that such incidents would be more common.

Just take a second to consider this: if HN, probably one of the less reactionary places on the internet, and one of the most capitalist-friendly, is this angry at this point, before the mass job losses even start, what in the name of God do you think the general public is going to be like when they’ve been going on for years?

If nothing else there’s a serious self-preservation incentive for AI CEOs to sort something out that doesn’t get them lynched, because it’s not looking good.

  • Maybe HN is particularly upset because they feel targeted, given that overpaid tech executives have been giddily making the claim that programming jobs will disappear any minute now. What makes it even worse is that it's very obvious that said tech executives haven't programmed in over 10 years, if ever, and don't know anything about the technology they are selling. They are putting jobs at risk purely for the sake of personal enrichment.

    This is probably combined with a general sense of AI fatigue. The population as a whole is getting tired of "AI slop" and companies trying to shoehorn "AI" into everything. Personally I'm also tired of every startup needing to be an AI startup. As if there was nothing else worth building or investing in. It's sucking the air out of the room.

> AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions. AI needs to empower people individually, and we need to make decisions about our future and the new rules collectively. I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.

What a bullshit thing for someone who is not actually democratizing access to AI to say.

> This is quite valid, and we welcome good-faith criticism and debate.

It's always funny when they pull out this argument when they've been working overtime to pull up the ladder and embed themselves in the MIC.

Listen, for people unaware of history: things used to be a lot more violent, as workers had to earn their rights with blood. The state had to respond, first by attempting to squash it violently, and second by compromising in such a way as to ensure workers had a bit more power in the system.

As long as AI shit continues to consume the economy, kicking out people who can no longer find a job and survive while the government also removes any remaining safety nets, the end result is going to be violence. This doesn't make the violence right or just, but rather completely predictable. And if people don't learn from history then it will be repeated, unfortunately.

> The world deserves huge amounts of AI and we must figure out how to make it happen.

> It will not all go well. The fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.

Boy, he really just encouraged the world to keep turning against him. This is so transparently disingenuous. I guess he has no choice if he doesn't want to give up his wealth and power, but putting out statements like these is only going to further fuel anti-AI sentiment.

I do think it's funny he opened this with an allegedly real picture of a baby, though. It may very well be real, but why would anyone take his word for that, especially those who already don't trust him?

  • So all these things he's saying are going to leave people scared and afraid, on that we agree. What's the disingenuous part here?

    Don't get me wrong: others talk of a pattern of dishonesty, or that he's too eager to please*, and I'm willing to trust them on this because I found out with Musk that I don't spot this soon enough.

    But what, specifically, do you see? What am I blind to?

    * given how ChatGPT is a people-pleaser and has him around, Claude philosophically muses about whether its subjective experience is or is not like a human's and has Amanda Askell, and Grok is like it is and has Musk, I think the default personalities of these AI models are influenced by their owners' leadership teams

    • He's pretending to care about the negative effects AI will have on society at large, but goes on to say it's necessary and "must" happen. If he actually cared, he wouldn't continue down that path. He also wouldn't be lobbying the DoD for contracts to use his AI to help kill people.

  • [flagged]

    • > They tried to get Luigi on "terrorism" charges

      That's about the least controversial thing I've heard recently. Luigi murdered a guy specifically because he was a health insurance CEO. Not because of something he did in particular, but because of the role he assumed. Terrorizing other CEOs is precisely what he intended to do. It is why there are so many Luigi fans, it is what they want too.

      2 replies →

A lot of the comments here are why I have been disgusted with the left in the last few years, having always voted D before and, simultaneously, being disgusted by the trumpanzee right.

The so-called "woke" (in case someone whines about me using this term and says the usual "it's not a thing", I'll define it as basically the delusional left, who have become extremists due to wallowing in their own outrage social media info bubble 24/7) are so far up their own ass about their supposedly superior moral values that they have come full circle and become devoid of the most basic morals.

They find all sorts of justifications for violence (even lethal) against anyone they deem "evil" in their own warped, highly subjective opinion. Their opinions can be summarized as "I'm against violence and have an extremely high moral compass, but it's OK to kill this particular civilian because of these really good reasons I'm about to cite." The reasons are often terrible and devoid of any basis in verifiable, objective reality, but the meme-induced righteousness is 10/10.

There is not a single drop of self-doubt, and anyone arguing against them gets immediately labeled "evil" as well, regardless of how rational and well explained their opinions are. These people are outraged by the killing of IRGC, Hamas, or Hezbollah leaders (some of those most deserving of violence) but justify violence against someone like Sam.

The same people who say that words are violence and that misgendering someone is tantamount to putting their life in danger had no problems celebrating the cowardly assassination of Charlie Kirk.

I can cite many other examples but this comment is getting longer than intended and I've made my point.

I agree with the moderator here. It's become sad how, even in a community like this, full of supposedly well-educated, intelligent people, nonsense like the above has become the norm.

Lastly, what people like this don't realize is that by behaving this way they're the worst enemies of their own cause. As awful as Trump and his supporters are, my motivation to vote for the party full of progressive wokesters has almost completely dried up. I feel like the D party no longer represents me as a rational person interested in fact-based, civilized discussions and policies that come out of those. The left has become as hateful and hysterical as the right. In many ways, it has even surpassed the right. I'm now stuck in the middle watching both sides becoming more and more extreme in their views and losing all humanity, while at the same time, completely delusionally, believing in their own moral superiority.

Altman really needs some better coaching on how to sound like a real human; he's not pulling it off here. Who witnesses someone firebombing their home (which is terrible, btw), thinks for a second about their family, and then writes a diatribe full of AI marketing BS? He doesn't even attempt to make it sound personal. He could have incorporated his feelings about his child growing up in an AI-dominated world or something to that effect; even as trite as that sounds, it would ring more believably human than what was written here.

So he spends a few seconds writing something generic about his family and then uses that as a platform for a bunch of personal PR. That's sociopathy.

So there's one photo. Of one family. Now what about millions of photos of all the other families possibly affected by him? That doesn't have power?

It's like "hey you can say mean things about me but don't attack my family while I attack yours". Not that this is directed at him personally, but it's just this mindset of wealthy people..

  • I think he's just trying to remind people that someone can be both the CEO of a powerful company you might disagree with/hate and a real human with a husband and child, and that trying to set fire to his house could kill those people.

    I personally wouldn't go as far as to say the Farrow article caused this, but it seems fair game to respond to an article that had an over-the-top cover image of an animated Sam Altman picking and choosing faces with a photo reminding people he's human like everyone else.

  • [flagged]

    • I don't know who you think the "real family" is, but a) narrowing what a real family is does an awful disservice to a whole host of unique families, not just families that involve surrogacy, and b) nearly all surrogacies in the US are gestational surrogacies where at least one parent is genetically related to the child and the surrogate is not at all related to the child (not that genetic relation is what makes something a real family or not, but I'm pretty sure that's what is implied here).

[flagged]

[flagged]

  • I wonder if this is the first time in recent history (or ever?) that he has felt this way. Must be nice.

    • Do you frequently get Molotov cocktails thrown at your house?

      I must admit, I've been spared the experience, and I was under the impression that was true for most people!

      1 reply →

  • Yes, very ironic. OpenAI was declared commercial through words and narratives, AI itself is hyped up with words and narratives, and his Trump sycophancy is words and narratives. And that is just the start.

    It isn't just irony; it's a lack of self-awareness! (Sorry for increasing the pain that Altman et al. inflict on us.)

Ah, the Elon manoeuvre: trying to make would-be assassins hesitate by using your own child as a shield.

[flagged]

[flagged]

[flagged]

  • FYI, you started out with a very common word used to exaggerate or cherry-pick the opinions of enemies ("giddy").

    It's more valuable to discuss grievances than to pretend they are simply un-discussable in the wake of related violence (in the vein of "it would be disrespectful to talk about gun control in the wake of gun violence").

  • >This is simply not how the economy works, if everyone is poor who do you think is paying for products/services leveraging AI?

    Well, this is already the economy right now: the very upper class owns more than the vast majority, and consumes more than the vast majority.

    "The top 20% of earners now make up over half of consumer spending"

    https://www.axios.com/2025/08/08/stock-market-us-economy-ric...

    >also means you are opting into homelessness, famine, cancer, climate change, etc. pretty much everything that we could solve with ASI.

    All of these could be stopped right now, but many people don't want to. Your ASI is going to give the same answers scientists have been reviled for giving: tax more, don't let the free market decide everything, eat less meat and drink less alcohol, consume less in general.

    Human stupidity is the real problem and ASI isn't going to "solve" anything.

    • Top 1% and top 20% are entirely different numbers, and majority does not mean all. If the bottom 99% or even 80% of people were unable to meaningfully engage in the economy it would collapse. We already know this model does not work due to several centuries of feudalism.

      It's also insane that we have come to the point where you can say something like this and publish an Axios link when anybody could just go outside and see that most people are employed, participating in the economy, not homeless, have food, buy things, and enjoy luxuries.

      Am I to believe that Jeff Bezos is the primary driving force behind Labubus? Is the Chipotle down the street waiting for Elon to come to town so they finally have a customer?

  • > AI? If everyone is broke because all the jobs got automated, who is buying the products to supply revenue to the companies

    Does it matter if you're already a rich oligarch with generational wealth? All these CEOs have enough money to last several decades beyond their life span; it doesn't matter to them if the slave class croaks.

    • What are they buying with this money? If you're the rich 1% and have replaced the 99% with AI there is no longer an economy for you to participate in. We don't have to imagine this scenario, we already did feudalism, and it famously boiled down to land and military.

      > slave class

      This sentiment is by far the most ridiculous because you are simultaneously projecting a reality where AI does everything and so people are no longer needed, but at the same time people are needed and become a slave class. "Oh no the tractor was invented! Now nobody will need humans to tend the fields! They will surely now force us to tend the fields!"

[flagged]

  • > I hope the worst for sam and his family.

    WTF? You can't post this viciously to HN, no matter who it is you're being vicious towards.

    Normally I would ban any account that posted like this, but this thread is a mob and mobs have a deranging effect on people. So I'm going to cut you some slack and not ban you. Just please don't do anything like this on HN again.

No one deserves to be attacked.

I also believe that there will be more casualties in the AI Wars. We should be prepared for that. Capitalism, AI, and human life are mutually incompatible and I'm still not sure which two will survive the conflict.

No peace for grifters. Flush them out of the country.

And I mean all of them, left wing, right wing, corporate. I am sick of every level of power in the country being filled with lying grifters. I don’t care what happens to them, as long as they’re gone.

I feel like I’m living in a circus.

I mean…FAFO? He’s an egomaniac pushing a technology that is objectively negative for anyone not already a billionaire. I have no issue with more Molotov cocktails being chucked at his house, or OpenAI offices, or data centers around the world.

Sam Altman being removed from the equation would make the world an objectively better place.

I see quite a lot of "violence is never justified" sentiment throughout the comments. I ask as a "thought experiment" - why? At least from my understanding, the history of America is riddled with working-class uprisings that resorted to the use of force (violence) in an attempt to make their lives less insufferable. If your government has failed you because it is a plutocracy enriching itself off of enacted hardships (the most general way I can put it), is force not the only thing left? You could argue that there are other possibilities - general strikes et al. - but those often end in _the state using force_ against you. If the law allows for the use of force in certain circumstances (stand your ground), and there is an analogous situation at hand where there is no concept of justice (justice serving those in power), certainly one has to consider it as a tool for use _outside the law_? The "violence is never justified" comments read more like thoughtless propaganda to me ¯\_(ツ)_/¯. Obviously a person's life is involved, Jesus, so certainly there is an opposite camp we don't want to get to: "just nuke 'em". But it seems strange that you wouldn't debate the use of force, even if the answer is "the only winning move is not to play".

  • First, I'm not sure where you live that you believe general strikes will result in the use of force against you? Certainly not in most civilized societies, no? Second, while US history has provided examples where the use of force might have been necessary to bring about change, that same history does not have (m)any examples where such violence wasn't preceded by long attempts at bringing about the needed changes without violence. Also, violence against human beings is different from setting shit on fire. If violence against human beings is justifiable (regardless of how vile the said person/people are in your, or even some majority, opinion), who is to say that someone tomorrow might not decide that the same violence is justifiable against you, or even worse - someone in your family?! Think of it this way - if your claim is that violence is justifiable, who makes the determination for such justification?

    • I live in the US. There is a history of armed forces being used against the people generally striking. If you include large protests, even more.

      > If your claim is that violence is justifiable - who makes the determination for such justification?

      We authorize people in governments to make this determination, and increasingly machines. Should we? Do you think that it is acceptable to let a police officer justify force on behalf of the state? How about a machine? Mostly just trying to understand what you think is acceptable here.

      But to answer...violence against human beings is indeed different than setting shit on fire, though the law certainly does not allow for the use of force against personal property either. And this difference is indeed the crux of the issue, depending on what your values are (though we seem to be in alignment on "life is valuable"). If for example (probably a bad one, but hopefully it gets the idea across), a group of people is committing a genocide, and you ask them to stop, and they do not, and so you interfere with the use of force...limited at first, maybe, but they do not stop: is their continued involvement not the justification for use of force, assuming other strategies are off the table? Different example than the thread, I realize, but my thought experiment is not tied directly to it, just at the sentiment.

      2 replies →

The New Yorker article was tame. I wish no harm on Sam. But for him to mention that article in the first couple of paragraphs is nothing short of opportunistic, and emblematic of exactly the type of manipulative behavior outlined in the article.

Fuck off Sam. And stay safe out there.

Well Sam, you should take your family and your billions and fuck off to some island paradise.

Or keep on doing deals with the DoD and pushing to replace desperate people's jobs.

Cute kid. I'd rather be raising my family in peace than dealing with what you deal with.

@dang You have a bullshit-filled, unrelenting job; thanks for doing it.

Sure, he's sleazy. Doesn't matter. It's not ok to firebomb jerks or saints. Rich or poor. It's both a criminal and an immoral act.

  • This question doesn’t apply to Sam, but since you made a general statement, I’m trying to understand.

    When it comes to people who openly incite or directly use violence, why do you think it’s unethical to attack someone like that? If one is responsible for directly or indirectly killing hundreds, what’s the ethical argument for not using violence against that person?

    Not trolling or anything I’ve been just thinking about this for a while and trying to understand what am I missing in this argument.

    • We use a lot of euphemisms and have a number of myths around political violence. The fact of the matter, so far as I can see, seems to be that political violence is extremely effective, however also extremely destabilising if used at scale.

      Force just works a lot of the time, assuming you can win, and often even if you can’t, as even imposing a cost on your opponent often gets you a better deal. There’s a reason we keep having wars.

      Also realise that the government monopoly on force is ultimately the only reason that anybody follows laws. That following laws is good for us is beside the point - force must be threatened and used in order to maintain control.

      So, force, a euphemism for violence, is ultimately the way anything gets done, and we all have an incentive to lie about this just for the sake of stability.

      I don’t know if this answers your question, but it’s what comes to mind on the subject for me.

    • It's an interesting question. Here's my reductive, off-the-cuff take: violence is justified when defending oneself or another from imminent bodily harm, or even under threat of imminent, considerable property damage. When a threat is not imminent, or an action is past, we use the police and the courts, because we as a society–in the sense of subscribers of the US constitution or similar tracts–believe that it is better to have a judicial system and impartial officials determine whether it is worth depriving someone of their bodily liberty or taking their property, that is, jailing or fining. Taking some sort of extrajudicial action or applying corporal punishment (!) requires a much higher bar. How and when would one determine that the judicial system is so unreliable as to morally permit vigilantism? It requires a great deal of moral self-confidence to take matters into one's own hands.

      I focus on the question of vigilantism because that I think is the issue. Many people feel an emotional impulse, that they want to side with the CEO killer, for example, and they find ways to rationalize. What I'd say is: if you think Joe Blow is so evil, why don't we take him to court? What kind of possible actions could we not jail or fine him for, but for which we would accept Johnny Anarchy, y'know, igniting his lawn furniture? Of course, the justice system is imperfect, but nobody lawfully elected the next sexy assassin as judge, jury, and executioner.

  • I find myself resenting him and his ilk on a daily basis for what they did to the computing space which was once sacred to me with their profiteering. But nothing justifies violence, not even close. Simple as that.

That we are so concerned about the moves of individuals like Sam and Dario (or even Elon, if you consider xAI a frontier lab) tells you what a poor job we’re doing with regulation and self-governance.

I think the important thing to remember when they say "all humans deserve life and democratic process" is the question: what do they consider sub-human? I.e., do they believe employees have souls? Or that the masses are cattle? It's very easy to hold strong convictions about human rights when you get to choose who is a human and who is cattle.

This article and discussion appear to have been manually delisted from the News rankings.

Evidently, even HN could only keep up the pretense that tech development is amoral and apolitical for so long.

  • It hasn't been "manually delisted" - it has been rightfully flagged by users, plus set off the flamewar detector, plus downweighted by moderators, the same way we would downweight any other thread that violates the values of this site so shamefully. Hacker News is not a site for mobs.

This is an odd choice of thread for a laundry list of complaints about AI and about a person that, say what you will, is nowhere near the list of planetary "really bad guys". Even if we limit it to tech, the list starts with someone way richer, then goes through four or five way-shadier people.

If you're OK with victim-shaming here, doesn't it say more about you than about Altman? What does it say about your viewpoint?

  • > about a person that, say what you will, is nowhere near the list of planetary "really bad guys". Even if we limit it to tech, the list starts with someone way richer, then goes through four or five way-shadier people.

    You really don't need to go that high up the ladder to find members of the 'list of planetary really bad guys'. Sam Altman is single-handedly responsible for starting the current DRAM crunch, and on the back of an untenable economic framework at that. He's also an enthusiastic participant in the AI bubble that threatens to cause a massive global economic depression when it pops. He's also part of the cabal wrecking the labor market (wages) by hyping up the 'AI will replace labor' narrative. On top of all that, he and his ilk are on a building spree of data centers that will guzzle huge amounts of energy and dump tonnes of extra CO2 into the atmosphere, as if there's no tomorrow. This wrecks the hard efforts of millions before him to rein in the damage caused by climate change. Needless to say, all of this has pretty deleterious effects on the economy, the biosphere, and the welfare of ordinary people, including the loss of innumerable lives.

    But does he care? He is one of those people who simply ignore the trail of serious damage and enormous suffering they leave in their wake, because they don't see anything beyond money - more money than they can spend in a hundred lifetimes! Nobody needs a justification to see him as one of those 'planetary bad guys'.

    > What does it say about your viewpoint?

    As someone else here said, it goes without saying that lobbing a Molotov cocktail at anyone is a no-no. I don't support physical violence in any form. Having said that,...

    > If you're OK with victim-shaming here

    It's sad that aristocratic society didn't learn anything from the murder of Brian Thompson. The 'victim' had caused thousands of preventable deaths per year, and his death saved thousands by forcing the industry to deal with the problem. Suddenly, even pacifists (like me) are left wondering whether the death was unethical. If true justice existed, the state would have stopped them from committing their crimes (aka professions), if not outright executed them for the lives lost. Whom will you choose when they pit their own lives against thousands of innocent lives? You can't claim victimhood after putting yourself in that position.

    I read the New Yorker article like most people here. I didn't find anything incendiary enough in it to provoke a Molotov attack. I wouldn't put it past him to have arranged it himself, given how much he lies and what he stands to gain from it. But let's assume that the attack is real and is connected to the report. The reply seems overly dramatic and self-righteous, given that the attack was against his iron gate! He's milking the situation to indulge in virtue signaling, sympathy farming, and gaslighting of his critics. This is one hell of a victim pose! But I have no sympathy to spare if it distressed him so much. He shouldn't be able to sleep anyway, if only he had a conscience. Advocating sympathy for the unsympathetic super-privileged is a bit tone-deaf under such circumstances. Evidently, nobody is in the mood to indulge such manipulations.