Comment by xpe

11 hours ago

I'm aiming for intellectual honesty here. I'm not taking a side for a person or an org, but I'm taking a stand for a quality bar.

> They knew they had deliberately made their system worse

Define "they". The teams that made particular changes? In real-world organizations, not all relevant information flows to all the right places at the right time. Mistakes happen because these are complex systems.

Define "worse". There are a lot of factors involved. With a given amount of capacity at a given time, some aspect of "quality" has to give. So "quality" is a judgment call. It is easy to use a non-charitable definition to "gotcha" someone. (Some concepts are inherently indefensible. Sometimes you just can't win. "Quality" is one of those things. As soon as I define quality one way, you can attack me by defining it another way. A particular version of this principle is explained in The Alignment Problem by Brian Christian, by the way, regarding predictive policing iirc.)

I'm seeing a lot of moral outrage but not enough intellectual curiosity. It is embarrassingly easy to say "they should have done better" ... ok. Until someone demonstrates to me that they understand the complexity of a nearly-billion dollar company rapidly scaling with new technology, growing faster than most people comprehend, I think ... they are just complaining and cooking up reasons so they are right in feeling that way. The possible truth that complex systems are hard to do well apparently doesn't scratch that itch for many people. So they reach for blame. This is not the way to learn. Blaming tends to cut off curiosity.

I suggest this instead: redirect if you can to "what makes these things so complicated?" and go learn about that. You'll be happier, smarter, and ... most importantly ... be building a habit that will serve you well in life. Take it from an old guy who is late to the game on this. I've bailed on companies because "I thought I knew better". :/

> Define "they". The teams that made particular changes? In real-world organizations, not all relevant information flows to all the right places at the right time. Mistakes happen because these are complex systems.

Accidentally/deliberately making your CS teams ill-informed should not function as a get out of jail free card. Rather the reverse.

  • > Accidentally/deliberately making your CS teams ill-informed should not function as a get out of jail free card. Rather the reverse.

    Thanks for your reply. I very much agree that intention or competence does not change responsibility and accountability. Both principles still apply.

    In this comment, I'm mostly in philosopher and rationalist mode here. Except for the [0] footnote, I try to shy away from my personal take about Anthropic and the bigger stakes. See [0] for my take in brief. (And yes I know brief is ironic or awkward given the footnote is longer than most HN comments.) Here's my overall observation about the arc of the conversation: we're still dancing around the deeper issues. There is more work to do.

    It helps to recognize the work metaphors are doing here. You chose the phrase "get out of jail free". Intentionally or not, this phrase smuggles in some notion of illegality or at least "deserving of punishment" [1]. The Anthropic mistakes have real-world impacts, including upset customers, but (as I see it) we're not in the realm of legal action nor in the realm of "just punishment", by which I mean the idea of retributive justice [2].

    So, with this in mind, from a customer-decision point of view, the following are foundational:

        Rat-1: Pay attention to _effects_ of what Anthropic did.
    
        Rat-2: Pay attention to how these effects _affect me_.
    

    But when building on this foundation, I need to be careful:

        Rat-3: Don't one-sidedly or selectively re-introduce *intent* into my other critiques. If I get back to diagnosing or inferring *intent*, I have to do so while actually seeking the whole truth, not just selecting explanations that serve my interests.
    
        Rat-4: When in a customer frame, I don't benefit from "moralizing" ... my customer POV is not well suited for that. As a customer, my job is to *make a sensible decision*. Should I keep using Claude? If so, how do I adjust my expectations and workflow?
    

    ...

    Personally, when I review the dozens of comments I've read here, a common theme I see is disappointment. I relatively rarely see constructive, truth-seeking retrospective work. On the other hand, I see Anthropic going out of their way to communicate their retrospective while admitting they need to do better. This is why I say this:

        Of course companies are going to screw up. The question is: as a customer, am I going to take a time-averaged view so I don't shoot myself in the foot by overreacting?
    

    [0]: My personal big-picture take is that if anyone in the world, anywhere, builds a superintelligent AI using our current levels of understanding, there is no expectation at all that we can control it safely. So I predict, with confidence close to 90% or higher, that civilization and humanity as we know it won't last another 10 years after the onset of superintelligence (ASI).

    This is the IABIED argument -- plenty of people write about it -- though imo few of the book reviews I've seen substantively engage with the core arguments. Instead, most reviewers reject it for the usual reasons: it is a weird and uncomfortable argument, and the people making it seem wacky or self-interested to some people. I do respect reviewers who disagree based on model-driven thinking. Everything else to me reads like emotional coping rather than substantive engagement.

    With this in mind, I care a lot about Anthropic's failures and what they imply about how it participates in the evolving situation.

    But I care almost zero about conventional notions of blame. Taking materialism as true, free will is at bottom a helpful fiction for people. For most people, it is the reality we take for granted. The problem is that blame is often just an excuse for scapegoating people for their mistakes, when in fact these mistakes just flow downstream from the laws of physics. Many of these mistakes are nearly statistical certainties when viewed through the lens of system dynamics or sociology or psychology or neuroscience or having bad role models or being born into a not-great situation.

    To put it charitably, blame is what people do when they want to pin s--tty consequences on the actions of people and systems. That sense bothers me less; I'm trying to shift thinking away from the kind of blaming that leads to bad predictions.

    [1]: From the Urban Dictionary (I'm not citing this as "proof of credibility" of the definition):

        "A get out of jail free card is a metaphorical way to refer to anything that will get someone out of an undesirable situation or allow them to avoid punishment."
    

    ... I'm only citing UD so you know what I mean. When I use the word dictionary, I mean a catalog of usage, not a prescription of correctness.

    [2]: https://plato.stanford.edu/entries/justice-retributive/