Comment by AdieuToLogic

7 days ago

> Good journalism would include ...

The link you provided begins with the declaration:

  Written by Amazon Staff

I am not a journalist and even I would question the "good journalism would include" assertion given the source provided.

> I find it somewhat overblown.

As I quoted in a peer comment:

  Dave Treadwell, Amazon's SVP of e-commerce services, told 
  staff on Tuesday that a "trend of incidents" emerged since 
  the third quarter of 2025, including "several major" 
  incidents in the last few weeks, according to an internal 
  document obtained by Business Insider. At least one of 
  those disruptions were tied to Amazon's AI coding assistant 
  Q, while others exposed deeper issues, another internal 
  document explained.
  
  Problems included what he described as "high blast radius 
  changes," where software updates propagated broadly because 
  control planes lacked suitable safeguards. (A control plane 
  guides how data flows across a computer network).
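The "high blast radius" failure mode described above is, roughly, the absence of staged-rollout limits in the control plane. A minimal sketch of what such a safeguard might look like; all names, stages, and thresholds here are hypothetical, not Amazon's actual tooling:

```python
# Illustrative "blast radius" safeguard: a change rolls out in stages,
# each stage capped at a fraction of the fleet, so a bad update cannot
# propagate everywhere at once. Purely a sketch, not real infrastructure.

STAGE_CAPS = [0.01, 0.10, 0.50, 1.0]  # max fraction of fleet per stage

def allowed_hosts(fleet_size: int, stage: int) -> int:
    """Return how many hosts a change may reach at a given rollout stage."""
    if not 0 <= stage < len(STAGE_CAPS):
        raise ValueError("unknown rollout stage")
    return max(1, int(fleet_size * STAGE_CAPS[stage]))

def can_promote(error_rate: float, threshold: float = 0.001) -> bool:
    """Gate promotion to the next stage on an observed health metric."""
    return error_rate <= threshold
```

A change without this kind of gate reaches the whole fleet in one step, which is presumably what "propagated broadly because control planes lacked suitable safeguards" refers to.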

If the above is "overblown", then it is the SVP who has overblown it. However, I have no evidence to believe that is the case.

Do you?

> I am not a journalist and even I would question the "good journalism would include" assertion given the source provided.

You've misunderstood. I was saying good journalism would include both sides, and hopefully primary sources alongside the reporting, so readers can evaluate both.

> If the above is "overblown", then it is the SVP who has overblown it. However, I have no evidence to believe that is the case.

It says "at least one of those disruptions were tied to Amazon's AI coding assistant Q, while others exposed deeper issues." You initially cited this article as evidence that coding agents don't produce working code. But the SVP is describing a broader trend of deployment and control plane failures, most of which are classic infrastructure problems that predate AI tooling entirely. You're attributing a systemic operational failure to AI code generation when even your own source doesn't support that.

More fundamentally, your original argument was that the premise "software can write working code" is flawed. One company having incidents, some of which involved AI tooling, doesn't prove that. Humans cause production incidents every single day. By your logic, the existence of any bug would prove humans can't write working code either.