Comment by notpublic

16 hours ago

"A report was recently published by an AI-research company called Anthropic. They are the ones who notably created Claude, an AI-assistant for coding. Personally, I don’t use it but that is besides the point."

Not sure if the author has tried any other AI coding assistants. People who haven't tried a coding AI assistant underestimate its capabilities (though, unfortunately, those who use them often overestimate what they can do too). Having used Claude for some time, I find the report's assertions quite plausible.

Yup. One recent thing I've started using it for is debugging network issues (or whatever) inside actual servers. Just give it permission to SSH into the box and let it investigate for itself.

Super useful to see it isolate the problem using tcpdump, inspecting route tables, and so on.
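For anyone curious what that access pattern looks like in practice, here is a minimal Python sketch of the idea: hand the assistant (or, in this toy version, a script) an SSH session restricted to a short allowlist of read-only diagnostics. The host name and command list are hypothetical, not anything from the report.

    # Minimal sketch: run an allowlist of read-only network diagnostics
    # over SSH, the kind of narrow access you might grant an assistant.
    # HOST is hypothetical; tighten the allowlist for real use.
    import subprocess

    HOST = "admin@server.example"
    DIAGNOSTICS = [
        "ip route show",                        # routing table
        "ss -tnp",                              # open TCP sockets + owning processes
        "timeout 10 tcpdump -nn -c 50 -i any",  # short packet capture (often needs root)
    ]

    for cmd in DIAGNOSTICS:
        print(f"$ {cmd}")
        result = subprocess.run(
            ["ssh", HOST, cmd],
            capture_output=True, text=True, timeout=60,
        )
        print(result.stdout or result.stderr)

The interesting design decision is the allowlist itself: restricting the session to read-only commands like these lets the tool isolate a problem without being able to change anything on the box.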

There are lots of use cases where this is useful, but you need to know its limits and, perhaps even more importantly, be able to jump in when you see it going down the wrong path.

> Personally, I don’t use it but that is besides the point.

This jumped out at me, too. This pattern shows up a lot on HN, where commenters proudly declare that they don't use something but then write as if they know it better than anyone else.

The pattern is especially common in AI threads, where someone declares they don't use any of the tools but then positions themselves as an expert on them, as this article does. The same thing happens in every thread about Apple products: people announce they haven't used Apple products in years, then write about how bad modern Apple products are to use, despite having just told us they aren't familiar with them.

I think these takes are catnip to contrarians, but I always find it unconvincing when someone tells me they're not familiar with a topic yet wants me to believe they have unique insights into it.

  • Whether the author uses any AI tools at all (let alone Claude specifically) is quite literally beside the point, which is readily apparent from actually reading the article rather than going into it with your hackles raised, ready to "defend AI".

  • welcome, you're well along the path of realizing that most of the people on this site don't know what they're talking about

The article doesn't claim that the tool couldn't plausibly do the stated task. It talks about the report, and how the report lacks the details needed to make us believe the tool actually did the task. Maybe the thing they're describing could happen. That doesn't mean we have any evidence that it did.

  • If you know what to look for, the report actually has quite a few details on how they did it. In fact, when the report came out, all it did was confirm my suspicions.

The author's arguments explicitly don't dispute plausibility. They state, accurately, that mere plausibility is a misleading basis for a report like this, that the report provides nothing but plausibility, and that it is therefore of low quality and dubious motivation.

Pointing out Anthropic's lack of any evidence for their claims doesn't require taking a position on AI agent capability at all.

Think better.

They should also get a different AI to write the lede, as it is pretty empty once we get past the "besides (sic) the point".