Comment by dvt
6 days ago
Fully agree with @tptacek here:
> But AI is also incredibly — a word I use advisedly — important. It’s getting the same kind of attention that smart phones got in 2008, and not as much as the Internet got. That seems about right.
However, I just don't think the AI coding part is that interesting or future-thinking. We're seeing so much more progress in semantic search, tool calling, general-purpose uses, robotics; I mean, DeepMind just won a Nobel, for goodness' sake.
Don't get me wrong, I use ChatGPT to write all kinds of annoying boilerplate, and it's not too bad at recalling weird quirks I don't remember (yes, even for Rust). But hard problems? Real problems? Zero shot. Novel problems? No way.
> But I’ve been first responder on an incident and fed 4o — not o4-mini, 4o — log transcripts, and watched it in seconds spot LVM metadata corruption issues on a host we’ve been complaining about for months.
I'm going to go ahead and press (X) to doubt on this anecdote. You've had an issue for months and the logs were somehow so arcane, so dense, so unparseable that no one spotted these "metadata corruption issues"? I'm not going to accuse anyone of blatant fabrication, but this is very hard to swallow.
Listen, I also think we're on the precipice of re-inventing how we talk to our machines; how we automate tasks; how we find and distribute small nuggets of data. But, imo, coding just ain't it. Donald Knuth calls computer programming an art, and to rob humanity of practicing not just coding, but any art, I'd argue, would be the most cardinal of sins.
> I'm going to go ahead and press (X) to doubt on this anecdote. You've had an issue for months and the logs were somehow so arcane, so dense, so unparseable that no one spotted these "metadata corruption issues"? I'm not going to accuse anyone of blatant fabrication, but this is very hard to swallow.
I have this problem with a lot of LLM miracle anecdotes. There’s an implication that the LLM did something that was eluding people for months, but when you read more closely they don’t actually say they were working on the problem for months. Just that they were complaining about it for months.
> There’s an implication that the LLM did something that was eluding people for months, but when you read more closely they don’t actually say they were working on the problem for months. Just that they were complaining about it for months.
On the other hand, we've all probably had the experience of putting out a fire and wanting time to track down an issue, only to be told not to bother since "everything is working now". Sometimes you spend months complaining about something because the people you're complaining to don't have the time to dive into an issue. Even if it would have taken mere hours for a human to hunt down the issue, someone still has to be given those hours to work on it. By contrast, copying a bunch of logs into an AI is nearly free.
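Nearly free in effort as well as attention: a first pass is a dozen lines of glue. A minimal sketch, assuming the `openai` Python client with OPENAI_API_KEY set; the model name, prompt, and script name are illustrative, not anything from the thread:

```python
# Sketch of the "nearly free" triage pass: pipe raw logs into a chat
# model and ask for candidate root causes. Assumes `pip install openai`
# and OPENAI_API_KEY in the environment; prompt wording is illustrative.
import sys

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage(log_text: str) -> str:
    """Ask a chat model for a first-pass read of raw incident logs."""
    response = client.chat.completions.create(
        model="gpt-4o",  # the anecdote above used 4o specifically
        messages=[
            {
                "role": "system",
                "content": (
                    "You are helping triage a production incident. "
                    "Given raw logs, list the most likely root causes "
                    "and quote the specific lines supporting each."
                ),
            },
            {"role": "user", "content": log_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Hypothetical usage: journalctl -b | python triage.py
    print(triage(sys.stdin.read()))
```

The main practical limit is the context window; logs from a long-running incident would need truncating or chunking before a pass like this.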
> You've had an issue for months and the logs were somehow so arcane, so dense, so unparseable that no one spotted these "metadata corruption issues"? I'm not going to accuse anyone of blatant fabrication, but this is very hard to swallow.
Eh, I've worked on projects where, because of un-revisited logging decisions made in the past, 1-10k error logs PER MINUTE were normal things to see. Finding the root cause of an issue often boiled down to multiple attempts at cleaning up the logs to remove noise, fixing tangentially related issues, and waiting for it to happen again. More than one root cause was discovered by the sheer happenstance of looking at the right subset of the logs at the right moment in time. I can absolutely buy that a system built for parsing large amounts of text and teasing patterns out of it found in minutes what humans could not track down over months.
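For what it's worth, the "teasing patterns out" step doesn't even need a model to get started: collapsing lines into templates makes the rare shapes visible on their own. A hypothetical sketch (the masking rules are mine, not anything described above):

```python
# Sketch: collapse 1-10k errors/minute into a handful of templates so
# the rare lines stand out. Masking rules are illustrative; a real log
# format needs its own.
import re
import sys
from collections import Counter

# Mask the most specific tokens first so later rules don't split them.
MASKS = [
    (re.compile(r"\b[0-9a-f]{8,}\b", re.I), "<hex>"),    # ids, hashes
    (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "<ip>"),  # IPv4 addresses
    (re.compile(r"\b\d+\b"), "<n>"),                     # counters, pids
]

def template(line: str) -> str:
    """Reduce a log line to its shape by masking the variable tokens."""
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line.strip()

def main() -> None:
    counts = Counter(template(line) for line in sys.stdin if line.strip())
    # Rarest templates first: the per-minute noise collapses into a few
    # huge buckets, and the oddball failures float to the top.
    for shape, n in sorted(counts.items(), key=lambda kv: kv[1]):
        print(f"{n:>8}  {shape}")

if __name__ == "__main__":
    # Hypothetical usage: grep ERROR app.log | python logshape.py
    main()
```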