Comment by ronsor
1 month ago
To anyone who thinks about our current situation for more than a few minutes, AI is
* Clearly useful to people who are already competent developers and security researchers
* Utterly useless to people who have no clue what they're doing
But the latter group's incompetence does not make AI useless, in the same way that a fighter jet is not useless just because a toddler cannot pilot it.
The metaphor is more apt if you change fighter jets to road vehicles, which are driven by most of the population, and whose incompetent use can very much affect you.
Imagine what your doctors will be like two generations down the road.
This is currently true. There was a perfect illustration of the other side of the coin with curl again a few weeks ago (https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...)
> Clearly useful to people who are already competent developers
> Utterly useless to people who have no clue what they're doing
> the same way that a fighter jet is not useless
AI is currently like a bicycle, while we were all running hills before.
There's a skill barrier, and it's getting lower each week.
The marketing goal is to say "Push the pedal and it goes!" as if it were a car on a highway, but it is a bicycle: you have to keep pedaling.
The effect on the skilled-in-something-else folks is where this is making a difference.
If you were training as a runner, the goal was to strengthen your tendons to handle the pavement, and a 2hr marathon pace is almost impossible to reach.
Like a bicycle makes a <2hr marathon distance "easy" for someone who does competitive rowing, while remaining impossible for those who have been training to do foot races forever.
Because the bicycle moves the problem from unsprung weight and energy recovery into a VO2 max problem, plus a novel aerodynamics problem.
And if you need to walk a rock garden, now you have to lug the bike along with you. It is not without its costs.
This AI thing is a bicycle for the mind, but a lot of people go only downhill and with no brakes.
True, AI moves the problem somewhere else. But I’m not sure the new problems are actually easier to solve in the long run.
I’m a reasonable developer with 30+ years of experience. Recently I worked on an API design project and had to generate a mock implementation based on a full OpenAPI spec. Exactly what Copilot should be good at. No amount of prompting could make it generate a fully functional Spring Boot project that both served the mock API and presented the spec at a URL at the same time. Yet it did a very neat job at just the mock for a simpler version of the same API a few weeks prior. Go figure.
Solid metaphor. No notes.
Though it might be bad news for the companies that got big by saying they'd be able to make infinite money selling fighter-jets to all the children of the world.
> “A good tool in the hands of a competent person is a powerful combination,” says Daniel Stenberg.
Daniel Stenberg has been vocal about AI generated patches in the past, and it's interesting to see him changing course here.
https://media.ccc.de/v/froscon2025-3407-ai_slop_attacks_on_t...
He's been against people submitting garbage they don't understand, not AI as a whole.
> * Utterly useless to people who have no clue what they're doing
I disagree.
I'm making a board game of 6 colors of hexes, and I wanted to be able to easily edit the board. The first time around, I used a screenshot of a bunch of hexagons and used paint to color them (tedious, ugly, not transparent, poor quality). This time, I asked ChatGPT to make an SVG of the board and then make a JS script so that clicking on a hex could cycle through the colors. Easier, way higher quality, adjustable size, clean, transparent.
It would've taken me hours to learn and set that up for myself, but ChatGPT did it in 10min with some back and forth. I've made one SVG in my life before this, and never written any DOM-based JS scripts.
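For what it's worth, the click-to-cycle part is only a few lines. A minimal sketch of the kind of thing ChatGPT likely produced (the `polygon.hex` selector and the specific six-color palette are assumptions, not the actual generated code):

```javascript
// Assumed palette of six hex colors; any ordered list works.
const COLORS = ["red", "orange", "yellow", "green", "blue", "purple"];

// Pure helper: given the current fill, return the next color in the cycle.
// An unrecognized fill (indexOf returns -1) restarts the cycle at COLORS[0].
function nextColor(current, colors = COLORS) {
  const i = colors.indexOf(current);
  return colors[(i + 1) % colors.length];
}

// DOM wiring (browser only): cycle each hexagon's fill on click.
// Assumes the SVG hexagons are <polygon class="hex"> elements.
if (typeof document !== "undefined") {
  document.querySelectorAll("polygon.hex").forEach((hex) => {
    hex.addEventListener("click", () => {
      hex.setAttribute("fill", nextColor(hex.getAttribute("fill")));
    });
  });
}
```

Keeping the cycling logic in a pure function and the DOM wiring separate is also what makes this kind of script easy to tweak later (e.g. swapping the palette) without relearning the whole thing.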
Yes, it's a toy example, but you don't have to know what you're doing to get useful things from AI.
> but ChatGPT did it in 10min with some back and forth
You might be underestimating the expertise you applied in these 10 minutes. I know I often do.
> it's a toy example
This technology does exceptionally well on toy examples, I think because there are much fewer constraints on acceptable output than ‘real’ examples.
> you don't have to know what you're doing to get useful things from AI
You do need to know what is useful though, which can be a surprisingly high bar.
Yeah, you clearly don't have "no clue" what you're doing in this example, though.
You're someone who knows the difference between a PNG and an SVG, knows enough JavaScript to know that "DOM-based" JS is a thing, and has presumably previously worked in software/IT.
You're smart enough to know things, and you're also smart enough to know there's a lot that you don't know.
That's a far cry from the way a lot of laypeople, college kids, and fully nontechnical people try to use LLMs.
It seems to me, then, that you did not learn what you would otherwise have learned, and so did not receive the critical thinking and general halo of knowledge improvements which come with learning.
You sound at least somewhat experienced. You knew you wanted an SVG and that JavaScript could be hooked into it. That's a pretty reasonable design starting point.
I agree AI is not "utterly useless", but its usefulness is extremely limited. If it writes all of the code for you, the project tends to reach an unmaintainable state very quickly, requiring manual review or guidance to recover.