I think this is correct; it's mediocre at a lot. It's only 10x when you don't know what you're doing or are doing something simple.
It's also more often than not good enough, which for a specialist is bad, and for most everyone else is absolutely sufficient.
I don't necessarily agree with that... I think that it's a matter of perspective. I find that AI tends to make a lot of the same errors I've seen from actual developers, especially when interacting with or leading foreign dev teams.
When I spend a lot of time in planning mode, I tend to get a lot more value out of the output and have to redirect far less. It also helps to establish your API interfaces, reference points, interactions, behaviors and even a lot of the test harnesses ahead of development cycles. You need to define a lot more ahead of letting it go.
I would say that I'm getting maybe 2.5x the value and 5-10x the output from AI... by value, I mean what the end user/customer cares about... by 5-10x I'm including the increased documentation, testing, etc.
Corollary: if you are bad at everything, you’ll find AI to be the greatest invention in the history of mankind
I am seeing this a lot.
The people I work with who find "AI" makes every part of their life easier were just bad at everything to begin with. The people who find "AI" making specific tasks easier have specialized skills and were previously relying on less specialized people for some support.
Before I switched over to a career in tech, I made my living from music - playing live, session work, etc.
Honestly, I'm probably one of the biggest skeptics when it comes to GenAI - but at least for music, the recent models (as in the past year) do not suck. They are actually really, really good for what they are.
I have yet to hear anything truly original produced by those models. They seem to converge to the mean, and end up sounding very commercial, very average - but in the sense of average "professional music". Suno can generate music that would have taken real people years to learn and thousands of dollars of equipment to make / produce, and it's pretty much ready for airplay - most listeners will not bat an eye.
Hell, these "AI artists" have been booked to festivals, since people can't hear the difference, and are enjoying the music.
I figure it will go the same way in other fields. The average consumer loses track of what's human made and what's AI made, and frankly won't care. The people "left behind" are the artists, craftspeople, etc. that are frustrated it came to this point.
Rather than an existential threat, I could see it becoming its own genre rather than infecting every other genre - when in the future people collectively realise it's kinda bad but has its place as an almost retro aesthetic.
Our idea of nostalgia was not that long ago. Also it could be generated on open weight local copyright free models that are super efficient in the future :P
There have been plenty of those "is it AI or a real person" music tests on the street that you can find on YouTube. Almost no one knows which one is AI. There’s nothing there to be able to put them in different buckets.
> They seem to converge to the mean
I think that was the point being made; if you're looking at it from the perspective of being really good at something, its tendency towards an averaged result is substandard.
I think this probably says more about music in general, and the long tail of people who think good enough is just spectacular, than about the brilliance of LLMs. Most music, just like most art, isn’t particularly original. It’s a shocker, I know, but there it is. Doesn’t mean it’s bad, just not particularly original.
Copying something that exists isn’t particularly difficult. It may require immense skill and incredible dexterity in the case of some musical instruments, but it doesn’t really require much more than time, patience and the ability to follow instructions. The blueprint already exists. With LLMs we now have the ability to skip the time and patience parts of the equation, we can produce mediocrity more or less instantly.
I don’t see this as particularly different from what happened at the turn of the last century and beyond, with machines being able to sew faster, carve wood and metals at a higher pace and precision, moving folks and goods between geographical points faster than ever before, etc. etc. It’s not much different from the IKEAs of the world making mediocre copies of brilliant designs, making fortunes selling to the large masses that think good enough is just great. Because honestly man, most of the time it probably is.
I’m not surprised people go to concerts to hear a recording made by an LLM either. People have been going to see DJs sling records for decades. It’s not the music, or the artist, it’s the community. Beyoncé is an amazing singer, but people don’t necessarily come to her shows to see just her, they come to see everyone else. They might say they want to see her, but they already have a thousand times in tickelitock and myfacespacebookgrams. They come to feel connected to something, to experience community.
LLMs are incredibly good at churning out stuff. Good stuff, bad stuff, just a ton of stuff. Nothing original but that’s ok, most things pre-LLMs weren’t either. We just have more of it now, and fewer trees. The creatives that are able to harness these tools will be able to do more with less. (Ostensibly at least, until the VC subsidies… subside.) Because they are creative they might be able to form an original idea and string together enough mediocrity to realize it. They’ll probably get drowned out in a sea of mediocre copies in the end, but that’s just the same as it always was. It’s just faster now.
The platform owners and hardware manufacturers will remain king until the technology can run on my TI calculator, maybe we’ll get there before the VC money runs out. No wonder Nvidia’s been killing it. Creativity and originality will return once this bubble bursts I’m sure, the world has this amazing ability to correct itself, even if violently so at times. Or we all die perhaps. Either way, all we can do I suppose is ride this wave of mediocrity into the sunset. :o)
> If you are really good at something, you'll find AI sucks at everything.
Nah, just at that something :-)
I think the point is, there's always someone good at what you are evaluating. Anyone with expertise in a domain will recognize how badly it sucks in that domain.
Don't get me wrong, AI can definitely be used as a tool by someone who knows what they're doing to avoid boilerplate. But anyone using it in a domain they aren't already an expert in will unknowingly accept AI f-ups.
Exactly. And this seems more and more to be an inherent property of AI, which is kind of calming.
This doesn't make sense. I think you mean "If you are really good at something, you'll find AI might not be as good as the something you are really good at"
I think they mean knowing that AI only looks to be good at something pulls back the curtain in a sense and afterwards all appearances of AI seeming good at something seem fake.
LLMs are just replacing consultants as the #1 generator of sloppy code.
> LLMs are just replacing consultants as the #1 generator of sloppy code.
The "consultants wrote sloppy code" is one of those memes that never die.
The only thing that differentiates consultants from you is the contract type. All broad strokes accusations are just a consequence of in-house employees feeling threatened by their presence and having a vested interest in portraying themselves as infinitely better than any prospective replacement. You also see the same attitude in junior devs who complain that everyone else's code is shit, but the mess they themselves created is always justifiable and understandable.
If you were moved from your project right now and you placed someone at your spot under probation, I will guarantee that your work would be extensively criticized for being an unmaintainable pile of hacks.
> The "consultants wrote sloppy code" is one of those memes that never die.
Your comment is one of those that feel intuitively right, because what you say makes sense... until faced with reality.
Most consultants that permanent employees are likely to encounter are those who will do a crappy job, then be gone when shit hits the fan. Source: anyone who's ever worked with them, myself included. Actually, both sides of the desk. They tend to do crappy jobs because those are the incentives they have.
You can argue till you're blue in the face, but your theory cannot push aside the actual experience of many if not most of us.
Of course, the occasional scenarios where the consultants are solid and doing top-notch work exist, but what matters is the majority of what happens... and it's not good.
So the meme won't die, because it reflects reality.
No, it’s mainly because consulting firms love running their bait-and-switch scam and use their junior consultants to do the actual work while the seniors move on to butter up the next sucker.
> If you are really good at something, you'll find AI sucks at everything.
I think it's the other way around. AI amplifies your software development skills. If you suck at software development, AI will follow your prompts and feedback and of course it will output an unmaintainable mess that barely works.
Here we are, listening to people who can barely put together a working website complaining that AI can barely put together a working website.
The majority of humans are average, as is the training set
They use a curated training set.
Curated by average people in most domains.
I can't even actually get a good essay from it, and I'm still writing each word myself.
I’ve had the opposite experience.
At the things I’m good at, AI is a huge boon. At the things I’m bad at, AI has little to offer.
For me, for Frontend, AI is great, because I know exactly what to do, so it’s very easy to talk it into doing it for me. I know what the problem is, I know what the solution is, and I have the language to communicate both. All that’s left is the trivialities of the implementation, I’ve already done all the hard work in my head.
No you don't understand, AI is VERY BAD at front-end and CSS to the point you cannot use anything.
It's not passable even slightly.
Everybody with experience knows that FE has always been "harder" than BE - but with BE the stakes are higher since it's the business. FE is often "just UI", and despite that being very important too, you can throw a UI away and start over a lot more easily than you can a BE platform.
Regardless, AI absolutely sucks at UI.
You overestimate how good the average front-end developer PR is that gets merged.
My experience developing a fairly standard SaaS is that AI (Figma, Claude) has produced UIs which look better, are more accessible, and get through review, QA, and product approval faster and more reliably than any of the FE developers I've worked with recently can.
Google Stitch is pretty awesome at front end layouts.
What is an example UI that AI would fail to create?