Comment by reconnecting
19 hours ago
> OpenAI's sales and marketing expenses increased to _$2 billion_ in the first half of 2025.
Looks like AI companies spend enough on marketing budgets to create the illusion that AI makes development better.
Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills" for developers' brains will be glad about the choice they made.
Well. I was a sceptic for a long time, but a friend recently convinced me to try Claude Code and showed me around. There's an open source project I regularly revive: I code for a bit, wrestle with toil and dependency updates, and lose the joy before I really get a lot done, so I stop again.
With Claude, all it took to fix all of that drudgery was a single sentence. In the last two weeks, I implemented several big features, fixed long-standing issues and did migrations to new major versions of library dependencies that I wouldn't have tackled at all on my own—I do this for fun after all, and updating Zod isn't fun. Claude just does it for me, while I focus on high-level feature descriptions.
I’m still validating and tweaking my workflow, but if I can keep up that pace and transfer it to other projects, I just got several times more effective.
This sounds to me like a lack of resource management, as tasks that junior developers might perform don't match your skills, and are thus boring.
As a creator of an open-source platform myself, I find putting a semi-random word generator in front of users too unreliable.
Moreover, I believe it creates bad habits. I've seen developers forget how to read documentation and trust AI instead, and of course, as a result, the AI makes mistakes that are hard to debug or introduces security issues that are easy to overlook.
I know this sounds like a luddite talking, but I'm still not convinced that AI in its current state can be reliable in any way. However, because of engineers like you, AI is learning to make better choices, and that might change in the future.
> as tasks that junior developers might perform don't match your skills, and are thus boring.
Yeah, this sounds interesting and matches my experience a bit. I was trying out AI over Christmas because people I know were talking about it. I asked it to implement something (a refactoring for better performance) that I thought should be simple; it did, the result looked amazing, and all the tests passed too! When I looked into the implementation, though, the AI got the shape right, but the internals were more complicated than needed and were wrong. Nonetheless it got me started on fixing things, and the fix came together quite quickly.
The model's performance in this case wasn't great, though perhaps that's also because I'm new to this and don't know how to prompt it properly. But at least it's interesting.
That’s a totally fair take IMHO, and I’m very much conflicted on several fronts here. For example: would I want my juniors to use an agent? No; probably not even the mid-levels. As you say, it’s easy to form bad habits, and you need a good intuition for architecture and complexity, otherwise you end up with broken, unmaintainable messes. But if you have that, it’s like magic.
> OpenAI's sales and marketing expenses increased to _$2 billion_ in the first half of 2025.
I believe they include the costs of free ChatGPT users in that $2B. Worth it, considering the conversion rate they are getting (5-6% in Oct 2024 [1]).
[1] https://www.cnet.com/tech/services-and-software/openai-cfo-p...
> Let's wait one more year, and perhaps everyone who didn't fall victim to these "slimming pills" for developers' brains will be glad about the choice they made.
In that year, AI will get better. Will you?
AI is only getting better at consuming energy and wasting the time of the people who communicate with this glorified T9. However, if talented engineers continue to use it, it might eventually produce more accurate replies as a result.
To answer your question: no matter how much I personally degrade or improve, I will not be able to produce anything even remotely comparable to the negative impact that AI brings to humanity these days.
I see this logical pairing a lot.
1) AI is basically useless, a mere semi-random word generator.
2) And it is so powerful that it is going to hurt (or even destroy) humanity.
This is called "having your cake, and letting it eat you too".