
Comment by jrjeksjd8d

2 days ago

The quoted revenue numbers seem insane, but I guess it's the result of corporate deals where every developer seat is hundreds of dollars a month?

At my job, they've been publicly promoting whoever is on top of the "AI use dashboard" while our whole product falls apart. Surely this house of cards has to collapse at some point; better to grab the public's money before it does.

I wish there was some sort of community project where engineers could whistleblow about their product falling apart through misguided AI pushes.

I see it everywhere in my private circles, but I'm not sure the story is truly reaching the broader public.

I've been through plenty of fads and smoke during my career, but this is the first time I'm actually worried about things falling apart.

  • The thing is even if it isn't adopted to write code, ChatGPT is part of shadow IT everywhere now. The number of screenshares I get on where a customer has ChatGPT giving them wrong information about AWS networking is staggering. Even AWS support is barfing out LLM responses. Even if it doesn't torpedo the code of your product it will negatively impact how people use your product and the platforms you rely on.

  • >I wish there was some sort of community project where engineers could whistleblow about their product falling apart through misguided AI pushes.

It would be an awesome thing to see. But it would need to be hosted in another country, like PirateBay.

    Also, what is their incentive?

  • Yeah, it is wild seeing with my own eyes how bad these tools are in a lot of cases. We do have some vibe coders on our team, but they are basically banned from my current project because they completely ruin the design and nuke throughput. HN would have me believe I'm a Luddite who shouldn't be writing code, however. I truly do not understand how to reconcile this experience, and many times the topic is too complicated to explain to someone who isn't an engineer. AI is the ultimate Dunning-Kruger machine: you cannot fix what you do not know, because you do not know that you did not know.

    As you say, I think things are just going to fall apart and we're just going to have to learn the hard way.

    • No, these tools are really great in a lot of cases. But they still don't have general intelligence or true understanding of anything, so if people use them wrong and rely on their output because it looks good rather than because they verified it, that's on the people using them.


At least I’m not alone.

My company has a vibe coded leaderboard tracking AI usage.

Our token usage and number of lines changed will affect our performance review this year.

  • I have started using the most token-intensive model I can find and asking for complicated tasks (rewrite this large codebase, review the resulting code, etc.)

    The agent will churn in a loop for a good 15-20 minutes and make the leaderboard number go up. The result is verbose and useless but it satisfies the metrics from leadership.

  • > Our token usage and number of lines changed will affect our performance review this year.

    The AI-era equivalent of that old Dilbert strip about rewarding developers directly for fixing bugs ("I'm gonna write me a new minivan this afternoon!"): just substitute intentional bug creation with setting up a simple agent loop that burns tokens on random, unnecessary refactoring.

  • > Our token usage and number of lines changed will affect our performance review this year.

    I'm going nuts, because as I was "growing up" as a programmer (20+ years ago), it was stuff like this [1] that made me (and people like me) proud to be called a computer programmer. I'm copy-pasting it here for future reference, and because things have turned out so bleak:

    > They devised a form that each engineer was required to submit every Friday, which included a field for the number of lines of code that were written that week. (...)

    > Bill Atkinson, the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementer, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code. (...)

    > He was just putting the finishing touches on the optimization when it was time to fill out the management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000.

    [1] https://www.folklore.org/Negative_2000_Lines_Of_Code.html

  • Could you both name and shame?

    • Name pretty much any company. Every one of my friends has said their company is doing this, across three countries, mind you. Especially if they already use the Microsoft Office suite; those folks got sold Copilot on a deal, it seems.


I feel like a crazy person, especially when I read HN. Half or more of the comments on this thread say the game is over even for writing code. Then at my job, I see people break things at a rate I can't personally keep up with. Worse, I hear more and more colleagues talk about mandated AI tooling usage and massive regression rates. My company isn't there yet, but I feel it is around the corner.

I mean, they claim they've got $15B in consumer revenue and 900M weekly active users.

If that's accurate, that means what, like 11% of the human population is using their product, and the average user pays about $17 a year?

That seems incredibly high, especially for poorer countries.
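For what it's worth, the per-user arithmetic is easy to sanity-check as a back-of-the-envelope sketch (the world population figure of roughly 8.1 billion is my assumption, not something from their announcement):

```python
# Back-of-the-envelope check on the quoted figures.
revenue = 15e9          # claimed consumer revenue, USD per year
weekly_actives = 900e6  # claimed weekly active users
world_pop = 8.1e9       # rough world population (assumption)

share_of_humanity = weekly_actives / world_pop
revenue_per_user = revenue / weekly_actives

print(f"{share_of_humanity:.0%} of humanity")        # ~11%
print(f"${revenue_per_user:.2f} per user per year")  # ~$16.67
```

So the "11%" holds up, and the implied average spend is under $1.50 a month per weekly-active user, which makes the number slightly less insane than a per-seat reading would suggest.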

Still, I do know that if I go to a random cafe in the developed world and peep at people's screens, I'm very likely to see a ChatGPT window open, even on wildly non-technical people's screens.