Comment by martinsnow

2 months ago

It seems like investors have bought into the idea that LLMs have to improve no matter what. I see it in the company I'm currently at: no matter what, we have to work with whatever bullshit these models can output. I am, however, looking at more responsible companies for new employment.

I'd argue a lot of the current AI hype is fuelled by hopium that models will improve significantly and hallucinations will be solved.

I'm a (minor) investor, and I see this a lot: People integrate LLMs for some use case, lately increasingly agentic (i.e. in a loop), and then when I scrutinise the results, the excuse is that models will improve, and _then_ they'll have a viable product.

I currently don't bet on that. Show me you're using LLMs smartly and have solid solutions for _today's_ limitations, and it's a different story.

  • Our problem is that non-coding stakeholders produce garbage-tier frontend prototypes and expect us to include whatever garbage they created in our production pipeline! Wtf is going on? That's why I'm polishing my resume and getting out of this mess. We're controlled by managers who don't know wtf they're doing.

    • Maybe a service mentality would help you make that bearable for as long as it still lasts? For my consulting clients, I make sure I inform them of risks, problems and tradeoffs the best way I can. But if they want to go ahead against my recommendation, so be it; it's their call. A lot of technical decisions are actually business decisions in disguise. All I can do is advise them otherwise and perhaps get them to put a proverbial canary in the coal mine: some KPI to watch, or something that otherwise alerts them that the thing I feared would happen did happen. And perhaps a rough mitigation strategy, so we agree ahead of time on how to handle that.

      But I haven't dealt with anyone sending me vibe code to "just deploy", that must be frustrating. I'm not sure how I'd handle that. Perhaps I would try to isolate it and get them to own it completely, if feasible. They're only going to learn if they have a feedback loop, if stuff that goes wrong ends up back on their desk, instead of yours. The perceived benefit for them is that they don't have to deal with pesky developers getting in the way.

    • It's been refreshing to read these perspectives as a person who has given up on using LLMs. I think there's a lot of delusion going on right now. I can't tell you how many times I've read that LLMs are huge productivity boosters (specifically for developers) without a shred of data/evidence.

      On the contrary, I started to rely on them despite them constantly providing incorrect, incoherent answers. Perhaps they can spit out a basic React app from scratch, but I'm working on large code bases, not TODO apps. And the thing is, over the year-plus I used them, I got worse as a developer. Using them hampered my learning of another language I needed for my job (my fault; I relied on LLMs instead of reading docs and experimenting myself, which I assume a lot of people do, even experienced devs).
