Comment by jerojero

1 day ago

Obviously if you make the slides yourself then you'd know the content well.

The way to use these tools is not to one-shot your slide deck (unless you have plenty of time to learn the content) but to give them a base product you've already worked on and ask them to make it pretty, interesting, etc., and perhaps make small changes to the content, which you'd review and learn from.

You can probably use a knife as a fork but it wouldn't be the best way of using the knife.

> The way of using these tools is not to one-shot your slide deck

This line of thinking IMO is hopelessly naive. Yes, the responsible way to use AI and perhaps the way _you_ use it is to do some formatting/cleaning up/enhancement of slides that you primarily authored yourself. The reality is that _most_ people are using and will use AI as a way to breeze through as much work as possible either out of laziness or pressure and their "reviews" will primarily consist of "LGTM." Which is going to lead to an explosion of "did you even read this?" or "did you even test this?"-style disasters.

  • It’s even worse when you can’t push back with “did you even read this?”, because the politics haven’t evolved to constrain the slop.

    We are getting pre-solutioned massive epics, dozens of files, from senior leaders (non-ICs); when shit goes sideways, what do you do? Our jobs are already at risk just in general, and we have new KPIs around generative AI (as do those senior leaders). I’m not sticking my neck out to get it chopped off.

    Just last week I had to make some shit up in my uplevel status report to shift blame away from an AVP. Technically it’s my fault for not digging into the 30 files (and tanking my own metrics); I don’t even feel like it matters - the devs just hand that off to an LLM anyway to meet their KPIs. I’m just thankful it didn’t go to prod.

  • For some reason, I read "LGTM" as "Let's Go to Market," and spooked myself with the realization that that's absolutely the way this is all headed.