
Comment by misnome

2 months ago

I wonder where the balance of “Actual time saved for me” vs “Everyone else's time wasted” lies in this technological “revolution”.

Agreed.

I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research) but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this pops up.

If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.

I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.

  • > If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.

    Then add to that the pressure to massively increase velocity and productivity with LLMs, and being that human becomes less practical. Humans get squeezed and reduced to being fall guys for when the LLM screws up.

    Also, humans are just not suited to be the monitoring/sanity-check layer for automation. It doesn't work for self-driving cars (because no one can maintain that level of vigilance for passive monitoring), and it doesn't work well for many other kinds of output, like code (because it's often much harder to reverse-engineer understanding from a review than to do the work yourself).

  • > but there needs to be a human in the loop.

    More than that - there needs to be a competent human in the loop.

  • We're going from being writers to editors: a particular human must still ultimately be responsible for signing off on their work, regardless of how it was put together.

    This is also why you don't have your devs do QA. Someone has to be responsible for, and focused specifically on, quality; otherwise responsibility dissolves into finger-pointing.

You're doing it wrong: you should just feed other people's AI-generated responses into your own AI tools and let the tool answer for you! The loop is then closed, no human time is wasted, and the only effect is the energy wasted running the AI tools. It's the perfect business model for turning energy into money.

Wasting time for others is a net positive, meaning jobs won't be lost, since some human still needs to make sense of the AI-generated rubbish.

  • Isn't curl open source? I was under the impression that they are all volunteers. This isn't a net positive. It will burn out the well-intentioned programmers and be a net negative for OSS.

This is not unique to AI tools. I've seen it with new expense tools that are great for accounting but terrible to use, with contract-review processes that make things easier on legal, and with infosec reviews of SaaS tools that everyone and their uncle already uses. It's always natural to push all the work off to someone else, because it feels like you saved time.

Yeah, when reviewing code nowadays, once I'm 5-10 comments in and it becomes obvious it was AI-generated, I tell them to go fix it and that I'll review it afterwards. The time waste is insane.

How much time did they save if they didn't find any vulnerability? They just wasted someone's time and nothing else.

Arguably that's been a part of coding for a long time ...

I spend a lot of time doing cleanup for a predecessor who took shortcuts.

Granted, I'm agreeing; I'm just saying the methods/volume may have changed.