Comment by marcusb

1 day ago

This reminds me of the story a few days ago about "what is your best prompt to stump LLMs", where many of the second-level replies were links to current chat transcripts showing the LLM handling the prompt without issue.

I think there are a few problems at play: 1) people who don't want the tools to have value, for various reasons, and have therefore decided the tools don't have value; 2) people who tried the tools six months or a year ago, had a bad experience, and gave up; and 3) people who haven't figured out how to make good use of the tools to improve their productivity (this one seems to be heavily influenced by various grifters who overstate what the coding assistants can do, and by people underestimating the effort it takes to get good output from the models).

4) People who like having reliable tools, which free them from "reviewing" the output of these tools to check whether the tool made an error.

Using AI is like driving a car that decides to turn even when you keep the steering wheel straight. Randomly. To varying degrees. If you like this because it sometimes lets you take a curve without having to steer, you do you. But some people prefer a car that turns when, and only when, they turn the wheel.

  • That's covered under point #1. I'm not claiming these tools are perfect. Neither are most people, but from the standpoint of an employer, the question is going to be "does the tool, after accounting for errors, make my employees more or less productive?" A lot of people are finding that the answer, today, is that the tools offer a productivity advantage.