Comment by KoolKat23

1 year ago

Why does it need to answer more than that?

You underestimate the opportunity that exists for automation out there.

In my own case I've used it to make a simple custom browser extension for transcribing PDFs. I don't have the time and wouldn't have made the effort to build the extension myself; the task would have continued to be done manually. It took two hours to make and it works, and that's all I need in this case.

Perfection is the enemy of good.

> Perfection is the enemy of good.

Where exactly did I write anything about perfection? For me "AIs" are incapable of producing working code: https://news.ycombinator.com/item?id=41534233

  • You said you have to babysit each line of code. That's simply untrue: if it works, there's no need to babysit. The only reason you'd need to babysit every single line is if you're looking for perfection, or if it's something very obscure or unheard of.

    Your example is perhaps valid, but there are other examples where it does work, as I mentioned. I think the problem may be imprecise prompting: too general, or with too little logical structure. It's not like Google search; the more detailed and technical your language, the better. Assume it's a very precise expert. Its intelligence is very general, so it needs precision to avoid confusing the subject matter. Giving your request a well-structured logic also helps, as its reasoning isn't the greatest.

    Good prompting plus verifying the output is often still faster than typing it all manually.

    • > You said you have to babysit each line of code, I mean this is simply untrue, if it works there's no need to babysit

      No. It either doesn't work, or works incorrectly, or the code is incomplete despite the requirements, etc.

      > Your example is perhaps valid, but there are other examples where it does work as I mentioned.

      It's funny how I'm supposed to assume your examples are the truth, and nothing but the truth, while my examples are "untrue, you're a perfectionist, and perhaps you're right".

      > the more detail and more technical you speak the better

      As I literally wrote in the comment you're so dismissive of: "As for "using LLMs wrong", using them "right" is literally babysitting their output and spending a lot of time trying to reverse-engineer their behavior with increasingly inane prompts."

      > assume it's a very precise expert.

      If it were an expert, as you claim it to be, it would not need extremely detailed prompting. As it is, it's a willing but clumsy junior.

      To the point that, when asked to fix an unrelated mistake, it would rewrite code I had already fixed with invalid code.

      > Good prompting and verifying output

      How is it you repeat everything I say, and somehow assume I'm wrong and my examples are invalid?