Comment by kakadu

6 days ago

A hammer hammers.

It hammers 100% of the time, with no failure.

It requires the same amount of labour on my part and delivers the same outcome every time.

That is what tools do: they act as an extension of you and let you do things not easily done otherwise.

If the hammer sometimes hammers, sometimes squeaks, and sometimes screws, then it requires extra labour on my part just to make it do what purpose-specific tools do, and that is where frustration arises.

Make it do one thing excellently, and then we can talk.

This is the kind of non-serious argument he's talking about. There are plenty of tools that require supervision to get good results. That doesn't make them useless.

My 3D printer sometimes prints and sometimes makes spaghetti. Still useful.

  • They never said it was useless. You just invented that straw man in your head.

3D printing is largely used for prototyping, where its lossy output is fine. But using it for production use cases requires fine-tuning until it can be 99.9% reliable. Unfortunately we can't do that for LLMs, which is why they're still only suitable for prototyping.

But you can adjust the output of an LLM and still come out ahead in both time and mental effort compared to writing it by hand. Unlike a 3D printer, it doesn't have to be right the first time around to be useful.

There is a big difference between "not entirely useless" and "best tool for the job".

You don't use 3D printing for large-scale production. If you agree that AI should only be used in prototype code and nothing else, then your argument makes sense.

Depending on your definition of "large-scale production", Prusa famously 3D prints a number of components in their production 3D printers.

IMO this is exactly the wrong mental model.

You can't hammer a nail 1000 times and pick the best-hammered nail.

But you can have this hammer iterating over the structure 24/7, finding imperfections in previously hammered nails.

This imposes arbitrary constraints on the model, and when you give a human just a hammer, they tend to view everything as a nail.