
Comment by keiferski

7 days ago

I have gotten much more value out of AI tools by focusing on the process and not the product. By this I mean that I treat them as loosely-defined brainstorming tools that expand my “zone of knowledge”, rather than as a way to create some particular thing.

In this way, I am infinitely more tolerant of minor problems in the output, because I’m not using the tool to create a specific output; I’m using it to enhance the thing I’m making myself.

To be more concrete: let’s say I’m writing a book about a novel philosophical concept. I don’t use the AI to actually write the book itself, but to research similar thinkers and works, critique my arguments, suggest topics to cover, etc. It functions more as a researcher and editor than as a writer – and in that sense it is extremely useful.

I think it's a U-shaped utility curve, with abstract planning on one side (your comment) and chore-like implementation on the other.

Your role sits between the two: deciding on the architecture, writing the top-level types, and settling the concrete system design.

And then AI tools help you zoom in and glue things together in an easily verifiable way.

I suspect that people who still haven't figured out how to make use of LLMs (assuming it's not just resentful, performative complaining, which it probably is) are expecting them to do it all. That never seemed very engineer-minded.

  • You don’t empathize with the very human reaction of “why bother?” I like to program, so that sentiment resonates with me. I’m fortunate to enjoy my work, so why would I want to stop doing what I enjoy?

    • Sure, don't use it if you don't want to. I'm referring to versions of the claim I see around here that LLMs are useless. Being so uncurious as to refuse to figure out what a tool might be useful for is an anti-engineering mindset.

      Just like you should be able to say something positive about JavaScript (async-everything instead of a bolted-on async sub-ecosystem, an event loop that has its upsides, single-threaded execution that has its upsides, first-class promises, etc.) even if you don't like using it. (A minimal sketch of the promise point follows after this thread.)

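For concreteness, here is a minimal sketch of two of the JavaScript upsides named above: first-class promises and single-threaded, event-loop concurrency. This is an illustration only, written in TypeScript for a Node-style runtime; `sleep` and `fetchGreeting` are hypothetical helpers, not anything from the thread.

```ts
// First-class promises: Promise is a built-in value you can construct,
// pass around, and await, with no third-party async library bolted on.

// Hypothetical helper: wrap setTimeout in a Promise.
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Hypothetical stand-in for real async work (network, disk, etc.).
async function fetchGreeting(name: string): Promise<string> {
  await sleep(100); // yields to the event loop instead of blocking a thread
  return `hello, ${name}`;
}

async function main() {
  // Concurrency without threads: both calls are in flight at once,
  // interleaved by the single-threaded event loop.
  const [a, b] = await Promise.all([
    fetchGreeting("ada"),
    fetchGreeting("grace"),
  ]);
  console.log(a, b);
}

main();
```

Because everything runs on one thread, the two in-flight calls cannot race on shared state, which is one way to read the "single-threaded has its upsides" point.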

Agree - I tend to think of it as offloading thinking time. Delegating work to an agent, at the quality I've seen, just becomes more work for me. But conversations where I control the context are both fun and generally insightful, even if I decide the initial idea isn't a good one.

  • That is a good metaphor. I frequently use ChatGPT in a way that basically boils down to: I could spend an hour thinking about and researching X basic thing I know little about, or I could have the AI write me a summary that is 95% good enough but only takes a few seconds of my time.