Comment by roncesvalles

10 hours ago

I like to think I'm very much abreast of the bleeding edge, because I feel this anxiety myself. At this point I can't code without LLMs: I keep noticing things I could hand off to an LLM that it would do faster, so there's no reason to do them myself (although I still could).

But the overall gain in efficiency is still a low single-digit speedup. It's not a multi-OOM (orders-of-magnitude) speedup, like going from doing 1000 long divisions by hand over many days to letting a computer program do them in a split second. The "wall" of irreducible complexity was never OOMs away from how modern pre-AI software development was done.

For me the speed-up has not been in the things I was already expert at and could do quickly at high quality. It has been in skipping the learning curve for adjacent things.

  • Does it make the curve easier or do you skip learning it entirely and just trust the LLM? I wouldn't do the latter.

    • So far I've skipped learning it entirely. For things I want to learn, I learn the old-school way, maybe with an LLM as an unreliable thesaurus and/or second search engine (where I distrust its output but read its links). For things I just want to get done, I use an LLM. It's something close to blind trust, but not quite.

      For example, I've used LLMs to write ~1600 lines of Rust in the past few days. I'm having it make Ratatui bindings for Ruby. I've never learned Rust, but I can read C-like languages, so I kinda understand what's happening; I could tell when the code needed to be modularized. I have a sneaking suspicion most of the Rust tests it's written are testing Ratatui itself rather than its own bindings. But I've had the LLM cover the functionality in Ruby tests, a language I do know, so I've felt comfortable enough to ship it.
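
      To give a sense of the shape of the code, here's a minimal sketch of what a binding layer like this can look like, assuming the magnus crate (a common way to write Ruby extensions in Rust). The module and function names are illustrative, not my actual code:

          // Minimal sketch of exposing Rust to Ruby with magnus (assumed crate;
          // module and function names are illustrative, not the real bindings).
          use magnus::{function, prelude::*, Error, Ruby};

          // Thin wrapper the Ruby side can call; crossterm is the terminal
          // backend Ratatui commonly sits on top of.
          fn terminal_size() -> Result<(u16, u16), Error> {
              crossterm::terminal::size()
                  .map_err(|e| Error::new(magnus::exception::runtime_error(), e.to_string()))
          }

          #[magnus::init]
          fn init(ruby: &Ruby) -> Result<(), Error> {
              let module = ruby.define_module("RatatuiRb")?;
              module.define_singleton_method("terminal_size", function!(terminal_size, 0))?;
              Ok(())
          }

      The value of the Ruby-side tests is that they exercise exactly this surface (e.g. asserting that RatatuiRb.terminal_size returns a pair of positive integers) without my having to trust the Rust tests.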