Comment by marcusb

1 day ago

I’m puzzled when I hear people say ‘oh, I only use LLMs for things I don’t understand well. If I’m an expert, I’d rather do it myself.’

In addition to being able to review the output effectively, I find that the more closely I can describe what I want the way another expert in that domain would, the better the LLM's output. Which isn't really that surprising for a statistical text generation engine.

I guess it depends. In some cases, you don't have to understand the black box code it gives you, just that it works within your requirements.

For example, I'm horrible at math, always have been, so writing math-heavy code is difficult for me; I'll confess to not understanding the math well enough. If I'm coding with an LLM and having it write math-heavy code, I write a bunch of unit tests describing what I expect the function to return, write a short description, and give both to the LLM. Once the function is written, I run the tests, and if they pass, great.
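Something like this, as a minimal sketch (Python just for illustration; the shortest_angle_delta function and the angle-wrapping task are made-up stand-ins for the kind of game math I mean):

    # Test-first loop: the tests are written by me up front, the function body
    # is whatever the LLM hands back, judged only by whether the tests pass.
    import math

    def shortest_angle_delta(a_deg: float, b_deg: float) -> float:
        """Smallest signed rotation (in degrees) taking angle a to angle b."""
        # Body as the LLM might return it; I don't need to fully grok the modulo trick.
        return (b_deg - a_deg + 180.0) % 360.0 - 180.0

    def test_shortest_angle_delta():
        assert shortest_angle_delta(0, 90) == 90
        assert shortest_angle_delta(90, 0) == -90
        assert shortest_angle_delta(350, 10) == 20    # wraps across 0
        assert shortest_angle_delta(10, 350) == -20
        assert math.isclose(shortest_angle_delta(180, -180), 0.0)

    if __name__ == "__main__":
        test_shortest_angle_delta()
        print("tests pass")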

I might not 100% understand what the function does internally, and it's not used for any life-preserving stuff either (I typically end up dealing with math for games), but I do understand what it outputs and what I need to input, and in many cases that's good enough. Working in a company, or with people smarter than you, tends to put you in this situation anyway, LLMs or not.

Though if in the future I end up needing to change the math-heavy stuff in the function, I'm kind of locked into using LLMs for understanding and changing it, which obviously feels less good. But the alternative is not doing it at all, so another tradeoff I suppose.

I still wouldn't use this approach for essential/"important" stuff, just for utility functions.

  • Would you rather it be done incorrectly when others are expecting correctness, or not at all? I would choose not at all.

    • Well, given that the context is math in video games, I guess I'd choose "not at all" if there were no way for me to verify whether it's correct. But since I can validate it, I guess I'd choose to do it, although without fully understanding the internals.

That's how we handle most other things in our lives, though: we outsource them. Why would it be different with LLMs?

People don't learn how a car works before buying one; they just take it to a mechanic when it breaks. Most people don't know how to build a house; they have someone else build it and assume it was done well.

I fully expect people to similarly have LLMs do the things they don't know how to do themselves, and to assume the machine knew what it was doing.

  • > why would it be different with LLMs?

    Because LLMs are not competent professionals to whom you might outsource tasks in your life. LLMs are statistical engines that make up answers all the time, even when the LLM “knows” the correct answer (i.e., has the correct answer hidden away in its weights).

    I don’t know about you, but I’m able to validate something is true much more quickly and efficiently if it is a subject I know well.

    • > competent professionals

      That term needs a lot of clarity and definition if you want to claim that LLMs aren't competent professionals. I assume we'd ultimately agree that they aren't, but I'd add that many humans paid for a task aren't competent professionals either, and, more importantly, that I can't distinguish the competent professionals from the rest without being competent enough in the topic myself.

      My point was that people have a long history of outsourcing to someone else, often to someone they have never met and never will. We do it for things that we have no real idea about and trust that the person doing it must have known what they were doing. I fully expect people to end up taking the same view of LLMs.
