Comment by godelski

2 days ago

I agree with both of you and disagree (as my earlier comment implies).

Expert systems can be quite useful, especially when they're backed by an extensive knowledge base. But the major issue with expert systems is that you generally need to be an expert to evaluate them.

That's the major issue with LLMs today. They're trained on human preference, and unfortunately we humans prefer incorrect things that sound/look correct over correct things that sound/look incorrect. That means they're effectively optimized to produce errors that are hard to detect. They can provide lots of help to very junior people, who are far from expert, but the returns diminish as you gain expertise, and they can actually increase your workload if you're concerned with details.
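To make that mechanism concrete, here's a toy sketch (my own illustration, not from any particular codebase) of the Bradley-Terry pairwise loss commonly used to train RLHF reward models. The point is that "chosen" just means "what raters preferred," so plausible-looking wrong answers get rewarded like correct ones:

    import math

    def preference_loss(reward_chosen, reward_rejected):
        # Bradley-Terry pairwise loss used in RLHF reward modeling:
        # -log(sigmoid(r_chosen - r_rejected)). "Chosen" is whichever
        # answer human raters preferred, correct or not.
        return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

    # If raters pick the plausible-but-wrong answer, minimizing this loss
    # teaches the reward model to score surface plausibility, not correctness.
    print(preference_loss(2.0, 0.5))  # ~0.20: model agrees with raters, low loss
    print(preference_loss(0.5, 2.0))  # ~1.70: model disagrees, penalized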

They can provide a lot of help, but the people most vocal about their utility usually either aren't aware of these issues or don't acknowledge them when talking about how to use these tools effectively. Then again, that may just be because it's easy to be tricked. As Feynman said, the first principle is that you must not fool yourself, and you are the easiest person to fool.

Personally, I'm wary of tools that mask errors. IMO a good tool makes errors loud and noticeable, complementing its user. I'll admit LLM coding feels faster because it reduces my cognitive load while the code is being written, but when I actually time myself I find it usually takes longer: I spend more time debugging and end up less aware of how the system behaves as a whole. So I'll use it for advice, but I have yet to be able to hand over trust. Even though I can't trust a junior engineer's work, I can trust that they'll learn and listen.