Comment by jononor
2 days ago
That expert systems were not financially viable in the '80s and '90s does not mean that LLMs-as-modern-expert-systems cannot be now. The addressable market for such solutions has expanded enormously since then, thanks to PCs, servers, and mobile phones becoming ubiquitous. The opportunity is likely 1000x (or more) what it was back in those days, which shifts the equation a lot as to what is viable or not.
BTW, have you tried out your challenges with an LLM? I expect them to be tricky to answer directly. But the systems are getting quite good at code synthesis, which seems suitable. And I even see some MCP implementations for using constraint solvers as tools.
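To make the "constraint solver as a tool" idea concrete, here's a minimal sketch of the kind of function such a tool might wrap. This is my own illustration (a brute-force solver; the name `solve_csp` and the toy problem are hypothetical), not any particular MCP implementation, which would use a real solver like Z3 under the hood:

```python
from itertools import product

def solve_csp(variables, domains, constraints):
    """Brute-force search over all assignments; returns the first
    assignment satisfying every constraint, or None."""
    names = list(variables)
    for values in product(*(domains[v] for v in names)):
        assignment = dict(zip(names, values))
        if all(check(assignment) for check in constraints):
            return assignment
    return None

# Toy problem: x + y == 10 and x < y, with both in 0..9.
result = solve_csp(
    variables=["x", "y"],
    domains={"x": range(10), "y": range(10)},
    constraints=[
        lambda a: a["x"] + a["y"] == 10,
        lambda a: a["x"] < a["y"],
    ],
)
print(result)  # first solution found: {'x': 1, 'y': 9}
```

The point of exposing something like this as a tool is that the LLM only has to translate the problem into variables, domains, and constraints; the solver does the exact reasoning the LLM itself is bad at.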
I agree with both of you and disagree (as my earlier comment implies).
Expert systems can be quite useful, especially when there's an extended knowledge base. But the major issue with expert systems is that you generally need to be an expert to evaluate them.
That's the major issue with LLMs today. They're trained on human preference. Unfortunately, we humans prefer incorrect things that sound/look correct over correct things that sound/look incorrect. So that means they're optimizing so that errors are hard to detect. They can provide lots of help to very junior people, because those people are far from expert, but the returns diminish, and they can increase workload if you're concerned with details.
They can provide a lot of help, but the people most vocal about their utility usually either aren't aware of these issues or don't acknowledge them while talking about how to use the tools effectively. But then again, that could just be because you can be tricked. As Feynman said, the first principle is that you must not fool yourself, and you're the easiest person to fool.
Personally, I'm wary of tools that mask errors. IMO a good tool makes errors loud and noticeable, to complement the tool user. I'll admit LLM coding feels faster, because it reduces my cognitive load while code is being written. But if I actually time myself, I find it usually takes longer: I spend more time debugging and am less aware of how the system acts as a whole. So I'll use it for advice but have yet to be able to hand over trust. Even though I can't trust a junior engineer's output, I can trust that they'll learn and listen.
> BTW, have you tried out your challenges with a LLM?
I have; they whiff pretty hard.
> getting quite good at code synthesis
There was a post yesterday about vibe coding BASIC. It pretty much reflects my experience with code for SBCs.
I run Home Assistant; it's got tons of sample code out there, so there's lots for the LLM to have in its data set. Here they thrive.
It's a great tool with some known hard limits. It's great at spitting back language, but it clearly lacks knowledge. It clearly lacks the reasoning to understand the transitive properties of things, leaving it weak at the edges.