Comment by namaria
1 day ago
Expert systems are not the problem per se.
The problem is that AI is very often a way of hyping software. "This is a smart product. It is intelligent." It implies lightning in a bottle, a silver bullet, a new thing that solves all your problems. But that is never true.
To create useful new stuff, to innovate, in a word, we need domain expertise and a lot of work. The world is full of complex systems and there are no shortcuts. Well, there are, but there is always a trade-off. You can pass it on (externalities), you can hide it (dishonesty), or you can use sleight of hand and pretend the upside is so good, it's magical, so just don't think about what it costs, ok? But it always costs something.
The promise of "expert systems" back then was creating "AI". It didn't happen. And there was an "AI winter" because people wised up to that shtick.
But then "big data" and "machine learning" collided in a big way. Transformers, "Attention Is All You Need", and then ChatGPT. People got this warm fuzzy feeling inside. These chatbots got impressive, and improved fast! It was quite amazing. It got A LOT of attention and has been driving a lot of investment. It's everywhere now, but it's becoming clear it is falling very short of "AI" once again. The promised land turned out once again to just be someone else's land.
So when people look at this attempt at AI and its limitations, and start wondering "hey, what if we did X", and X sounds just like what people were trying the last time we thought AI might be just around the corner... Well, let's just say I'm having déjà vu.
You're just making a point here that's totally different from what's relevant to this thread.
It's fine to have a hobby horse! I certainly have lots of them!
But I'm sorry, it's just not relevant to this thread.
Edit to add: To be clear, it may very well be a good point! It's just not what I was talking about here.
> Something that seems true to me is that LLMs are actually too smart
> I think it's an expert system
I respectfully disagree with the claim that my point is petty and irrelevant in this context.
I didn't say it's petty! I said it's not relevant.
My question at the beginning of the thread was: assuming people are using a particular pattern, where LLMs are used to parse prompts and route them to purpose-specific tools (which is what the thread I was replying in is about), is an LLM actually a good way to implement that routing layer, or mightn't we use something simpler?
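To make that concrete, here's a rough Python sketch of what I mean by "something simpler". The tool names and keyword rules are made up purely for illustration; the point is just that the routing layer itself can be a lookup table, with an LLM (if any) only as a fallback:

    import re

    # Purpose-specific tools (made-up stand-ins for real implementations).
    def weather_tool(prompt):
        return "weather tool handling: " + prompt

    def calendar_tool(prompt):
        return "calendar tool handling: " + prompt

    # The routing layer is just a table mapping patterns to tools.
    ROUTES = [
        (re.compile(r"\b(weather|forecast|rain)\b", re.I), weather_tool),
        (re.compile(r"\b(meeting|schedule|calendar)\b", re.I), calendar_tool),
    ]

    def route(prompt):
        # Dispatch to the first tool whose pattern matches the prompt.
        for pattern, tool in ROUTES:
            if pattern.search(prompt):
                return tool(prompt)
        return None  # this is where an LLM could act as a fallback router

    print(route("What's the weather tomorrow?"))
    # -> weather tool handling: What's the weather tomorrow?

The obvious trade-off: rules like these are cheap, fast, and deterministic, but brittle against rephrasing. My question is how much real traffic actually needs more than that.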
Your point seems more akin to questioning whether the entire concept of farming out to tools makes sense. Which is interesting, but just a different discussion.