Comment by ehnto
1 day ago
It's the backwards reasoning that really frustrates me when using LLMs. You ask a question, it says sure, do these things; they don't work out, and when you ask the LLM why not, it replies that yes, the thing it told you to do wouldn't work, for these clear reasons.
It would be nice to start at that end of the chain of reasoning instead of the other.
Another regular example is when it "invents" functions or classes that don't exist; when pressed about them, it will reply that of course that won't work, since the function doesn't exist.
"Okay, great, so don't tell me it exists with such certainty" is what I would tell a human who kept feeding me imagination as fact. But of course an LLM is not reasoning in the same sense, so this reversed chain of thought is the outcome.
I am finding LLMs far more useful for soft-skill topics than engineering-type work, simply because of how often they lead me down a path that eventually dead-ends thanks to some small detail that was wrong at the very beginning.
> I am finding LLMs far more useful for soft-skill topics than engineering-type work, simply because of how often they lead me down a path that eventually dead-ends thanks to some small detail that was wrong at the very beginning.
Yeah, I felt the same way in the beginning, which is why I ended up writing my own chat app. What I've found while developing my spelling and grammar checker is that it's very unlikely for multiple LLMs to mess up in the same way at the same time. I know each of them will mess up, but I'm also pretty sure they won't all mess up at once.
So far, I've been able to create working features that actually saved me time by pitting LLMs against their own responses and each other's. My process right now is: I'll ask 6+ models to implement something, and then I'll ask models to evaluate everyone's responses. More often than not, a model will find a fault or make a suggestion that can be used to improve the prompt or the code. Depending on my confidence level, I might repeat this a couple of times.
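For what it's worth, here's a minimal sketch of that loop in Python. The `ask(model, prompt)` helper is hypothetical (a stand-in for whatever chat API you use), and the model names, prompt wording, and round count are illustrative only:

```python
# Minimal sketch of the cross-review loop described above.
# ask(model, prompt) is a hypothetical helper wrapping whatever
# chat API you actually use; model names are purely illustrative.

MODELS = ["model-a", "model-b", "model-c", "model-d", "model-e", "model-f"]

def ask(model: str, prompt: str) -> str:
    """Placeholder: send `prompt` to `model` and return its reply."""
    raise NotImplementedError

def cross_review(task: str, rounds: int = 2) -> dict[str, str]:
    # Round 0: collect an independent attempt from every model.
    answers = {m: ask(m, task) for m in MODELS}
    for _ in range(rounds):
        # Show each model everyone's answers and ask it to find faults.
        bundle = "\n\n".join(f"--- {m} ---\n{a}" for m, a in answers.items())
        critiques = [
            ask(m, f"Task:\n{task}\n\nCandidate solutions:\n{bundle}\n\n"
                   "Point out bugs, missing cases, or better approaches.")
            for m in MODELS
        ]
        # Fold the critiques back into a revised answer from each model.
        feedback = "\n\n".join(critiques)
        answers = {
            m: ask(m, f"Task:\n{task}\n\nYour previous answer:\n{answers[m]}\n\n"
                      f"Reviews from other models:\n{feedback}\n\n"
                      "Revise your answer, or keep it if the reviews are wrong.")
            for m in MODELS
        }
    return answers
```

Each round is just "critique everyone, then revise", so the confidence-based repetition I mentioned is the `rounds` parameter.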
The issue right now is tracking this "chain of questioning", which is why I am writing my own chat app: I need an easy way to backtrack and fork from different points in the chain. I think once we get a better understanding of what LLMs can and can't do as a group, we'll be able to produce working solutions more easily.
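One way to model that kind of forking (the class and method names below are mine, not from any existing library): store each turn as a node pointing at its parent, so any earlier turn can become the root of a new branch.

```python
# Sketch of a forkable "chain of questioning": every turn points at
# its parent, so any earlier turn can start a new branch.
# Names here are illustrative, not from any existing library.

from dataclasses import dataclass, field

@dataclass
class Turn:
    prompt: str
    response: str
    parent: "Turn | None" = None
    children: list["Turn"] = field(default_factory=list)

    def extend(self, prompt: str, response: str) -> "Turn":
        # Continue from this turn; calling this on an *earlier* turn
        # instead of the latest one is how a fork happens.
        child = Turn(prompt, response, parent=self)
        self.children.append(child)
        return child

    def history(self) -> "list[Turn]":
        # Walk back to the root to rebuild the context for this branch.
        chain = []
        node = self
        while node is not None:
            chain.append(node)
            node = node.parent
        return list(reversed(chain))

# Usage: forking is just extend() on an older node.
# root = Turn("Implement X", "...model reply...")
# a = root.extend("Fix the edge case", "...")
# b = root.extend("Try a different approach", "...")  # fork from root
```

Backtracking then falls out for free: each leaf's `history()` gives the exact context to resend for that branch.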
I believe this is what chain-of-thought models attempt to address.