Comment by germandiago
3 hours ago
> A lot of humans have difficulty with very reality that they are in fact biological machines, and most of what we do is the same thing.
I think we are far ahead of this "mix and match". A human can be much, much more unpredictable than these LLMs in the thinking process, if only because a human draws on a much bigger context. Contexts that are even outside the theoretical area of expertise where you are searching for a solution.
Good solutions from humans are potentially much more disruptive.
AI has all of human knowledge, and 100x more than that of just 'stuff', baked right in during pre-training, before a single token of 'context'.
It has way more 'general inherent knowledge' than any human, just as a starting point.
Yet they never give you replies like: "oh, you see how dolphins move through the water taking advantage of sea currents" when you are talking about boats and speed.
What they will do is find all the solutions someone already produced and mix and match around them. That is a mediocre way of approaching the problem, much closer to a search engine with mix and match than to thinking out of the box or specifically for your situation. (The latter is difficult anyway, because there will always be some detail missing in the context, and if you really had to dump all of that context from your brain each time, the tool would not be fast to use anymore.) Humans do this infinitely better. At least nowadays.
Now you will tell me that the info is there. So you can bias LLMs to think in more (or less) disruptive ways.
So now your job is to tweak the LLM until it behaves exactly how you want. But that is nearly impossible for every situation, because what you actually want is for it to behave differently depending on the context, not in a predefined way all the time.
At that point I wonder whether it is better to burn all your time tweaking and asking alternative LLMs questions that are not guaranteed to be reliable anyway, or to just keep learning about the domain yourself, absorbing real knowledge instead of playing at tweaking (and not losing that knowledge by replacing it with machines). It is just stupid to burn several hours building an "expert" whose output you cannot verify, instead of using that time to really learn about the problem itself.
This is a trade-off, and I think LLMs are good for stimulating human thinking fast. But they are not better at thinking or reasoning or any of that. And if you just rely on them, the only thing you will end up being professional at is prompting, which a 16-year-old untrained person can do almost as well as any of us.
LLMs can look better if you have no idea about the topic you are discussing. However, when you go and check, maybe the LLM hallucinated 10 or 15% of what it said.
So you cannot rely on it anyway. I still use them, but with a lot of care.
Great for scaffolding. Bad at anything that deviates from the average task.