Comment by biophysboy
9 hours ago
The way I put this to myself is that AI gives “correct correct answers and incorrect correct answers”.
They almost always generate logically coherent text, but sometimes that text rests on implicit assumptions and decisions that may not be valid for the use case.
Generating a correct correct solution requires proper definition of the problem, which is arguably more challenging than creating the solution.
The way I phrase this to others is: Language models produce linguistically valid sentences, not factually correct sentences.
It’s simpler than that: it’s a guessing machine with superior access to a huge amount of information and the capacity to process it at a speed no human can match.
Does that make it better than us? No, because ultimately the thing itself doesn’t ‘know’ right from wrong.
Better according to what standard?
The standard of most employment is already to produce mediocre, plausible outputs as cheaply and rapidly as possible. It's a match made in heaven!
I used to think otherwise, but the older I get the more I think you are correct on this one.
Yeah, very often the issue is that some context is missing. It'll say something true that misses the bigger point, or that leads to a suboptimal result. Or it interprets an ambiguous question one specific way when the other meaning makes more sense. You have to keep your wits about you to catch these things.
It's an incredible tool, but it's also very derpy sometimes, full of biases, blind spots, etc.