"cluster this data to try to detect groups of similar outcomes" is typically a fairly subjective task. If the objective algorithm optimizes for an objective criterion that doesn't match the subjective criteria that will be used to evaluate it, that objectivity is just as superficial.
I’m not sure I follow. Every clustering algorithm that’s not an LLM prompt has a well-defined, fully specified mathematical/computational procedure; no matter how complex, there's a perfectly concrete structure behind it, and whether or not you agree with its results doesn't change anything about them.
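Take k-means as an example: its entire behaviour is the minimization of one fully specified formula (the standard textbook objective, quoted here purely for illustration):

```latex
% Standard k-means objective: choose the partition S = {S_1, ..., S_k}
% that minimizes the within-cluster sum of squared distances to each
% cluster mean \mu_i.
\operatorname*{arg\,min}_{S} \sum_{i=1}^{k} \sum_{x \in S_i} \lVert x - \mu_i \rVert^2
```

Agree or disagree with the clusters it returns, the criterion they optimize is unambiguous.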
The results of an LLM are an arbitrary approximation of what a human would expect to see as the results of a query. In other words, the model correlates very well with human expectations and is very good at fooling you into believing its output. But can it give you results that you disagree with?
And more importantly, can you trust these results scientifically?
So that’s it then? We replace every well-understood, objective algorithm with well-hidden, fake, superficial surrogate answers from an AI?
"cluster this data to try to detect groups of similar outcomes" is typically a fairly subjective task. If the objective algorithm optimizes for an objective criterion that doesn't match the subjective criteria that will be used to evaluate it, that objectivity is just as superficial.
I’m not sure I follow. Every clustering algorithm that’s not an LLM prompt has a well-known, specified mathematical/computational functioning; no matter how complex, there's a perfectly concrete structure behind it, and whether you agree or not with its results doesn’t change anything about them.
The results of an LLM are an arbitrary approximation of what a human would expect to see as the results of a query. In other words, it correlates very well with human expectations and is very good at fooling you into believing it. But can it provide you with results that you disagree with?
And more importantly, can you trust these results scientifically?
2 replies →