Comment by d4rkn0d3z

4 hours ago

Are you sure you are not comparing to human unreason?

Most of what humans think of as reason is actually "will to power". The capability to use our faculties in a way that produces logical conclusions seems like an evolutionary accident, an off-label use of the brain's machinery for complex social interaction. Most people never learn to catch themselves exercising will to power when they intended to engage in reason; some don't know the difference. Fortunately, reason provides a means of self-correction, and the research here hopes to elucidate whether an LLM-based reasoning system has the same property.

In other words, given consistent application of reason I would expect a human to eventually draw logically correct conclusions, decline to answer, rephrase the question, etc. But with an LLM, should I expect a non-deterministic infinite walk through plausible nonsense? I expect reasoning to converge.