Comment by kilobaud

8 days ago

Thanks, I can relate to the parent poster, and this is a really profound comment for me. I appreciate the way you framed this. I've felt compelled to fact-check my own LLM outputs, but I can't possibly keep up with the quantity, and it's tempting (though it seems irrational) to hand the results to a different LLM. My struggle is remembering that the inputs, queries, calculations, and logic all need validation, without getting distracted by all the other shiny new tokens in the result.