Comment by crazygringo
4 days ago
> The stricter blends of reasoning are what everybody is so desperate to evoke from LLMs
This is definitely not true for me. My prompts frequently contain instructions that aren't perfectly clear and that suggest what I want rather than formally specifying it, along with typos, mistakes, etc. The fact that the LLM usually figures out what I meant to say, like a human would, is a feature for me.
I don't want an LLM to act like an automated theorem prover. We already have those. Their strictness makes them extremely difficult to use, so their application is extremely limited.