Comment by Jerrrrrrry

1 year ago

An LLM would complain that its internal model does not reflect its current input/output.

Since LLMs know that people knock things off, run tests, run afoul of instructions, and make mistakes, it would then raise that as a possibility and likely inquire.

This isn't prompt engineering; it's grammar-constrained decoding. The model literally cannot respond with anything but tokens that fulfill the grammar.
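
For anyone unfamiliar with the term, here is a minimal sketch of the idea: at each decoding step the sampler masks out every token the grammar disallows, so the "choice" never exists. The vocabulary, grammar, and fake logits below are invented for illustration and don't correspond to any particular library's API.

```python
import math
import random

# Toy vocabulary and a trivial "grammar" that only accepts the literal
# answers "yes" or "no", followed by end-of-sequence.
VOCAB = ["yes", "no", "maybe", "I", "think", "<eos>"]

def allowed_tokens(generated):
    """Return the set of tokens the grammar permits next."""
    if not generated:
        return {"yes", "no"}   # first token must start a valid answer
    return {"<eos>"}           # then the sequence must terminate

def fake_logits():
    """Stand-in for a real model's next-token scores."""
    return {tok: random.uniform(-2.0, 2.0) for tok in VOCAB}

def constrained_sample(generated):
    logits = fake_logits()
    mask = allowed_tokens(generated)
    # Disallowed tokens get -inf, so softmax gives them zero probability.
    masked = {t: (v if t in mask else -math.inf) for t, v in logits.items()}
    exp = {t: math.exp(v) for t, v in masked.items() if v != -math.inf}
    total = sum(exp.values())
    r, acc = random.random() * total, 0.0
    for tok, weight in exp.items():
        acc += weight
        if r <= acc:
            return tok
    return next(iter(exp))

out = []
while not out or out[-1] != "<eos>":
    out.append(constrained_sample(out))
print(out)  # always ["yes", "<eos>"] or ["no", "<eos>"], never "maybe"
```

However much the underlying scores favor some other continuation, the masked tokens can never be emitted, which is why the model can't "complain" its way out of the grammar.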