marquesine 1 day ago
Yes, that's the purpose of TOON: https://github.com/toon-format/toon

  koakuma-chan 1 day ago
  Is there evidence that LLMs adhere to this format better than to JSON? I doubt that.

    TheTaytay 1 day ago
    Their benchmarks compare it against other formats as input, not as output.

      koakuma-chan 21 hours ago
      Now it makes sense.

    iLoveOncall 1 day ago
    It is 100% guaranteed that they DON'T. TOON is 3 months old, it's not used by anyone, and it's therefore not in the training set of any model.

  prats226 1 day ago
  Nice, it would be a good idea to develop a CFG for this as well, so it can be embedded into all these constrained decoding libraries.
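A minimal sketch of the kind of CFG prats226 is suggesting, written as a Lark grammar in Python. The lark library is used here only as a stand-in for whatever grammar dialect a given constrained-decoding library accepts, and both the grammar and the users[2]{id,name,role} sample are illustrative assumptions covering just a flat tabular block, not the official TOON spec:

    # Sketch only: a toy grammar for a flat TOON-like tabular block.
    # The grammar and sample data are illustrative assumptions, not the TOON spec.
    from lark import Lark

    toon_grammar = r"""
        start: header row+
        header: NAME "[" INT "]" "{" NAME ("," NAME)* "}" ":" _NL
        row: _INDENT field ("," field)* _NL
        field: NAME | INT
        _INDENT: "  "
        _NL: /\n/
        NAME: /[A-Za-z_][A-Za-z0-9_]*/
        INT: /[0-9]+/
    """

    parser = Lark(toon_grammar, start="start")  # default Earley parser

    sample = "users[2]{id,name,role}:\n  1,Alice,admin\n  2,Bob,user\n"
    print(parser.parse(sample).pretty())  # parses cleanly, so the block is CFG-describable

Several constrained-decoding toolkits accept grammars in roughly this EBNF/Lark style, so in principle the same grammar string could be reused to constrain model output rather than just to validate it after the fact.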