Comment by marquesine 22 days ago:
Yes, that's the purpose of TOON. https://github.com/toon-format/toon

  koakuma-chan 22 days ago:
  Is there evidence that LLMs adhere to this format better than to JSON? I doubt that.

    iLoveOncall 22 days ago:
    It is 100% guaranteed that they DON'T. TOON is three months old, it isn't used by anyone, and it's therefore not in the training set of any model.

    TheTaytay 22 days ago:
    Their benchmarks compare it against other formats as input, not as output.

      koakuma-chan 22 days ago:
      Now it makes sense.

  prats226 22 days ago:
  Nice. It would be a good idea to develop a CFG for this as well, so it can be embedded into all these constrained-decoding libraries.
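To illustrate the constrained-decoding idea in the last comment: a CFG for TOON would have to capture its tabular-array form (a `key[count]{fields}:` header followed by comma-separated rows). Below is a minimal sketch of a parser for just that fragment, written against the example syntax shown in the TOON README. The exact syntax rules here are assumptions drawn from those examples, not a spec-complete implementation.

```python
import re

# Assumed TOON tabular-array header, e.g. "users[2]{id,name}:"
HEADER = re.compile(r"^(\w+)\[(\d+)\]\{([\w,]+)\}:$")

def parse_toon_table(text):
    """Parse a single TOON tabular array into {key: [row dicts]}."""
    lines = text.strip().splitlines()
    m = HEADER.match(lines[0])
    if not m:
        raise ValueError("not a TOON tabular array")
    key, count, fields = m.group(1), int(m.group(2)), m.group(3).split(",")
    rows = []
    for ln in lines[1:]:
        values = ln.strip().split(",")
        if len(values) != len(fields):
            raise ValueError("row width does not match declared fields")
        rows.append(dict(zip(fields, values)))
    if len(rows) != count:
        raise ValueError("row count does not match declared length")
    return {key: rows}

sample = """users[2]{id,name}:
  1,Alice
  2,Bob"""
print(parse_toon_table(sample))
# → {'users': [{'id': '1', 'name': 'Alice'}, {'id': '2', 'name': 'Bob'}]}
```

The declared row count and field list are exactly the kind of context-sensitive constraints (row width must equal header width) that make embedding this into grammar-based decoders like llama.cpp's GBNF nontrivial: a pure CFG can enforce the shape of each row but not that the count matches the header.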