Comment by marquesine 21 days ago
Yes, that's the purpose of TOON. https://github.com/toon-format/toon

koakuma-chan 21 days ago
Is there evidence that LLMs adhere to this format better than to JSON? I doubt that.

iLoveOncall 21 days ago
It is 100% guaranteed that they DON'T. TOON is 3 months old, it's not used by anyone, and it's therefore not in the training set of any model.

TheTaytay 21 days ago
Their benchmarks compare it against other formats as input, not as output.

koakuma-chan 21 days ago
Now it makes sense.
prats226 21 days ago
Nice, it would be a good idea to develop a CFG for this as well, so it can be embedded into all these constrained decoding libraries.
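To illustrate prats226's suggestion, here is a minimal sketch of a context-free grammar for a simplified, single-table subset of TOON (a `name[count]{fields}:` header followed by indented CSV-like rows). The grammar rules, rule names, and the use of the `lark` parsing library are assumptions made for illustration, not the official TOON grammar; a constrained-decoding library would want the same rules rewritten in its own notation (e.g. GBNF).

    # A minimal sketch, assuming a simplified single-table subset of TOON:
    # a `name[count]{fields}:` header followed by indented CSV-like rows.
    # Not the official TOON grammar; it only shows that the format is
    # CFG-describable, which is the property constrained decoding needs.
    from lark import Lark

    TOON_TABLE_GRAMMAR = r"""
        start: header row+
        header: NAME "[" INT "]" "{" fields "}" ":" _NL
        fields: NAME ("," NAME)*
        row: _PAD value ("," value)* _NL
        value: /[^,\n]+/
        _PAD: "  "
        NAME: /[A-Za-z_][A-Za-z0-9_]*/
        INT: /[0-9]+/
        _NL: /\r?\n/
    """

    # Earley with the dynamic lexer tolerates the overlap between NAME and
    # the free-form `value` terminal.
    parser = Lark(TOON_TABLE_GRAMMAR, start="start", parser="earley", lexer="dynamic")

    sample = "users[2]{id,name,role}:\n  1,Alice,admin\n  2,Bob,user\n"

    # Parsing succeeds only if `sample` conforms to the grammar; the same
    # rules, ported to a library-specific notation, could constrain a model
    # so it can only emit well-formed output.
    print(parser.parse(sample).pretty())

The same rules could be ported to whatever grammar dialect a given constrained-decoding backend accepts; the parser here is only a cheap way to check that the grammar actually covers the sample.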