marquesine 1 month ago
Yes, that's the purpose of TOON. https://github.com/toon-format/toon

koakuma-chan 1 month ago
Is there evidence that LLMs adhere to this format better than to JSON? I doubt that.

iLoveOncall 1 month ago
It is 100% guaranteed that they DON'T. TOON is three months old, it isn't used by anyone, and it is therefore not in the training set of any model.

TheTaytay 1 month ago
Their benchmarks compare it against other formats as input, not as output.

koakuma-chan 1 month ago
Now it makes sense.

prats226 1 month ago
Nice. It would also be a good idea to develop a CFG for this, so it can be embedded into all these constrained-decoding libraries.
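To make the CFG suggestion concrete, here is a minimal sketch of the kind of grammar a constrained-decoding library would enforce. The syntax assumed here (a header like `users[2]{id,name}:` followed by that many comma-separated rows) is a simplified illustration, not the official TOON specification, and the recognizer is a toy stand-in for a real grammar-constrained decoder.

```python
# Toy recognizer for a TOON-like tabular block. The grammar is an
# assumption for illustration, not the real TOON spec:
#   block  := header NEWLINE row{N}
#   header := name "[" N "]" "{" field ("," field)* "}" ":"
#   row    := value ("," value)*   (one value per declared field)
import re

HEADER = re.compile(r"^(\w+)\[(\d+)\]\{([\w,]+)\}:$")

def recognize(block: str) -> bool:
    """Return True if `block` matches the toy grammar above:
    a header line, then exactly N rows with the declared field count."""
    lines = block.strip().splitlines()
    if not lines:
        return False
    m = HEADER.match(lines[0])
    if not m:
        return False
    row_count = int(m.group(2))
    fields = m.group(3).split(",")
    rows = lines[1:]
    if len(rows) != row_count:
        return False
    # Every row must carry exactly as many values as declared fields.
    return all(len(r.strip().split(",")) == len(fields) for r in rows)

sample = """users[2]{id,name}:
  1,alice
  2,bob"""
print(recognize(sample))  # True
```

A real constrained-decoding integration would compile a grammar like this into a token-level mask applied at each decoding step, so the model can only emit continuations that keep the output inside the grammar.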