
Comment by 0x457

5 days ago

I have this toy agent I'm writing, and I always laugh that I, a human, write code that generates human-readable markdown, which I feed to an LLM asking it to produce JSON, which I then parse (with code I or it wrote) and render back into a consistent human-readable form.

I'm thinking about letting it output freeform text and then using another model to force that into structured output.

I've found this approach does bring slightly better results. Let the model "think" in natural language, then translate its conclusions to JSON. (Vibe checked, not benchmarked.)
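Something like this, as a minimal sketch of the two-pass setup; `call_llm` is a hypothetical wrapper around whatever client you use, and the prompts and JSON keys are made up:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM client you use."""
    raise NotImplementedError

def answer_structured(markdown_context: str) -> dict:
    # Pass 1: let the model reason in freeform natural language.
    reasoning = call_llm(
        "Read the notes below and work through the task in plain prose:\n\n"
        + markdown_context
    )

    # Pass 2: a second call only translates those conclusions into JSON.
    raw = call_llm(
        "Convert the conclusions below into a JSON object with keys "
        '"summary" (string) and "actions" (list of strings). '
        "Output only the JSON.\n\n" + reasoning
    )

    # Parse and validate; retry or repair here if the model adds stray text.
    return json.loads(raw)
```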

IIRC YAML is easier for models than JSON because you don't need as much recursive syntax.

  • I doubt this is true anymore, if it ever was. Both require string escaping, which is the real hurdle (the sketch after this thread compares the two), and models are heavily trained on JSON for tool calling.

    • I believe it could be true, because I think the training data contained a lot more YAML than JSON. I mean... you know how much YAML gets churned out every second?
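For what it's worth, the escaping difference is easy to eyeball. A quick sketch using the stdlib json module and PyYAML to serialize the same payload both ways:

```python
import json
import yaml  # PyYAML

# A string with quotes and a newline: the kind of content that trips up
# model-generated output in either format.
data = {"message": 'He said "hi"\nand left', "tags": ["a", "b"]}

# JSON always quotes strings and backslash-escapes the quote and the newline.
print(json.dumps(data, indent=2))

# YAML quotes only when it has to; compare how it represents the same string.
print(yaml.safe_dump(data))
```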