Comment by jcelliott

20 hours ago

Hey, engineer at Oxen.ai here. Glad the fine-tuning worked well for this project! If anyone has questions on that part of it we would be happy to answer.

We have a blog post on a similar workflow here: https://www.oxen.ai/blog/how-we-cut-inference-costs-from-46k...

On the inference cost and speed: we're actively working on that and have a pretty massive upgrade there coming soon.

I didn't know about Oxen.ai. I took a look at your docs on how to fine-tune LLMs:

> Oxen.ai supports datasets in a variety of file formats. The only requirement is that you have a column where each row is a list of messages. Each message is a dictionary with a role and content key. The role can be “system”, “user”, or “assistant”. The content is the message content.

Oh, so you're forced to use the ChatML multi-turn conversation format.
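For anyone curious what that requirement looks like concretely, here is a minimal sketch of one dataset row in the format the docs describe (the JSONL storage and the "messages" column name are assumptions on my part, not taken from the docs):

```python
import json

# Hypothetical example row: a "messages" column where each entry is a
# dict with "role" and "content" keys, and role is one of
# "system", "user", or "assistant".
row = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "4"},
    ]
}

# One row per line (JSONL) is one common way to store such a dataset.
line = json.dumps(row)
parsed = json.loads(line)

# Every message must use one of the three allowed roles.
allowed_roles = {"system", "user", "assistant"}
assert all(m["role"] in allowed_roles for m in parsed["messages"])
```

Even a single-turn instruction/response dataset has to be wrapped in this list-of-messages shape, which is what I mean by being forced into the multi-turn conversation format.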