Comment by andy99

1 hour ago

There are a bunch (currently 3) of examples of people getting funny output, two of which say it’s in LM Studio (I don’t know what that is). It does seem likely that it’s somehow being misused here and the results aren’t representative.

Definitely. Usually I'd wait 2-3 weeks for the ecosystem to catch up and iron out the kinks, or do what I did for GPT-OSS: fix it in the places where it's broken, then judge it once I'm sure it's actually being used correctly.

Otherwise, during that early period, only use the scripts/tools provided by the people releasing the model itself; that's probably the only way in those first 2-3 weeks to be sure you're actually getting the expected responses.
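
Concretely, a minimal sketch of what "use the release's own flow" looks like, assuming the model ships on Hugging Face (the model id below is a placeholder, not a real checkpoint): loading through the reference transformers path applies the chat template bundled with the tokenizer, which is exactly the part that third-party front-ends tend to get wrong in the first weeks.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/new-model"  # placeholder; use the id from the release announcement

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]

# apply_chat_template uses the template shipped in the tokenizer config,
# so the prompt is formatted the way the model was trained to expect.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the model's reply.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If that produces sensible output while a GUI front-end produces garbage on the same checkpoint, the problem is in the front-end's template/config handling, not the model.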