Comment by wrsh07

13 hours ago

Not that this means the big AI corps should relax their values (it truly doesn't), but I would be extremely surprised if the DoD/DoW doesn't have anyone capable of fine-tuning an open-weights model for this purpose.

And, I mean, if they don't, gpt 5.3 is going to be pretty good help

Given the volume, fine-tuning a small model is probably the only cost-effective way to do it anyway.
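To make the volume argument concrete, here's a back-of-the-envelope sketch. All the per-token prices below are hypothetical placeholders (not real vendor rates), just to show how fast the gap compounds at bulk scale:

```python
# Rough cost comparison: frontier API vs. a fine-tuned small model,
# at bulk analysis volume. All prices are made-up placeholders.

def api_cost(tokens: int, price_per_m: float) -> float:
    """Cost of processing `tokens` tokens at `price_per_m` dollars per million."""
    return tokens / 1_000_000 * price_per_m

# Assume 10B tokens/month of analysis volume (hypothetical).
volume = 10_000_000_000

frontier = api_cost(volume, price_per_m=5.00)  # hypothetical frontier rate
small    = api_cost(volume, price_per_m=0.10)  # hypothetical mini/nano rate
hosted   = api_cost(volume, price_per_m=0.02)  # hypothetical self-hosted marginal cost

print(f"frontier:    ${frontier:,.0f}/month")
print(f"small API:   ${small:,.0f}/month")
print(f"self-hosted: ${hosted:,.0f}/month")
```

At any volume, the ratio between the rates is what matters; the absolute numbers here are illustrative only.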

Contrary to what benchmarks suggest, open-weight models are way behind the frontier.

  • My point is that you don't want a big model for the kind of analysis being discussed here

    Even if they were paying frontier prices, they would be choosing 5 mini or nano with no thinking

    At that point, a fine-tuned open-source model is going to be on the Pareto frontier
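The Pareto-frontier point can be sketched in a few lines: a model stays on the frontier if no other model is both cheaper and better. All model names and numbers below are made up for illustration:

```python
# Minimal cost/quality Pareto frontier over candidate models.
# Names, prices, and scores are hypothetical, for illustration only.

def pareto_frontier(models):
    """Return names of models not dominated by any other
    (dominated = another model is at least as cheap AND at least as good,
    and strictly better on one axis)."""
    frontier = []
    for name, cost, quality in models:
        dominated = any(
            c <= cost and q >= quality and (c < cost or q > quality)
            for _, c, q in models
        )
        if not dominated:
            frontier.append(name)
    return frontier

candidates = [
    # (name, $/M tokens, task score) -- all hypothetical
    ("frontier-large",     5.00, 0.95),
    ("frontier-mini",      0.40, 0.88),
    ("frontier-nano",      0.10, 0.80),
    ("open-weights-tuned", 0.05, 0.85),  # fine-tuned on the narrow task
    ("open-weights-base",  0.05, 0.70),  # dominated by the tuned variant
]

print(pareto_frontier(candidates))
# → ['frontier-large', 'frontier-mini', 'open-weights-tuned']
```

With numbers like these, the untuned open model and the nano tier both drop off the frontier, while the fine-tuned open model stays on it: that's the shape of the argument.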