
Comment by andrekandre

11 hours ago

  > But for most tasks, a fleet of tailor-made smaller models being called on by an agent seems like a solidly-precedented (albeit not singularity-triggering) bet.

not an expert by any means, but wouldn't smaller, highly refined models also produce more reproducible results?

intuitively it sounds akin to the unix model: small tools that each do one thing well, composed by something above them...
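a rough sketch of what that unix-style composition might look like: an "agent" routing each sub-task to a dedicated small model. everything here (the model names, the routing table) is made up purely for illustration, not any real library's API:

```python
# Hypothetical sketch: an "agent" dispatching sub-tasks to small,
# task-specific models. All names here are invented stand-ins.

def sentiment_model(text: str) -> str:
    # stand-in for a small fine-tuned sentiment classifier
    return "positive" if "good" in text.lower() else "negative"

def summarizer_model(text: str) -> str:
    # stand-in for a small fine-tuned summarizer: keep first sentence
    return text.split(".")[0] + "."

# one small tool per job, composed by the caller (unix-style)
SPECIALISTS = {
    "sentiment": sentiment_model,
    "summarize": summarizer_model,
}

def agent(task: str, text: str) -> str:
    # route each task to its dedicated model
    return SPECIALISTS[task](text)

print(agent("sentiment", "This is good. Really."))  # -> positive
print(agent("summarize", "This is good. Really."))  # -> This is good.
```

since each specialist is narrow and fixed, its behavior on a given input is easier to pin down than one big general model's.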

But then again, the main selling point of using LLMs as part of code that solves a business need is that you don't have to fine-tune a use-case-specific model (like in the mid-2010s); you just prompt engineer a bit and it often magically works.
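for contrast, the "just prompt engineer" approach sketched below uses one general model steered per use case by the prompt alone. `complete_fn` is a hypothetical stand-in for any LLM completion call; none of these names come from a real API:

```python
# Hypothetical sketch: one general model, specialized only via the
# prompt (no fine-tuning). All names are invented for illustration.

def classify_ticket(complete_fn, ticket: str) -> str:
    # "prompt engineering": the use case lives entirely in the prompt
    prompt = (
        "You are a support triage assistant.\n"
        "Label the ticket as 'billing', 'bug', or 'other'.\n"
        f"Ticket: {ticket}\nLabel:"
    )
    return complete_fn(prompt).strip()

# fake completion function so the sketch runs without any API
def fake_llm(prompt: str) -> str:
    return " billing" if "invoice" in prompt else " other"

print(classify_ticket(fake_llm, "My invoice is wrong"))  # -> billing
```

swapping in a new use case here means editing a string, not collecting data and training a model, which is the convenience being traded against the reproducibility point above.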