Comment by jjmarr

2 days ago

Seeing a task-specific model be consistently better at anything is extremely surprising given rapid innovation in foundation models.

Have you tried Aristotle on other, non-Lean tasks? Is it better at logical reasoning in general?

Is it, though? There's a reason GPT has Codex variants: RL on a specific task raises performance on that task.

  • Post-training doesn't transfer when a new base model arrives, so anyone who adopted a task-specific LLM gets burned when a new generational advance comes out.

    • Resources permitting, if you're chasing the frontier of some more niche task, you just redo your training regime on the new-gen LLMs.