Comment by embedding-shape

4 days ago

I didn't read "major failed training run" as in "the process crashed and we lost all data" but more like "After spending N weeks on training, we still didn't achieve our target(s)", which could be considered "failing" as well.

They could have done what Lightricks did with LTX-1 - build almost embarrassingly small models in the open and iteratively improve from learning.

LTX's first model felt two years behind SOTA when it launched, but they viewed it as a success and kept going.

The initial investment is low and can be scaled up with confidence.

BFL goes radio silent and then drops stuff. Now they're dropping stuff that is clearly middle of the pack.

  • Going from launching SOTA models to launching "embarrassingly small models" isn't something investors are generally into, especially when deciding which training runs to launch and how to parameterize them. And since BFL has investors, they have to make choices that try to maximize ROI for those investors rather than for the community at large, so this is hardly surprising.