Comment by latentspacer

3 days ago

I may be wrong, but it doesn't seem like BFL is struggling to me. They were apparently founded in August 2024 and have already signed $100M+ revenue deals with customers like Meta (https://www.bloomberg.com/news/articles/2025-09-09/meta-to-p...)

In fact, it seems like BFL has benefited a lot from becoming the go-to alternative for big enterprise customers who don't want to be dependent on Google.

Wow, I didn't hear about this. That's impressive, and kudos to the team.

That's why they raised the massive round, then.

But this just leads to more questions - I have to wonder whether, and for how long, this is just going to plug a gap in Meta's own AI product offering. At some point they'll want to build their own in-house models or perhaps just acquire BFL. Zuckerberg would not be printing AI data centers if that weren't the case.

From a PLG standpoint, Flux isn't really what graphics designers are choosing for their work. The generations look worse than OpenAI's "piss filter". But aesthetics might not be the play the team is going after.

Hopefully they don't just raise all of this dry powder and burn it trying to race Google. They should start listening to designers and get in their good graces if their intent is to build tools for art and graphic design work.

A good press release would consist of lots of good-looking images and a video of workflows that save artists time. This press release doesn't connect with graphic designers at all, and it reads as if they aren't even the audience.

If it's something else, more "enterprise", that BFL is after, then maybe I don't know the strategy or game plan.

  • idk, it seems pretty clear BFL's target market is developers, not graphic designers. And for developers at scale like Meta and Adobe, it's pretty incredible that a tiny startup like BFL has become the primary alternative to Google with 1/100th of the resources, within 12 months of their founding, doing hundreds of millions in revenue.

    The Chinese models are great, but no serious enterprise developer is going to bet their image workloads at scale in production on Chinese models, if the market evolves anything like past developer infrastructure.

    • How is an image generation model serving the market of...developers? I mean I know we all focus on these models and get excited about what they can do. But why would we pay for them for more than a few tests?

  • Reading the post, the architectural change is combining a vision model (Mistral 3, in the Flux.2 case) with a rectified flow transformer.

    I wonder if this architectural change makes it easier to use other vision models such as the ones in Llama 3 and 4, or possibly a future Llama 5.
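    To make the swappability question concrete, here is a minimal toy sketch (not BFL's actual architecture; all names and shapes are illustrative assumptions) of why the encoder could in principle be swapped: if the flow transformer only ever sees a fixed-width conditioning tensor, any frozen encoder can sit behind a small per-encoder projection.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    COND_DIM = 64  # hypothetical width the flow transformer expects

    def encoder_a(tokens):
        """Stand-in for one frozen VLM's token embeddings (e.g. Mistral 3), width 96."""
        return rng.standard_normal((len(tokens), 96))

    def encoder_b(tokens):
        """Stand-in for a different frozen VLM (e.g. a Llama), width 128."""
        return rng.standard_normal((len(tokens), 128))

    def project(emb, w):
        """Per-encoder adapter mapping the encoder's native width -> COND_DIM."""
        return emb @ w

    def rectified_flow_pair(x1, rng):
        """Rectified flow training pair: sample x_t on the straight line from
        noise x0 to data x1; the regression target is the velocity v = x1 - x0."""
        x0 = rng.standard_normal(x1.shape)
        t = rng.uniform()
        x_t = (1 - t) * x0 + t * x1
        return x_t, x1 - x0, t

    tokens = ["a", "red", "fox"]
    w_a = rng.standard_normal((96, COND_DIM))   # adapter for encoder A
    w_b = rng.standard_normal((128, COND_DIM))  # adapter for encoder B

    cond_a = project(encoder_a(tokens), w_a)    # shape (3, 64)
    cond_b = project(encoder_b(tokens), w_b)    # shape (3, 64) -- same interface

    x1 = rng.standard_normal((COND_DIM,))       # toy "image latent"
    x_t, v_target, t = rectified_flow_pair(x1, rng)
    ```

    The flow model never needs to know which encoder produced `cond_a` or `cond_b`; in practice the catch is that the adapter (and likely the flow model) would need retraining for each new encoder's embedding space.
    
    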