Comment by echelon

4 days ago

How much energy does BFL have to keep playing this game against Google and ByteDance (SeeDream)?

If their new fancy model is only middle of the pack, and they're not as open source as the Chinese Qwen image models (or ByteDance / Alibaba / Lightricks video models), what's the point?

It's not just prompt adherence: the image quality of Flux models has been pretty bad. Plastic skin, inhumanly chiseled chins, that general faux "AI" aura.

Indeed, the Flux samples in your test suite that "pass" look God-awful. It might "pass" from a technical standpoint, but there's no way I'd choose Flux to solve my workflows. It looks bad.

(I wonder if they lack people on their data team with good aesthetic taste. It may be as simple as that.)

I think this company is struggling. They're pinned between Google and the Chinese. It's a tough, unenviable spot to be in.

I think a lot of the foundation model companies in media are having a really hard time: RunwayML, PikaLabs, LumaLabs. Some of them have pivoted hard away from solving media for everyone. I don't think they can beat the deep-pocketed hyperscalers or the Chinese ecosystem.

BFL just raised a massive round, so what do I know? I just can't help noting that Runway raised similar money and is struggling really hard now. And I would really not want to be fighting Google, who is already ahead in the game.

i may be wrong, but it doesn't seem like BFL is struggling to me. they were apparently founded in august 2024, and have already signed $100M+ revenue deals with customers like meta (https://www.bloomberg.com/news/articles/2025-09-09/meta-to-p...)

in fact, it seems like BFL has benefited a lot by becoming the go-to alternative for big enterprise customers who don't want to be dependent on google

  • Wow, I didn't hear about this. That's impressive, and kudos to the team.

    That's why they raised the massive round, then.

    But this just leads to more questions - I have to wonder if, and for how long, this is just plugging a gap in Meta's own AI product offering. At some point they'll want to build their own in-house models, or perhaps just acquire BFL. Zuckerberg would not be printing AI data centers if that weren't the case.

    From a PLG standpoint, Flux isn't really what graphics designers are choosing for their work. The generations look worse than OpenAI's "piss filter". But aesthetics might not be the play the team is going after.

    Hopefully they don't raise all of this dry powder and burn it trying to race Google. They should start listening to designers and get in their good graces if their intent is to build tools for art and graphic design work.

    A good press release would consist of lots of good-looking images and a video of workflows that save artists time. This press release doesn't connect with graphic designers at all; it reads as if they aren't even the audience.

    If it's something else, more "enterprise", that BFL is after, then maybe I don't know the strategy or game plan.

    • idk it seems pretty clear BFL’s target market is developers, not graphic designers. and for developers at scale like Meta and Adobe, it’s pretty incredible that a tiny startup like BFL has become the primary alternative to Google with 1/100th of the resources, within 12 months of its founding, doing hundreds of millions in revenue

      the Chinese models are great, but no serious enterprise developer is going to bet their at-scale production image workloads on Chinese models if this market evolves anything like past developer infrastructure did

    • Reading the post, the architectural change is combining a vision model (Mistral 3 in the flux.2 case) with a rectified flow transformer.

      I wonder if this architectural change makes it easier to use other vision models such as the ones in Llama 3 and 4, or possibly a future Llama 5.
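      To make the "vision model feeding a rectified flow transformer" idea concrete, here is a toy sketch of the rectified-flow half. This is an illustration, not BFL's actual code: `cond` stands in for whatever conditioning embedding the vision model produces, and the oracle `velocity` plays the role of the trained transformer. With straight noise-to-data paths, sampling is just Euler integration of the velocity field from t=0 (noise) to t=1 (image):

      ```python
      def velocity(x_t, t, cond):
          # Oracle straight-line velocity toward the conditioning target.
          # In rectified flow, a trained network approximates (x1 - x_t) / (1 - t)
          # along near-straight paths; here `cond` plays the role of the data x1.
          return [(c - x) / (1.0 - t) for c, x in zip(cond, x_t)]

      def sample(cond, x0, steps=10):
          # Euler integration from noise x0 at t=0 toward data at t=1.
          x, dt = list(x0), 1.0 / steps
          for i in range(steps):
              t = i * dt
              v = velocity(x, t, cond)
              x = [xi + vi * dt for xi, vi in zip(x, v)]
          return x
      ```

      The point of swapping the conditioning encoder (Mistral 3 today, hypothetically a Llama model tomorrow) is that only `cond` changes; the flow transformer's sampling loop is unchanged, though in practice the transformer is trained against one encoder's embedding space and would need retraining or adaptation for another.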

Sadly, I tend to agree. I'm rooting for BFL, but the results from this latest model (the Pro version, of all things) have just been a bit disappointing. Google’s release of NB Pro last week certainly didn’t help either, since it set the bar so incredibly high.

Flux 2 Pro scored only a single point higher than the Kontext models they released over half a year ago.

The text-to-image side was even more frustrating. It often felt like the model was actively fighting me, as evidenced by the high number of re-rolls required before it passed some of the tests (Cubed⁵, for example).