Comment by antonvs

17 hours ago

Presumably this is all referring to Alexandr Wang, who's 28 now. The data-labeling company he co-founded, Scale AI, was valued at about $29 billion when Meta took its 49% stake.

But I suppose the criticism is that he doesn't have deep AI model research credentials. Which raises the age-old question of how much technical expertise is really needed in executive management.

> how much technical expertise is really needed in executive management.

For running an AI lab? A lot. Put it this way: part of the reason Meta has squandered its lead is that it decided to fill its genAI dept (pre-Wang) with non-ML people.

Now that would be fine if they had decent product design and a clear roadmap for the products they want to release.

But no, they are just learning ML as they go, coming up with bullshit ideas and seeing what sticks.

But where it gets worse is that they take the FAIR team and pass them around like a soiled blanket: "You're a team that's pushing the boundaries of research, but you also need to stop doing that and work on this chatbot that pretends to be a black gay single mother."

All the while you have a sister department, Reality Labs Research (RL-R), run by Abrash, which lets you actually do real research.

Which means most of FAIR have fucked off to somewhere less stressful, somewhere more focused on actually doing research rather than posting about how you're doing research.

Wang's missteps are numerous; the biggest is re-platforming the training system. That's a two-year project right there, for no gain. It also force-forks you from the rest of the ML teams. Given how long it took to move from FBLearner to MAST, it's going to be a long slog. And that's before you tackle increasing GPU efficiency.

  • why did they move off fblearner?

    what is the new training platform?

    I must know

    • Meta has been itching to kill FBLearner for a while. It's basically an Airflow-style interface (much better to use as a dev; not sure about the admin side. I think it might even pre-date Airflow).
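
      (If you've not used Airflow: the "Airflow-style" bit means you describe your pipeline as a DAG of tasks in Python and let a scheduler run it. A rough sketch using actual Airflow, since FBLearner itself is internal; the task names are made up:)

          from datetime import datetime
          from airflow import DAG
          from airflow.operators.python import PythonOperator

          # Hypothetical three-step pipeline: each step is a task node,
          # and the >> operator declares the dependency edges.
          with DAG("train_ranker", start_date=datetime(2024, 1, 1), schedule=None) as dag:
              extract = PythonOperator(task_id="extract_features", python_callable=lambda: None)
              train = PythonOperator(task_id="train_model", python_callable=lambda: None)
              evaluate = PythonOperator(task_id="evaluate", python_callable=lambda: None)

              extract >> train >> evaluate  # the scheduler handles ordering and retries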

      They have mostly moved to MAST for GPU stuff now; I don't think any GPUs are assigned to FBLearner anymore. This is a shame because it feels a bit less integrated into Python and a bit more like "run your exe on n machines". However, it has a more reliable mechanism for doing multi-GPU things, which is key for doing any kind of research at speed.
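
      (For a public analogue of the "run your exe on n machines" model: with torchrun, every machine runs the same script and the launcher wires up the rendezvous. MAST itself is internal, so treat this as an illustrative sketch, not a description of it:)

          import os
          import torch
          import torch.distributed as dist

          # Each of the n machines runs this same entry point; the launcher
          # (torchrun publicly, something MAST-like internally) sets RANK,
          # WORLD_SIZE, LOCAL_RANK and the master address via env vars.
          def main():
              dist.init_process_group(backend="nccl")  # the rendezvous step
              torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
              print(f"rank {dist.get_rank()} of {dist.get_world_size()} is up")
              dist.destroy_process_group()

          if __name__ == "__main__":
              main()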

      My old team are not in the superintelligence org, so I don't have many details on the new training system, but there was lots of noise about "just using vercel", which is great apart from all of the steps and hoops you need to go through before you can train on any kind of non-open-source data. (FAIR had/has their own cluster on AWS, but that meant they couldn't use it to train on data we collected internally for research, i.e. paid studies and data from employees bribed with swag.)

      I've not caught up with the drama around the other choices. Either way, it's kinda funny to watch "not invented here syndrome" smashing into "also not invented here syndrome".

> Which raises the age-old question of how much technical expertise is really needed in executive management.

Whomever you choose as the core decision maker, what you get out reflects their expertise, with only minor influence from the people guiding them.

Scaling a business is a skill set. It's not a skill set that captures or expands the frontier of AI, so it's fair to label the gentleman's expensive buyout a product-development play rather than a technology play.

Hopefully he isn't referring to Alex Wang, as that would invalidate everything else he said in his comment.