Comment by wrsh07
17 hours ago
This is a strange take, because it's consistent with what Google has been doing for a decade with AI. AlphaGo never had its weights released. Nor has any successor (not MuZero, not the StarCraft one (AlphaStar), not the protein-folding AlphaFold, nor any other model that could reasonably be claimed to be in the series, afaik)
You can state, as a philosophical ideal, that you prefer open source or open weights, but that's not something DeepMind has ever prioritized.
I think it's worth discussing:
* What are the advantages or disadvantages of bestowing a select few with access?
* What about having an API that can be called by anyone (although they may ban you)?
* Or just finally releasing the weights?
But I think "behind a locked-down API where they can monitor usage" makes sense from many perspectives. It gives them more insight into how people use it (are there things people want to do that it fails at?), and it potentially gives them additional training data.
All of what you said makes sense from the perspective of a product manager working for a for-profit company trying to maximize profit either today or eventually.
But the submission blog post writes:
> To advance scientific research, we’re making AlphaGenome available in preview via our AlphaGenome API for non-commercial research, and planning to release the model in the future. We believe AlphaGenome can be a valuable resource for the scientific community, helping scientists better understand genome function, disease biology, and ultimately, drive new biological discoveries and the development of new treatments.
And at that point, they're painting this release as something they did in order to "advance scientific research" and because they believe "AlphaGenome can be a valuable resource".
So now they're at a crossroads: is this release actually for advancing scientific research? And if so, why aren't they doing it in a way that actually maximizes the advancement of scientific research? Which, I think, is the point of the parent's comment.
Even the most basic principle of doing research, being able to reproduce a result, goes out the window when you put a model behind an API, so personally I doubt their ultimate goal here is to serve the scientific community.
Edit: Reading further comments it seems like they've at least claimed they want to do a model+weights release of this though (from the paper: "The model source code and weights will also be provided upon final publication.") so remains to be seen if they'll go through with it or not.
The key question is the license they attach to the model and weights. I've been seeing an increasing number of releases in this space under non-commercial licenses.
I think companies in the space should either totally open source or not publish at all.
I can see publishing like this as achieving one (or more) of several objectives:
1. Marketing software for sales / licensing
2. Marketing startup to investors
3. Crowdsourcing use cases or product features from academia
Now here are the problems with those:
1. Selling software (exclusively) to drug companies is a terrible business model. Very low ceiling there. You can make more from one drug.
2. Indicates company focus is producing models and not drugs. See point one.
3. Computational labs want to release open source, so it's not viable for them to build on restricted tooling. Experimental labs may just be using it to algo-wash prior hypotheses / biases.
Now weigh that against the disadvantage of letting competitors know what you are working on, how far you have progressed, and what your methods are.
But to add some historical context.
Similarly with AlphaGo: they claimed to do it "to advance Go" and help the Go community, but they played Lee Sedol, released a few curated self-play games, collected the publicity, and abandoned Go with no artifacts like source code or weights.
But in hindsight their paper turned out to be almost 100% reproducible and resulted in super-human open-source alternative less than a year later.
So the story might repeat here, and they will achieve their stated goal without releasing anything.
To be clear: I agree that opening up model + weights makes it possible for third parties to distill or fine-tune.
If you look at the frenzy of activity that happened after Midjourney became accessible, that was awesome for everyone. Midjourney probably got help running their model efficiently, and a ton of progress was quickly made.
I'm pretty sympathetic to a company doing a windowing strategy: prepare the API as a sort of beta release timed with the announcement. Spend some time cleaning up the code for public release (at Google this means ripping out internal dependencies that aren't open source), and then release a reference inference implementation along with the weights.
That's pretty reasonable. I wanted to push back on this idea that "the reason Google isn't dropping model + weights is because the corporate screws are coming down hard"
Google isn't waiting to release the weights so that they can profit from this. It's essentially the first step in the process, and serving via API gives them valuable usage data that they might not get if/when it's open sourced.
> serving via API gives them valuable usage data
It might give them a bit, but AFAIK most institutions (especially non-American ones) aren't exactly thrilled about using closed American APIs to do science, not least because API usage isn't reproducible.
Sure, they might be able to play around with some toy data, but for Google to actually get valuable usage data, they need to let people use the thing for real work, and then you cannot gate it behind an API; it isn't feasible in a real-world environment.
I take most of your points except the last one. The feedback would come in the form of publications, definitely from academia and to a lesser degree industry (admittedly a slow iteration time). Also just public discourse - there was no dearth of very specific, highly technical feedback for any of the releases of alphafold on twitter, for example.
But I can’t use this at all at work (a pharma company) because it would leak confidential information. So anything they learn from usage data is systematically excluding (the vast majority of?) people working on therapeutics.
I feel like this take is missing a sense of balance. You can have a goal of advancing scientific research while also still making money. You don’t have to choose one extreme end of the scale.
I’d argue that the product providing some monetary value for Google will help ensure that this team doesn’t get moved to some more profitable project instead. That way they can continue improving this tool and making more tools like it in the future.
Having worked in research and development for 20 years, I can assure you that the only science that leaves the lab is that which can make money.
The predecessor to this model, Enformer, which was developed in collaboration with Calico, had a weight release and a source release.
The precedent I'm going with is specifically in the gene regulatory realm.
Furthermore, a weight release would allow others to finetune the model on different datasets and/or organisms.
I think that from a research/academic view of the landscape, building off a mutable API is much less preferable than building off a set of open weights. It would be even better if we had the training data, along with all the code and open weights. However, I would take open weights over almost anything else in the current landscape.
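To make the fine-tuning point concrete, here's a minimal, entirely hypothetical sketch (the `embedding` function, the toy data, and the single-weight linear head are all invented for illustration; nothing here is from AlphaGenome): with weights on disk you can freeze the pretrained model and train only a small head on labels from a new dataset or organism, something that's impossible against a gated API.

```python
# Hypothetical sketch: `embedding(x)` stands in for a frozen feature from
# an open-weights model; we fine-tune only a 1-parameter linear head on
# labels from a "new organism". All names and values are illustrative.

def embedding(x):
    # stand-in for the frozen pretrained model's scalar feature
    return 0.5 * x + 1.0

# toy "new organism" data: the label happens to be 2x the frozen feature
data = [(x, 2.0 * embedding(x)) for x in range(-5, 6)]

w = 0.0                 # the new head's single weight, trained from scratch
lr = 0.01
for _ in range(2000):   # plain gradient descent on mean squared error
    grad = sum(2 * (w * embedding(x) - y) * embedding(x) for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # ≈ 2.0: the head recovers the new mapping
```

The design point is simply that the expensive pretrained part stays fixed while a cheap task-specific head adapts, which is why open weights (rather than API access) are what make this kind of reuse possible.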
If it came to light that somebody found a way to use this API in a way that is harmful to society would you be happy that Google could revoke access? Or unhappy?
This is a real tradeoff of freedom vs _. I agree that I'm not always a fan of Google being the one in control, but I'm much happier that they are at least releasing an API. That's not something they did for Go! (Of course there was a book written, so someone got access.)
If it came to light that somebody found a way to use this API in a way that is beneficial to society, would you be happy that Google could revoke access? Or unhappy?