Comment by COGlory
2 years ago
Before my comment gets dismissed, I will disclaim I am a professional structural biologist that works in this field every day.
These threads are always the same: lots of comments about protein folding, how amazing DeepMind is, how AlphaFold is a success story, how it has flipped an entire field on its head, etc. The language from Google about what they've actually done is so deceptive that I think it's intentionally disingenuous.
At the end of the day, AlphaFold is amazing homology modeling. I love it, I think it's an awesome application of machine learning, and I use it frequently. But it's doing the same thing we've been doing for two decades: pattern matching sequences of proteins with unknown structure to sequences of proteins with known structure, just about 2x as well as we used to be able to.
That's extremely useful, but it's not knowledge of protein folding. It can't predict a fold de novo, it can't predict folds that haven't been seen (EDIT: this is maybe not strictly true, depending on how you slice it), and it fails in a number of edge cases (remember, in biology, edge cases are everything). And again, I can't stress this enough: we have no new information on how proteins fold. We know all the information (most of it, at least) for a protein's final fold is in the sequence. But we don't know much about the in-between.
I like AlphaFold, it's convenient and I use it (although for anything serious or anything interacting with anything else, I still need a real structure), but I feel as though it has been intentionally and deceptively oversold. There are 3-4 other deep learning projects I think have had a much greater impact on my field.
EDIT: See below: https://news.ycombinator.com/item?id=32265662 for information on predicting new folds.
Not sure if you need to be reminded of how AlphaFold started: by winning a competition thought unwinnable by academics. Top labs working in protein structure prediction have fundamentally changed direction after AlphaFold and are working to do the same even better.
This is not the first (or even tenth) time I've seen an academic trying to undermine genuine progress, almost to the level of gaslighting. Comparing AlphaFold to conventional homology modeling is disingenuous at its most charitable interpretation.
Not sure what else to say. Structural biology has always been the weirdest field I've seen: the way students are abused (crystallize and publish in Nature or go bust), and the way every Nature issue has three structure papers as if that cures cancer every day. I suppose being in such a bubble warps one's perception of outsiders?
signed, someone with a PhD in biomedical engineering who did a ton of bio work.
> Not sure if you need to be reminded of how AlphaFold started: by winning a competition thought unwinnable by academics. Top labs working in protein structure prediction have fundamentally changed direction after AlphaFold and are working to do the same even better.
Not sure what part of "it does homology modeling 2x better" you didn't see in my comment? AlphaFold scored something like 85% in CASP 2020; in CASP 2016, I-TASSER had, I think, 42%. So it's ~2x as good as I-TASSER, which is exactly what I said in my comment.
> This is not the first (or even tenth) time I've seen an academic trying to undermine genuine progress, almost to the level of gaslighting. Comparing AlphaFold to conventional homology modeling is disingenuous at its most charitable interpretation.
It literally is homology modeling. The deep learning aspect boosts otherwise-unnoticed signal that most homology modeling software couldn't tease out. Also, I don't think I'm gaslighting, but maybe I'm wrong? If anything, I felt gaslit by the language around AlphaFold.
> Not sure what else to say. Structural biology has always been the weirdest field I've seen: the way students are abused (crystallize and publish in Nature or go bust), and the way every Nature issue has three structure papers as if that cures cancer every day. I suppose being in such a bubble warps one's perception of outsiders?
What on earth are you even talking about? The vast, VAST majority of structures go ENTIRELY unpublished, let alone published in Nature. There are almost 200,000 structures on deposit in the PDB.
What ramraj is talking about: if you go into a competitive grad program to get a PhD in structural biology, your advisor will probably expect that in 3-4 years you will crystallize a protein of interest, collect enough data to build a model, and publish that model in a major journal. Many people in my program could not graduate until they had a Nature or Science paper (my advisor was not an asshole; I graduated with just a paper in Biochemistry).
In a sense both of you are right. DeepMind is massively overplaying the value of what they did, trying to expand its impact far beyond what they actually achieved (this is common in competitive biology), but what they did was such an improvement over the state of the art that it's considered a major accomplishment. It also achieved the target of CASP, which was to make predictions whose scores are indistinguishable from experimentally determined structures.
I don't think academics thought CASP was unwinnable, but most groups were very surprised that an industrial player using 5-year-old tech did so well.
16 replies →
> Not sure what part of "it does homology modeling 2x better" you didn't see in my comment? AlphaFold scored something like 85% in CASP 2020; in CASP 2016, I-TASSER had, I think, 42%. So it's ~2x as good as I-TASSER, which is exactly what I said in my comment.
Wait, stop. I don't know anything about proteins, but 85% success is not ~2x better than 42%.
It doesn't really make sense to talk about "2x better" in terms of success percentages, but if you want a feel for it, I would measure 1/error instead (a 99% correct system is 10 times better than a 90% correct system), making AlphaFold around 3.9 times better.
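Spelled out, a minimal sketch in Python (treating the CASP scores as plain accuracies, which is itself a simplification):

    # Compare systems by inverse error, 1/(1 - score), instead of raw score.
    def inverse_error(score: float) -> float:
        return 1.0 / (1.0 - score)

    alphafold, itasser = 0.85, 0.42   # numbers quoted upthread
    print(inverse_error(alphafold) / inverse_error(itasser))  # ~3.9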
4 replies →
> AlphaFold scored something like 85% in CASP 2020; in CASP 2016, I-TASSER had, I think, 42%. So it's ~2x as good as I-TASSER
As someone who doesn't know proteins but is decent at math, I would not describe it this way. You are assuming a linear relationship between effort and value, but more often than not, effort has diminishing returns. 80 dB is not 2x as loud as 40 dB. An 8K image doesn't have 2x the fidelity of a 4K image. If Toyota unveiled a new engine that was 60% efficient tomorrow, no one in their right mind would say "eh, it's just 2x better". If we came out with a CPU that could clock up to 10 GHz, we wouldn't say "meh, that's just 2x what we had".
Without being able to define the relationship here, I could just as well say that 85% is 1000x better than 42%. There's just no way to put a number on it. What we can say is that it completely blew all projections out of the water.
Again, I'm not someone working with proteins, but to me it sounds as revolutionary as a 60%+ efficient engine or a 10 GHz CPU. No one saw it coming or thought it feasible with current technology.
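For what it's worth, the dB example has concrete numbers behind it (decibels are a log scale of power, and a common rule of thumb is that perceived loudness doubles roughly every 10 dB):

    # 80 dB vs 40 dB: four orders of magnitude in power, ~16x in perceived loudness.
    power_ratio = 10 ** ((80 - 40) / 10)     # 10000.0
    loudness_ratio = 2 ** ((80 - 40) / 10)   # 16.0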
I think the debate between "does amazing on metric X" versus "doesn't really understand the problem" reappears in many places and doesn't have any direct way to be resolved.
That's more or less because "really understands the problem" generally winds up being a placeholder for things the system can't do. Which isn't to say it's not important. One thing that is often included in "understanding" is the system knowing the limits of its approach; current AI systems have a harder time giving a certainty value than giving a prediction. But you could have a system that satisfied a metric for this, and other things would pop up; for example, what kind of certainty or uncertainty are we talking about (crucial for decision-making under uncertainty)?
> Comparing AlphaFold to conventional homology modeling is disingenuous at its most charitable interpretation.
It's really not. Have you played around with AF at all? Made mutations to protein structures and asked it to model them? Go look up the predicted structures for important proteins like FOXA1 [1], AR [2], EWSR1 [3], etc. (i.e. pretty much any protein target we really care about and haven't previously solved) and tell me with a straight face that AF has "solved" protein folding. It's just a fancy language model that's pattern matching to things it's already seen solved before.
signed, someone with a PhD in biochemistry.
[1] https://alphafold.ebi.ac.uk/entry/P55317 [2] https://alphafold.ebi.ac.uk/entry/P10275 [3] https://alphafold.ebi.ac.uk/entry/Q01844
I can see the loops in these structures. I don't see the problem. It still added a structure to every EMBL page, and people are free to judge the predictions themselves. For all I care (ostensibly as the end customer of these structures), I don't mind having a low-confidence structure for any arbitrary protein at all. It's only marginally less useful to actual biology than full-on X-ray structures anyway.
1 reply →
This isn’t a good use of the term gaslighting. Accusing someone of gaslighting takes what we used to call a ‘difference of opinion’ and mutates it into deliberate and wicked psychological warfare.
Incidentally, accusing someone of gaslighting is itself a form of gaslighting.
Well, it can be gaslighting, but not always. A knowingly false accusation, repeated often enough and in a way that makes the accused question their own perception of reality, would be gaslighting.
Not only is CASP not "unwinnable," it's not even a contest. The criteria involved are rated as "moderately difficult." AlphaFold is a significant achievement, but it sure as hell hasn't "revealed the structure of the protein universe," whatever that means.
Which top labs have changed direction? Because AlphaFold can't predict folds, just identify ones it's seen.
I've directly communicated with the leaders of CASP and at DM that they should stop representing this as a form of protein folding and just call it "crystal/cryo-EM structure prediction" (they filter out all the NMR structures from the PDB since they aren't good for prediction). They know it's disingenuous, and they do it on purpose to give it more impact than it really deserves.
I would like to correct something here: it does predict structures de novo and predict folds that haven't been seen before. That's because of the design of the NN: it uses sequence information to create structural constraints. If those constraints push the modeller in the direction of a novel fold, it will predict that.
To me what's important about this is that it demonstrated the obvious (I predicted this would happen eventually, shortly after losing CASP in 2000).
> I would like to correct something here: it does predict structures de novo and predict folds that haven't been seen before. That's because of the design of the NN: it uses sequence information to create structural constraints. If those constraints push the modeller in the direction of a novel fold, it will predict that.
Could you expand on this? Basically, it looks at the data and figures out what's an acceptable position in 3D space for residues to occupy, based on what's known about other structures?
I will update my original post to point out I may not be entirely correct there.
The distinction I'm trying to make is that there's a difference between looking at pre-existing data and modeling (ultimately homology modeling, but maybe slightly different) and understanding how protein folding works, being able to predict de novo how an amino acid sequence will become a 3D structure.
Also thank you for contacting CASP about this.
From what I can tell, the model DM built is mining subtle relationships between aligned columns of multiple sequence alignments and any structural information that is tangibly related to those sequences. Those relationships can be used to infer rough atomic distances ("this atom should be between 3 and 7 angstroms from this other atom"). A large, partially filled-out matrix of distances is output, and those distances are used as constraints in a force field (which also includes lots of prior knowledge about protein structure); then they run simulations which attempt to minimize both the force field and the constraint terms.
In principle you don't even need a physical force field: if you have enough distance information between pairs of atoms, you can derive a plausible structure by embedding the distances in R3 (https://en.wikipedia.org/wiki/Distance_geometry and https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.21...).
Presumably, the signal they extract includes both rich local interactions (amino acids near in sequence) and distant ones inferred through sequence/structure relationships, and the constraints could in fact push a model towards a novel fold, likely through some extremely subtle statistical relationships to other evolutionarily related proteins that adopt a different fold.
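To make the embedding idea concrete, here's a minimal sketch using classical multidimensional scaling, the textbook form of distance geometry. This is emphatically not AlphaFold's pipeline (which works from partial, probabilistic distance estimates); it just shows that complete pairwise distances determine a structure up to rotation and reflection:

    import numpy as np

    def embed_from_distances(D, dim=3):
        # Classical MDS: turn a Euclidean distance matrix into coordinates.
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
        B = -0.5 * J @ (D ** 2) @ J               # Gram matrix of centered points
        w, V = np.linalg.eigh(B)
        top = np.argsort(w)[::-1][:dim]           # keep the largest eigenvalues
        return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

    # Toy check: recover random 3D points from their distances.
    X = np.random.rand(10, 3)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    Y = embed_from_distances(D)
    assert np.allclose(D, np.linalg.norm(Y[:, None] - Y[None, :], axis=-1))

With only sparse or noisy distance bounds (the realistic case), you instead minimize a restraint-violation term alongside a force field, as described above.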
> The distinction I'm trying to make is that there's a difference between looking at pre-existing data and modeling (ultimately homology modeling, but maybe slightly different) and understanding how protein folding works, being able to predict de novo how an amino acid sequence will become a 3D structure.
Your objection is that AlphaFold is a Chinese room?
What does that matter? Either it generates useful results or it doesn't. That is the metric we should evaluate it on.
2 replies →
> There are 3-4 other deep learning projects I think have had a much greater impact on my field.
Don't leave us hanging... which projects?
1) Isonet: takes low-SNR cryo-electron tomography images (which are extremely dose-limited, so just incredibly blurry and frequently useless) and does two things:
* Deconvolutes some image aberrations and "de-noises" the images
* Compensates for missing-wedge artifacts (the missing wedge is the fact that the tomography isn't done from -90° to +90°, but usually from -60° to +60°, leaving a 30° wedge of basically no information at the top and bottom), which usually show up as directionality in the image density. So if you have a sphere, the top and bottom will be extremely noisy and stretched up and down (in Z). (A toy sketch of the wedge geometry follows at the end of this list.)
https://www.biorxiv.org/content/10.1101/2021.07.17.452128v1
2) Topaz, but Topaz really counts as 2 or 3 different algorithms. Topaz has denoising of tomograms and of flat micrographs (i.e. images taken with a microscope, as opposed to 3D tomogram volumes). That denoising is helpful because it increases contrast (which is the fundamental problem in cryo-EM when looking at biomolecules). Topaz also has a deep-learning particle picker which is good at finding views of your protein that are under-represented or otherwise missing; missing views, again, normally result in artifacts when you build your 3D structure.
https://emgweb.nysbc.org/topaz.html
3) EMAN2's convolutional neural network for tomogram segmentation / Amira's CNN for segmentation / the flavor-of-the-week CNN for tomogram segmentation. Basically, we can get a 3D volume of a cell or virus or whatever, but those volumes are noisy. To do anything worthwhile with one, even after denoising, we have to say "this is cell membrane, this is virus, this is nucleic acid," etc. CNNs have proven to be substantially better at doing this (provided you have an adequate "ground truth") than most users.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5623144/
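As promised above, a toy sketch of the missing-wedge geometry from item 1 (assuming a single tilt axis and a -60° to +60° range; real sampling masks depend on the acquisition scheme):

    import numpy as np

    n = 64
    kz, kx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
    # Angle of each Fourier-space point from the specimen (kx) plane:
    angle = np.degrees(np.arctan2(np.abs(kz), np.abs(kx)))
    sampled = angle <= 60.0      # covered by the -60°..+60° tilt series
    wedge = ~sampled             # unmeasured region around the beam (kz) axis
    print(wedge.mean())          # fraction of the (kx, kz) plane lost, ~0.29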
I asked a structural biologist friend of mine (world class lab) about the impact of alphafold.
They said it's minimal.
In most cases, having a "probably" isn't good enough. They use AlphaFold to get early insights, but then they still use crystallography to confirm the structure. Because at the end of the day, you need to know for sure.
I'm not a biologist, but that doesn't sound minimal if crystallography is expensive.
It sounds like how we model airplanes in computers but still test the real thing. I wouldn't call the impact of computer modelling on airplane design minimal.
> it can't predict folds that haven't been seen
This seems strange to me. The entire point of these types of models is to predict things on unseen data. Are you saying DeepMind is completely lying about their model?
DeepMind solved CASP. Isn't the entire point of that competition to predict unseen structures?
If AlphaFold doesn't predict anything then what are you using it to do?
AlphaFold figures out that my input sequence (which has no structural data) is similar to this other protein that has structural data. Or maybe different parts of different proteins. It does this extremely well.
This is a gross misrepresentation of the method.
12 replies →
Disclaimer: I'm a professional (computational) structural biologist. My opinion is slightly different.
The problem with structure prediction is not a loss/energy-function problem; even if we had an accurate model of all the forces involved, we'd still not have an accurate protein structure prediction algorithm.
Protein folding is a chaotic process (similar to the three-body problem). There's an enormous number of interactions involved between different amino acids, the solvent, and more. Numerical computation can't follow chaotic systems exactly, because floating-point numbers have a finite representation, which leads to rounding errors and loss of accuracy.
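A toy illustration of that sensitivity, using the logistic map (a standard chaotic system, nothing to do with protein physics): run the same iteration in float32 and float64 and watch the trajectories decorrelate.

    import numpy as np

    # Identical nominal starting point, two precisions.
    x32 = np.float32(0.3)
    x64 = np.float64(0.3)
    for _ in range(60):
        x32 = np.float32(3.9) * x32 * (np.float32(1) - x32)  # r = 3.9: chaotic regime
        x64 = 3.9 * x64 * (1.0 - x64)
    print(x32, x64)  # after ~60 steps the rounding differences have grown to O(1)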
Besides, short-range electrostatic and van der Waals interactions are pretty well understood, and before AlphaFold many algorithms (like Rosetta) were pretty successful at a lot of protein modeling tasks.
Therefore, we need a *practical* way to look at protein structure determination that is akin to AlphaFold2.
As an outsider learning more about protein folding, could you elaborate on the assertion that the sequence is (mostly) all you need (transformer/ML reference intended)?
Doesn't this assume the final fold is static and invariant of environmental and protein interactions?
Put another way, how do we know that a protein does not fold differently under different environmental conditions or with different molecular interactions?
I realize this is a long-held assumption, but after studying scientific research for the past year, I realize many long-held assumptions aren't supported by convincing evidence.
These threads are always the same: lots of comments about protein folding, how amazing DeepMind is, how AlphaFold is a success story, how it has flipped an entire field on its head, etc.
I don't think that's necessarily so: there is a lot of justified scepticism about the wilder claims of ML in this forum. It is in fact quite difficult at times for an outsider to the field in question to know how kneejerk that scepticism is.
Additionally, folding doesn't focus on what matters. Generally you want to understand the active site; you already know the context (globular, membrane, embedded, conjugated) of the protein. It would be interesting if the folding could help identify active sites for further analysis. But I don't think AlphaFold is identifying new active sites or improving our understanding of their nuances.
Right, but even a speed-up / quality increase can flip workflows on their head. Take ray tracing, for example: when you speed it up by an order of magnitude, you can suddenly go from taking a break every time you want to render a scene to being able to iteratively work on a scene and preview it as you go.
I got a lot of shit (still do) when the news first broke for pushing back against the notion that AlphaFold "solved" protein folding. People really wanted to attach that word to the achievement. Thank you for providing a nuanced take on exactly why that doesn't make any sense.
I'm curious to read more on the 3-4 other deep learning projects you mentioned that have had a larger impact on your fields. Can you share some links to those works?
Yup. It’s great, but there are still many aspects to unpack and work on. Hence why Rosetta is a thing.
Rosetta methods are also moving towards ML. Here’s an article from last week: https://www.science.org/doi/10.1126/science.abn2100
> AlphaFold is amazing homology modeling
If it is homology modelling, then how can it work without input template structures?
It has template structures. AlphaFold uses the following databases: UniRef90, BFD, MGnify, and Uniclust30 (sequence databases used to build the MSA), plus PDB70 and the PDB itself (structural templates).
Those databases are used to derive the evolutionary couplings and distance matrices used by the algorithm. Several of those databases aren't even structural ones. Furthermore, AlphaFold can function with only an MSA as an input, without retrieving a single PDB coordinate.
6 replies →
“Disclaim” stopped me.
Disclaim means to deny or renounce.
Can we just chill on the whole “using this single word incorrectly breaks your whole argument” thing?
A lot of folks on HN end posts about a company with a sentence like "Disclaimer: I used to work for X". This language (probably taken from contract law or something) is meant as an admission of possible bias, but in practice it is also a signal that this person may know what they're talking about more so than the average person. After reading a lot of posts like this, it might feel reasonable for someone to flip the word around and say something like "I need to disclaim…" when beginning a post, in order to signal their proximity to a topic or field as well as any insider bias they may possess.
So sure, “I need to disclose” would’ve been the better word choice, but we all knew what GP was saying. It seems pedantic to imply otherwise.
Let me translate. They said, “I will disclaim I am a professional structural biologist that works in this field every day.”
That is synonymous with saying, “I will deny I am a professional structural biologist that works in this field every day.”
The person posting is actually a structural biologist. What they stated was cognitively dissonant with the intent of their post, and that’s what stopped me.
I don’t pay attention to typos or minor usage issues, but in this case, I read two more sentences and said, “What??”
EDIT: Two more things. First, I found the post interesting and useful. I didn’t say anything about breaking the argument.
Second, “I need to disclose…” is the exact opposite of what they said.
1 reply →
>we all knew what GP was saying
I was confused initially too.
Or to make a disclaimer... like the OP did?
Merriam-Webster [1]: "Definition of disclaim ... intransitive verb 1 : to make a disclaimer ..."
[1]: https://www.merriam-webster.com/dictionary/disclaim
The verb was used transitively. Transitive verb, sense 2: "DENY, DISAVOW: disclaimed any knowledge of the contents of the letter".