Comment by spywaregorilla
2 years ago
I think it's a naive quote. Sounds wise. Is actually dumb. At least as broadly applied in this context.
Lots of science is done without explanations, and it's still useful. A lot of genetic research is just turning one gene off at a time and seeing if things work differently without it. And then you say gene X causes Y. Why? Dunno. Genetics is not unique in this. Answering questions is useful. Answering questions about the answers to those questions is useful. But it spirals down infinitely, and we stop at every layer because every layer is useful.
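To make that knock-out-and-compare logic concrete, here's a toy sketch in Python (hypothetical gene, made-up numbers, nothing real): the causal claim falls out of the comparison alone, with no mechanistic story attached.

    # Toy knockout screen: conclude "gene X affects trait Y"
    # without any explanation of *why*. All data is invented.
    import statistics

    wild_type = [4.1, 3.9, 4.3, 4.0, 4.2]   # trait Y in control plants
    knockout_x = [2.1, 2.4, 1.9, 2.2, 2.0]  # trait Y with gene X disabled

    shift = statistics.mean(wild_type) - statistics.mean(knockout_x)
    print(f"trait Y shifts by {shift:.2f} when gene X is knocked out")
    # ~1.98 -> gene X causes a change in Y; mechanism: dunno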
But more importantly, machine learning models do embed explanations. LLMs can often explain the principles of their claims. Look at code-generating models, or code-explaining models. Simple decision trees can illustrate the logic of Newton's laws as mathematical rules.
Holding things up as proof of human specialness is just a reductive retreat, similar to how we used to explain everything as God's will.
> And then you say gene X causes Y. Why? Dunno.
Now this is definitely naive. Geneticists definitely look for an explanation of why this happens. Does looking for an answer involve randomly turning some stuff on and off? Yes. That doesn't mean scientists aren't looking for an answer.
As I said: some do look further, some do not. For the particular niche of genetics research, most of the time we actually don't, and that's fine, because the deeper answer isn't particularly actionable, whereas the base-layer understanding of a genetic interaction is helpful for things like personalized medicine.
We don't shit on the scientists who decide to stop chasing "wait, but why" and instead answer higher-level questions, because digging deeper is obviously not always the appropriate thing to do.
I was gonna say, I've been on experiments where we literally just blasted the shit out of genomes to do the knockouts, then grew up the plants and phenotyped them to compare the knockouts against controls.
The point is that we know many things as facts that we cannot explain. We may be looking for the explanation, but as yet we don't know why many things are as they are (as in the example above).
Actually, LLMs are also a good example. We don't know why ChatGPT generates apparently cogent text and answers. What we know is that if we train it this way and do a bunch of optimizations, we get a machine that appears to be thinking, or at least one we can have a decent conversation with. There are many efforts to explain it (I remember recently reading a paper analysing the GPT-3 neuron that determines "an" vs. "a" in English).
Finally, all science is falsifiable by definition, so what we think we know now may be disproven tomorrow.
Emergent properties are one of the places where pure understanding tends to break down under an incomprehensibly huge problem space.
For example, people have been doing accidental science since the start of human agriculture, selectively breeding without understanding the mechanics of DNA transfer. And you're right, geneticists look for answers and try to minimize the size of the problem space to find them faster, but the staggering number of interactions that can be caused by a single gene expression pretty much requires picking one place to look at with a microscope and ignoring everything else going on around it in order to get an answer within a human lifetime.
Though calling it pseudoscience would be insane.
Nobody knows how general anaesthetics work. It's a stone cold mystery. Solving that mystery might lead to a new generation of anaesthetic agents or some other useful medical technology, but nobody is particularly perturbed by our ignorance; a practical knowledge of how to safely and reliably induce anaesthesia is immeasurably more valuable than a theoretical understanding.
Science might aspire to rationality, but reality is Bayesian.
> LLMs can often explain the principles of their claims.
Tbf, those explanations are often just straight-up bullshit.
I don't really care if this generation of LLMs is good or not. But fwiw, that's really not the case in my experience. On its face, it seems hard to argue that a machine which infers what a reasonable answer would be has no internal representation of the mechanics and actors present in the question; otherwise it would not work. They clearly work well beyond regurgitating the specific examples they learned from.
That doesn't mean those representations are in any way correct, though. I may be anthropomorphizing too much here, but it feels exactly like asking someone who's done nothing but rote learning and watching them try to attach plausible reasons to things they fundamentally do not understand, right down to the instant assumption that if the asker mentions something, it must be true.
For the purposes of the article, it's fine if they're bullshit; it only matters that they are there.
Randomly turning genes on and off to see what they do is experimentation. It leads to a better understanding of genes. Biology is messy and complex, so it's difficult to trace all the causes and effects. But there is some understanding of the mechanisms by which genes turn into phenotypes.
Certainly. And that's great. And often it's also great to simply care about, say, gene-drug interactions and not go looking for the root cause.
> Sounds wise. Is actually dumb.
Another excellent early entrant in the 2023 Accidental HN Slogan Contest.
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
https://news.ycombinator.com/newsguidelines.html