Comment by zackmorris
4 months ago
Good stuff.
The main misconception probably comes from my not remembering the "other genes" part. I read it in an article and it clicked so immediately that I took it for granted and didn't save it. I still can't find it, but I think what it said was: if we tell CRISPR to remove a gene sequence, it will remove every instance of that sequence anywhere it appears. So it has to be tuned to a very specific sequence, possibly taking the unrelated genes around the site into account so that it's unique, and I think there was a limit to how long that pattern could be.
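Here's a toy sketch of that uniqueness idea as I understand it, not how any real CRISPR design tool works; the ~20-letter guide length, the sequences, and the count_occurrences helper are all just illustrative assumptions:

```python
# Toy check: a candidate target sequence is only "specific" if it occurs
# exactly once in the genome; multiple hits mean the edit would land in
# more than one place. All data here is made up.

def count_occurrences(genome: str, target: str) -> int:
    """Count (possibly overlapping) occurrences of target in genome."""
    count, start = 0, 0
    while (idx := genome.find(target, start)) != -1:
        count += 1
        start = idx + 1
    return count

genome = "ATGCGT" * 1000 + "GATTACAGATTACAGATTAC" + "ATGCGT" * 1000
unique_guide = "GATTACAGATTACAGATTAC"    # appears once in the toy genome
repeated_guide = "ATGCGTATGCGTATGCGTAT"  # falls inside the repeats, many hits

for guide in (unique_guide, repeated_guide):
    hits = count_occurrences(genome, guide)
    print(f"{guide}: {hits} hit(s)", "(specific)" if hits == 1 else "(not specific)")
```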
> Not even close. Ironically, while the technologies are pretty good in general, every edit requires a ton of engineering work. CRISPR systems are notoriously idiosyncratic - they might edit one target in 80% of cells, and 0% at another target, for no apparent reason. There are definitely open problems with base and prime editing, and those will probably get more-or-less solved in 5-10 years, but I'm reasonably sure there will be genetic disorders for which there is no treatment for decades.
OK, that makes sense, and I kind of wondered what the holdup was. It sounds like there's some probability involved, plus non-obvious blockers that, if they were figured out, would let things move forward. I'm sure it's far more complex than anything I could imagine, I get it.
What I was trying to say, though, is that biotech isn't abstracted the way computer science is (yet). It deals a lot with low-level details and legwork, plus the regulatory and market limitations you rightly pointed out. It would be like a programmer never getting free from assembly language, or even having to build their computer from scratch, then spending 10 years on an app just to be told they weren't allowed to sell it. That's "real work" that programmers generally shy away from, although I think that deep down a lot of hackers dream of working on real problems like that, if they could ever get free of the minutiae they're stuck fixing every day.
My experience with AI so far has been that it turns any worker into a manager. I've largely stopped coding directly; I just tell the AI what to do, and it does it every time, in imaginative ways that constantly surprise me. Rather than thinking of it as one AI, I think of it as a team of agents that all work in parallel using context from the current conversation. So I can have several todos lined up, then work through them in real time as if every prompt is being chewed on by one of those agents. They never get tired or complain like me. The main holdup right now is that I don't know how to orchestrate them effectively, so I'm having to micromanage them.
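A minimal sketch of the fan-out I mean, assuming a hypothetical run_agent() wrapper around whatever model API you're using; the todos and context strings are placeholders:

```python
import asyncio

# Hypothetical stand-in for a call out to a coding agent. In practice this
# would send the shared conversation context plus a single todo item.
async def run_agent(context: str, todo: str) -> str:
    await asyncio.sleep(0.1)  # pretend the agent is off working
    return f"[done] {todo}"

async def work_through_todos(context: str, todos: list[str]) -> list[str]:
    # Each todo gets its own agent; they all run in parallel and share
    # the same context from the current conversation.
    return await asyncio.gather(*(run_agent(context, todo) for todo in todos))

if __name__ == "__main__":
    context = "refactor notes, style guide, current branch state"
    todos = [
        "rename the legacy module",
        "add tests for the new interface",
        "update the docs",
    ]
    for result in asyncio.run(work_through_todos(context, todos)):
        print(result)
```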
So far I'm about 3-4 times more productive: I'm wrapping up a huge refactor outside my area of expertise that took about 2 weeks but would have taken 2 months if I'd been working alone. And its speed will double every couple of years, so 5 years from now AI will be at least 10 times faster than any human programmer. It will also modify itself to quickly grow beyond what we can teach it. I'd estimate that the one I work with has about a 120-150 IQ (at least for programming), although it can be hopelessly naive sometimes. Gaining 10 IQ points per year would put it outside our realm of understanding once it passes 200.
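Spelling out that back-of-the-envelope projection (the ~3.5x starting multiplier and the 2-year doubling period are just my rough assumptions from above):

```python
# Rough compound-growth projection using the numbers above.
current_multiplier = 3.5      # assumed midpoint of "3-4 times more productive"
doubling_period_years = 2.0   # assumed "double every couple of years"
years_out = 5

projected = current_multiplier * 2 ** (years_out / doubling_period_years)
print(f"~{projected:.0f}x a lone programmer in {years_out} years")  # ~20x, well past 10x
```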
Ray Kurzweil predicted that AI would reach human-level intelligence by 2029 and then evolve into AGI that merges with us in the Singularity by 2045. So far we're ahead of schedule IMHO, although I spent most of my life disappointed that things weren't progressing, until just a couple of years ago. That ended up being due more to economic forces (it took the intervention of a billionaire after a 20-year suppression of R&D following the Dot Bomb and the resulting underemployment/wealth inequality) than to any technical limitation.
I can't really speak to the specific challenges you mentioned, but I can recommend turning them around. Imagine if the hardest things for us are the easiest things for AI. Filtering large amounts of data? Done. Exploring multiple solutions in a problem space? Straightforward. Imagining novel solutions? Get ready to be surprised. AI will quickly turn all of the unknowns into a neat and tidy flowchart and even test it in simulation. Rinse, repeat, until every disorder is healed.
- some woo woo stuff -
I know it's hard to believe, but we'll have to let go of the unknown. Problem-solving is becoming a solved problem. My gut feeling is that we're entering an age of disillusionment where work becomes performative and questions are answered as quickly as they're asked, so that we end up in a long now like Star Wars, where there isn't so much invention as recombination of recipes. Today will be no different than 10,000 years ago. Unless it's more like Dune, where they ban AI and performative work becomes all there is, so that life becomes a play within a fascistic culture that celebrates pageantry. Not so different from spirit deciding to reincarnate in human form to experience suffering and remembrance (a useful model for tech being indistinguishable from magic). Yet today would still be no different than 10,000 years ago. That's why I think the long now is inevitable, regardless of whether or not we constrain AI.