Comment by cs702
3 years ago
In the short run, technological progress is much slower than anyone hopes, but in the long run it is much faster than anyone expects.
My favorite example is that in 1987, the scientific consensus was that it would take "at least 100 years" and likely much longer to sequence the entire human genome.[a] But the vast majority of it was sequenced by 2000, only 13 years later. Moreover, just two decades later, anyone could get their genome checked for known markers for pocket change by companies like 23andMe, founded in 2006.
Having some expertise and interest in AI, I regularly watch presentations by all companies working on self-driving and also look at the videos posted by beta testers online. While it's fun to watch the failures, I'm more interested in judging whether the technology is continuing to improve.
My perceptions contradict this article: (1) The technology is progressing faster than is generally recognized, with vehicles getting progressively better at dealing with edge cases and handling failures gracefully. (2) Judging by the videos I've watched online, Tesla is significantly ahead of everyone else.
Prediction: Before the end of the decade, this article will seem... short-sighted.
--
[a] https://www.nature.com/scitable/topicpage/sequencing-human-g...
--
EDIT: I edited my statement about 23andMe based on sausagefeet's comment below.
[Disclaimer] I worked for Cruise for 4 years.
I agree with everything you said, but chuckled at this particular part (which is very wrong):
> Tesla is significantly ahead of everyone else.
Everyone in the industry knows that Tesla is nowhere near the tip of the technology. What Tesla does is _fantastic marketing_. Their whole self-driving division is just a mechanism to sell more cars.
At a high level, this is why:
- The hard thing about self driving isn't the first 95%, it's the impossibly long tail of the last 5% with unique, chaotic and rare scenarios (think a reflective cistern tank showing a reflection of the back of a truck transporting stop signs, or visual illusions from terrible weather and fog).
- Doing well on the last 5% is where most of the energy from Waymo/Cruise goes (the two leaders by quite a margin).
- Tesla is camera only. Weather alone means you can't reach safety-critical reliability this way. Cameras don't handle fog well, precipitation well, or sunsets/bad lighting well (see the many Tesla freeway crashes caused by this).
- Tesla does well on the 95%, and Elon is a marketing genius; with those two things it's easy to convince outsiders that "Tesla is significantly ahead of everyone else".
My prediction: before the end of the decade, Cruise and Waymo will have commoditized fleets doing things that most people today would find unbelievable. Tesla will still be talking a big game but ultimately won't have permits for you to be in a Tesla with your hands off the wheel.
edit: formatting and typo
My favorite case so far of the '5%' that you mention happened on my Tesla with respect to object recognition, and I still laugh about it to this day.
I was driving down the road as normal, 4 lane divided highway that's a bit hilly. Suddenly my car starts having what I can only describe as a panic attack saying I'm running a stop sign and blaring alarms.
It was detecting a giant 40ft tall red circle sign a bit away as a stop sign...
That's interesting, because my car never has blared an alarm for that. I live in a part of the country where, in certain parts of semi-rural and hilly areas, they've decided to introduce 4-way stops instead of red lights, or to slow down traffic on a straightaway. As a result, every so often I would not realize there's a 4-way stop (luckily google maps now shows them) and stop a little late. This happened 2-3 times in the last year or two and never did the Tesla make a peep.
Replying to my own comment because I just remembered this as well...
I definitely saw a case of them overfitting their neural networks lately.
Going over a single-lane bridge that's an exit ramp, the car started interpreting the "other side" of the concrete barrier as oncoming traffic lanes... when there was nothing there.
Excellent example: "reflective cistern tank with a reflection of the back of a truck transporting stop signs"
People tend to forget just how hard the edge cases in vision are!
It's weird to me that people forget how bad vision can be for driving and expect Tesla to somehow be better than our own eyes.
How often do you encounter situations like bad fog, sunset glare, or rain at night where it's a total struggle to drive, you slow right down to a crawl, and even then only do all right because of a ton of inference?
I think Tesla deciding to go vision-only will be regarded as one of the greatest blunders in self-driving history.
26 replies →
As someone who also follows the progress online and watches the presentations by Cruise, Tesla, etc., I agree that Cruise is well ahead of Tesla.
It feels like Tesla's main strategy is to add more data, more compute power, more simulation, and hope for "convergence". Maybe that will work, but right now Cruise's technology feels more mature and thought through.
Is it? Remarkably, Tesla's compute bill does not seem to be growing exponentially with the amount of data it is supposedly collecting.
Point is I would not take anything Elon says at face value. He’s a marketer who is constantly bending the truth.
1 reply →
They’re not though.
As of today I can't buy a production car with Cruise. You don't get points for building something that's theoretically superior but not an actual product. It's the same story with companies like Apple that wipe the floor with wannabe hardware companies that have theoretically better specs. Like Apple, Tesla actually ships.
4 replies →
What happens if you drop a Cruise car at a random place in North America it has never seen before?
I believe that Waymo and Cruise have more capable platforms that can more accurately measure the environment at those boundary conditions, whereas Tesla's camera-only approach more closely mimics human perception. But where do they stand in terms of datasets used for training their models?
It seems like Tesla has a huge advantage in terms of training data by leveraging a fleet of millions of vehicles.
Which is more valuable, experience or technology?
Having ridden the self driving vans from Waymo in Arizona several times, it really does feel like stepping into the future. Although they only cover a specific geographical area, they have really refined the riding experience within it.
George Hotz is 'in the industry' and seems to disagree with you. I have heard others say the same. It seems people who work at Waymo/Cruise are totally convinced by those things.
Different approaches lead down different paths to solutions. I am not convinced that either will be successful, and I'm not convinced Waymo/Cruise are ahead.
Unlike those, Tesla actually makes money and uses the technology stack in more limited forms.
If Tesla wants to be serious about self driving cars they should be seeking regulatory approval and running pilot programs with test drivers. Without regulatory approval it’s just a toy, albeit an incredibly dangerous one.
95% is a ridiculous exaggeration. How well can these cars do in the winter? You know, that season we have that can easily keep the ground covered with snow and/or ice for 30% of the year in many cities. I lived in Chicago and it was genuinely difficult to drive for 3-4 months of the year. How well can these cars do during the very rainy hurricane season in Florida?
Do people who work in this industry actually think they've solved 95% of driving scenarios because their software can manage driving in sunny California, Nevada, and Arizona?
> Everyone in the industry knows that Tesla is nowhere near the tip of the technology. What Tesla does is _fantastic marketing_.
Between one and two years ago, that was my perception too. But the rapid progress I've seen with Tesla FSD Beta over the past couple of years, and over the last year in particular, has forced me to change my mind. (Note: I'm talking only about Tesla's beta software. Tesla's production software is behind by dozens of versions and is without a doubt technologically inferior to Waymo and Cruise.)
Less than two years ago, I would have said FSD Beta could only deal with the "first 95%" too. Now, my perception is that FSD Beta routinely handles the first > 99% and fails only on < 1% of situations. Moreover, the failures have become more graceful -- e.g., the car will stop at intersections perceived as risky and ask the driver to confirm go-ahead by pressing the accelerator. If FSD Beta continues to improve, sooner or later it will cross the threshold at which it becomes safer than most human drivers.
Of course, IF I see new evidence that contradicts my perceptions, I'll change my mind again. There's no shame in changing our minds when the facts disagree with our views. FWIW, I'd love to see videos of Cruise and Waymo vehicles, filmed by tens of thousands of regular consumers driving autonomously on fully unrestricted roads, with zero editorial input from Cruise or Waymo.
With cost of lidar plummeting, I wonder how long it will be before Musk changes his mind.
Musk has shown over and over again that if something he wants is expensive, he will invest as much money as required to develop it and build it in large quantities. If Musk had thought lidar cost was the big problem, they would be building a lidar giga-factory right now. But they aren't, because not using lidar was never about cost.
He might be right or wrong about that, but he isn't going to change course on lidar just because it's now cheaply available on the market.
Tesla is far ahead in its ability to train a myriad of heavy (computer vision) AI models at the same time.
That asset will probably be worth far more than the self driving system.
> but in the long run it is much faster than anyone expects
I saw the first Moon landing.
At the time, everyone, the experts, the public, everyone, expected us to colonize the Solar System within a few decades. We expected that fusion power would be too cheap to meter and that burning fossil fuels would be a thing of the past. We expected human life expectancy would soon rise to over a century in developed countries.
Serious, respected scientists said all these things, and everyone took them for granted.
None of these things did in fact come to pass.
Humans have not ventured in person even as far as the Moon in fifty years.
Our first atomic "pile" will be 80 years old in a few months, and we still don't have a fusion reactor. The tokamak was the big fusion design 50 years ago, and it still is; we are better at building them, but we are still nowhere near actually producing real power.
Life expectancy increases stalled quite fast, and then life expectancies started to regress. Americans have lost about two years of their life, basically because of their group mistrust of medical science.
It is simply diminishing returns on the amazing discovery that is the scientific method - it is unavoidable.
> Americans have lost about two years of their life, basically because of their group mistrust of medical science.
I don't think that's the actual reason for the decline; it's more likely another effect of the real cause. While I'm not American, it's the same story all over the world: our diet keeps getting worse; there is plastic everywhere, which breaks down into microplastics that end up in the water we drink; we're preparing our food with carcinogenic utensils (everything "non-stick"); and across our Western societies, worker and academic careers went from healthy choices to mostly just getting exploited by the established players.
There have been a lot of societal changes since the post-WW2 era, and summing that up as "losing trust in medical science" is a very confusing take, especially if you consider all the medical professionals who spread outright lies for profit (as a notorious example, the person who started the antivax movement was a doctor).
And drug companies that knowingly sold bad medicines because they judged the cost of liability to be lower than the cost of not selling the drug.
Just to be clear, I'm not saying that any of these things are causing this decline either. Our societies have just changed too much since to make any confident claims about their impact.
I'm genuinely curious why you think Tesla is ahead compared to Waymo and Cruise. Autopilot struggles in my car on some fairly boring roads, while Waymo and Cruise are both operating real taxi services in vehicles without drivers. I can understand the argument that Tesla has lots of data from real world driving. But Google also has fleets of cars mapping out every road in the world.
Please keep in mind that I'm talking about FSD Beta, not the current production software, which is dozens of versions behind.
Are you on Beta 10.69.2.3?
If I had to articulate my reasons:
* Judging by the videos available online, my perception is that many situations that were impossible for Tesla FSD Beta a year ago have become uneventful in recent weeks. Take a look at Chuck Cook's videos for example (I like the fact that he always highlights the failures).
* Judging again by the videos available online, my perception is that Tesla FSD Beta has encountered and had to deal with more crazy edge cases than any other system. A possible explanation for this is that for a long time Tesla FSD Beta hasn't been geofenced or restricted only to certain types of roads, like highways. You can test it anywhere in North America.
* Tesla FSD Beta currently has 160,000 individuals testing it without road restrictions. As far as I know, no other system has been exposed to similar open-ended large-scale testing.
* Occupancy networks look like a real breakthrough to me -- DNNs that predict whether each voxel in a 3D model is occupied by an object, using only video data as input. I understood the high-level explanation of these DNNs on AI Day 2. I haven't seen anything like it from anyone else. (A rough sketch of the idea follows after this list.)
* Tesla's DOJO also looks like a breakthrough to me. I understood the high-level explanation of it on AI Day 2. IIRC, DOJO cabinets are 6x faster at training existing neural networks than Nvidia rigs, at 6x lower cost, so call it ~36x more efficient.
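To make the occupancy-network idea above a bit more concrete, here is a minimal, hypothetical sketch (this is not Tesla's code; the grid size, feature dimension, and the assumption that fused multi-camera features already exist per voxel are all made up for illustration): a tiny per-voxel head turns features into an occupancy probability.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical voxel grid around the car: 40 x 40 x 8 voxels.
    GRID = (40, 40, 8)
    FEAT_DIM = 16  # assumed per-voxel feature size from fused camera views

    # Stand-in for features produced by a vision backbone (random here).
    voxel_features = rng.normal(size=GRID + (FEAT_DIM,))

    # Tiny per-voxel MLP head: features -> occupancy logit -> probability.
    W1 = rng.normal(size=(FEAT_DIM, 32)) * 0.1
    W2 = rng.normal(size=(32, 1)) * 0.1

    def occupancy_probs(feats):
        h = np.maximum(feats @ W1, 0.0)       # ReLU hidden layer
        logits = (h @ W2)[..., 0]             # one logit per voxel
        return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> P(voxel occupied)

    probs = occupancy_probs(voxel_features)
    occupied = probs > 0.5                    # boolean 3D occupancy grid
    print(occupied.shape, occupied.mean())    # grid shape, fraction marked occupied

The real systems presumably train this end to end on labeled or auto-labeled 3D data; the point here is only the output representation: an occupancy probability for every voxel, from video alone.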
As someone who also likes to watch Chuck Cook, I don't think Tesla is close to waymo.
Tesla FSD in its current state will either crash or make some serious fuck-up if you leave it unattended for a few hours or maybe less (based on the disengagements in those videos). Forget about a driverless Tesla with the current FSD. Waymo has been operating driverless since 2019.
I do agree that it is progressing very nicely. IMO Tesla FSD needs two more years and a hardware update, and it will be there.
8 replies →
I'd sum up your points 1,2,3 as "more data". This would be a reason to think they can one day be ahead if they can take advantage of this, but not evidence that they are currently ahead.
Occupancy networks: Waymo published research on this before Tesla announced it at AI Day (though it's not clear to me who got there first: https://arxiv.org/pdf/2203.03875v1.pdf)
Tesla's Dojo -> Waymo has TPUs to train on
To me all of this is outweighed by the fact that Waymo has a driverless deployment and Tesla does not. I am pretty biased because as a Tesla owner I am pretty pissed off at this point at how the false positives on the system in detecting close following are stopping my safety score from getting high enough to even be able to access the product I purchased.
But it is pretty hard to say one way or another.
4 replies →
>haven't seen anything like it from anyone else.
Have you considered that other companies don't make it a priority to market these things? Elon knows his audience: people who will go on message boards and talk about it. Most people don't care about the underlying AI tech.
Do you think the other companies aren't making any breakthroughs? How do they have Robotaxis then?
Your entire claimed expertise seems to come from YouTube promotional videos. Maybe take a step back from marketing hype.
I really feel that you have been duped here. There is no reason to believe that Tesla Dojo even exists. At Hot Chips last year they showed some 3D renders of their supposed board. At Hot Chips this year they showed the same renders. At "AI Day" last month they showed a retarded humanoid robot. We have no basis to conclude that Dojo does, can, or will exist.
2 replies →
They’re operating Taxi services in San Francisco. A city that doesn’t experience any real-world weather, with an area of like 50 square miles where speeds generally never exceed 25MPH. They also have humans watching cameras that take over when the self driving breaks down.
It’s a completely different problem space, like claiming someone built a train and therefore they can easily build self driving cars since they are both “driverless”.
Yes, but you're choosing only one metric to evaluate on. Waymo/Cruise are level 4+. Anecdotally (does anyone have comparable data?), they also have a much lower accident rate. Solving a problem partially for all conditions and areas rather than ~completely for a specific but large area and set of conditions doesn't seem like it puts you meaningfully ahead.
Edit: and surely Waymo/Cruise could launch everywhere with performance that's lower than their current launch cities, but they choose not to. I don't think there's any compelling reason to assume their tech doesn't work outside of SF or Arizona or wherever, they just don't want to be in the news for their cars plowing someone into a highway divider or running over a pedestrian.
Aren't Waymo and Cruise constrained to specially tailored cities? Autonomous driving that can be activated literally anywhere in the country is far more impressive, even if it has cases where it doesn't perform as well.
To me actual driverless cars are more impressive than FSD which doesn't allow me to take my hands off the wheel, yes.
1 reply →
I have a couple of friends with FSD who rarely use it because it "scares [them]" by occasionally making dangerous decisions. Surely Waymo or Cruise could launch anywhere, but they make the conscious choice not to so they avoid this exact problem.
The more far-flung reaches are served well by traditional vehicles, and the people there are trained to use driving machines since most of the workers are agricultural. The city centers and suburbs are a big enough win, even if it isn't as magical as taking a self-driving trip from a small cottage in northern Vermont all the way to a campsite up in Big Sur.
1 reply →
Compared to general self-driving, it's relatively simple to make self-driving taxis work in just a couple of cities (preferably ones with minimal adversarial weather, too) - you just test the code against that particular dataset until it performs reasonably well, manually fixing the edge cases along the way if need be. It's still a monumental, multi-billion dollar project, but I would be surprised if it wasn't achievable.
Do you know that their tech is specifically optimized for those cities and won't work well elsewhere? Or is that speculation? As far as I'm aware, unless you were able to compare the performance of FSD against Waymo on an arbitrary road, you can't really make that argument.
1 reply →
Not sure about the current state of the actual ML, but compared to other self driving companies Tesla has a treasure trove of data because they have so many vehicles on the road at all hours of every day. The edge cases are the parts that are hard to identify and solve so having all that drive time data to identify edge cases would seem to give them a big advantage.
Most of self-driving is about avoiding collisions and signalling intent, especially when streets are narrow and there's merging or shared use. The physics of cars, people, bikes and kids around roads are well understood (acceleration, velocity). This can be simulated, and a game engine can generate data for virtual sensors to be trained on. There's no reason to require time on the road.
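To illustrate the simulation point, here's a toy sketch (the agents, numbers, and scenario are all made up): a constant-velocity car and a crossing pedestrian stepped forward in time, with a simple proximity check of the kind a planner could be scored against inside a game engine.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        x: float   # longitudinal position (m)
        y: float   # lateral position (m)
        vx: float  # velocity (m/s)
        vy: float

    def first_conflict(car, ped, dt=0.1, horizon=5.0, radius=1.5):
        """Step both agents forward; return the time they first get too close, else None."""
        t = 0.0
        while t < horizon:
            for a in (car, ped):
                a.x += a.vx * dt
                a.y += a.vy * dt
            if (car.x - ped.x) ** 2 + (car.y - ped.y) ** 2 < radius ** 2:
                return t
            t += dt
        return None

    # Car driving straight at 10 m/s; pedestrian crossing 30 m ahead at 1 m/s.
    car = Agent(x=0.0, y=0.0, vx=10.0, vy=0.0)
    ped = Agent(x=30.0, y=-2.0, vx=0.0, vy=1.0)
    print(first_conflict(car, ped))  # a planner would be penalized whenever this is not None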
5 replies →
If the data doesn't have the details required to build accurate models, then the data is just costing Tesla money. Since Teslas are cameras-only, with telemetry, they can replay scenarios on existing roads, but what happens when someone cones off half of the road?
1 reply →
> Moreover, just two decades later, anyone could get their genome sequenced for pocket change by companies like 23andMe, founded in 2006.
Sequencing your genome is still relatively expensive, probably around $15k. 23andMe does not sequence your genome. They look at specific regions for specific markers. This doesn't invalidate your point about how great progress has been, though.
You're right. I updated my comment. Thank you!
Whole genome sequencing is commercially available for $299.
https://nebula.org/whole-genome-sequencing-dna-test/
1 reply →
A person overestimates what he/she can do in a day and underestimates what he/she can do in a year. I read that somewhere and I often think of that. Seems also applicable to a bigger scale.
> In the short run, technological progress is much slower than anyone hopes, but in the long run it is much faster than anyone expects.
But it doesn't happen in predictable ways. A particular area of technology hits a wall and plateaus all the time, for myriad reasons. In 1950, you might have thought that by 2022 we would have done a lot more with nuclear technology or supersonic airplanes than we have.
Yeah. There's a lot of variance in the timing of milestones, which leads to people drawing all sorts of overgeneralizations from the few examples they happen to know. The most popular example being "Moore's Law and FAANG market caps prove that everything in technology develops exponentially, which is a word that means super-fast"
> My perceptions contradict this article: (1) The technology is progressing faster than is generally recognized, with vehicles getting progressively better at dealing with edge cases and handling failures gracefully. (2) Judging by the videos I've watched online, Tesla is significantly ahead of everyone else.
We must be watching different videos. And experiencing Teslas differently. I see Teslas constantly slamming on their brakes on freeways, swerving across lanes, and avoiding collisions by the narrowest margins only because their owners took control before certain death. And that's just driving around in L.A. traffic; the YouTube videos are even worse. Tesla's vaunted camera-based system still can't recognize white semis or other broad, flat obstacles that a human or radar-based system would recognize instantly.
Tesla was ahead of their competitors, several years ago. Now they're way behind, and dropping further behind with every "update" that addresses the problems that got media coverage with "solutions" that indicate brittle, manual programmer overrides rather than any sort of scalable AI-driven capability.
And it's irrelevant that Tesla has 160,000 drivers on the road "training" the system, since they selected the drivers who drive in the safest road conditions using a "safe driver" metric that has no relationship to safe driving. This means that Tesla's "AI" (to the extent it can be called that) is being overwhelmed with tons of useless data that overtrains it to drive easy roads and with almost no training for difficult conditions or edge cases. For point of comparison, most vehicles today with advanced cruise control can drive the same roads that FSD can safely drive...but they don't need advanced AI to do it.
It doesn't matter how far ahead you were at the beginning of the race, it matters how far ahead you are at the finish line.
Oh my dog.
Part of this is the fault of Tesla's marketing, but you are wildly off the mark. The cars you are seeing are on Autopilot, not FSD. Most of them are even on the older, radar-based Autopilot.
Tesla Vision has no issues detecting white semis crossing your path. Vehicles with radar, on the other hand, struggle with discerning those from overhead bridges, so if one appears close to a bridge, you're SOL due to whitelisting.
Tesla Vision in FSD is a much, much more developed version which has been excellent about detecting its environment, especially now with the new occupancy network. Its decision making needs work but you will notice, when watching all those videos, that detection of vehicles - even occluded ones - is not a problem at all.
Your comment about useless data is also wrong. They are experts in their field and they know exactly what type of data they need. Both Tesla and Karpathy himself have shown on multiple presentations that they focus on training unique/difficult situations because more data from perfect conditions is not useful to them anymore. They have shown exactly how they do it, and even showed the great infrastructure they've built for autolabeling.
Your claim about cruise control from competitors being equal to FSD is laughable. They don't even match Autopilot: https://www.youtube.com/watch?v=xK3NcHSH49Q&list=PLVa4b_Vn4g...
Going to post this here as a rebuttal, a video made by Tesla fans that shows some severe shortcomings in the current version of FSD.
https://insideevs.com/news/616509/tesla-full-self-driving-be...
TLDR: a Tesla can't identify a box in the road. It can finally identify people, but it still doesn't do a good job of avoiding them.
> Tesla Vision has no issues detecting white semis crossing your path. Vehicles with radar, on the other hand, struggle with discerning those from overhead bridges, so if one appears close to a bridge, you're SOL due to whitelisting.
Both of these statements are false. Tesla Vision still has trouble detecting white semis as of October 2022. There are no self-driving vehicles that use radar for navigation (you appear to be mixing up radar with LIDAR, which has range-sensing built in, and all of Tesla's competitors are able to tell trucks apart from bridges; truck identification failure is unique to Tesla), though many regular modern cars do use it for autobraking systems. As these systems are only intended for use at extremely short ranges directly in front of the vehicle, it's irrelevant whether the object detected is a bridge or a semi.
> Tesla Vision in FSD is a much, much more developed version which has been excellent about detecting its environment, especially now with the new occupancy network. Its decision making needs work but you will notice, when watching all those videos, that detection of vehicles - even occluded ones - is not a problem at all.
This does not match reality. At all. Teslas still regularly swerve themselves across lanes of traffic and into oncoming traffic. In a brand new Tesla acquired by a co-worker several weeks ago, Tesla FSD could not identify cyclists on the road, failed to identify a number of pedestrians crossing at a crosswalk, did not successfully distinguish between semi trucks and the open sky, and only successfully identified about 1/2 of the other cars on the road with it. Maybe the super-duper secret version of Tesla Vision performs well, but the one actually available on Tesla vehicles right now performs worse than a drunk teenager.
> Both Tesla and Karpathy himself have shown on multiple presentations that they focus on training unique/difficult situations because more data from perfect conditions is not useful to them anymore. They have shown exactly how they do it, and even showed the great infrastructure they've built for autolabeling.
This is demonstrably false; admission into the FSD program requires a safety score which cannot be achieved in areas with rough or steep roads, and is almost impossible to achieve in urban traffic, ergo, they are by definition not focusing on training unique/difficult situations. Moreover, as they still can't identify semi trucks, other cars, cyclists, or pedestrians with any reliability, the "great infrastructure" for "autolabeling" is basically just fraud.
I think the quote you are referring to in the article [a] states "At the time, that [the 500 genes already sequenced] was thought to be about 1% of the total, and given the pace of discovery, it was believed that complete sequencing of the human genome would take at least 100 years"
The 'given pace of discovery' I would read as an extrapolation using (then) available technologies, not as a scientific consensus about what would be possible using new technologies in the near future.
I remember the genome thing, but that's not an area I have any knowledge of, so I experienced it the way laypeople learn about these things.
Automation is actually my area. Self-driving cars have been in development for more than 50 years. In the 1970s it was mostly analog technology that needed a lot of space. The fact is, today's technology is not that much better than that, except that back then it was closely supervised by professionals.
Many hardware devices we have in our cars today profited from those developments, because radar and the like are now cheaply available. But self-driving is still not a thing. Even all the improvements in AI haven't really helped.
What many people seem to forget is that AI is about detecting (learned) patterns and making decisions based on those patterns. But AI is completely unpredictable on patterns it can't recognize: it will classify them into some known pattern, but that classification is random, and so the judgment is random. That is the big difference from a human, who can still cope with an unknown situation.
That is the reason why many people from the industry say real self-driving cars will take much longer than the average person thinks, and than what they were told by Elon, Uber and others.
> Prediction: Before the end of the decade, this article will seem... short-sighted.
Can we say that all the self-driving optimists that were completely wrong about the last decade were shortsighted?
Also, I have no idea what progress in biotech has to do with self-driving. Your argument is that because some tech advanced, all other possible tech will advance, which seems like a logical fallacy.
My gut feeling is that we'll solve AGI before we solve the niche case of fully autonomous vehicles.
AGI is a generalization and superset of self-driving, and it has a far wider impact and a far larger set of people researching it.
The two problems seem equivalently hard in that fully autonomous vehicles must be five nines (?) reliably safe. That's a ridiculously hard problem.
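A rough back-of-the-envelope of what "five nines" could mean, under the assumed (and debatable) framing of at most one critical failure per mile driven:

    reliability = 0.99999                            # "five nines" per mile (assumed framing)
    miles_between_failures = 1 / (1 - reliability)   # 100,000 miles
    miles_per_year = 13_500                          # roughly the average annual US mileage
    years_between_failures = miles_between_failures / miles_per_year
    print(round(miles_between_failures), round(years_between_failures, 1))  # 100000, ~7.4 years

In other words, even at that bar the average car would still see a critical failure every several years, which is why some argue the real requirement is stricter still.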
It's the 80/20 rule of project management / prediction / projection again: the last 20% is going to take as much time as the first 80%. And I would argue we haven't even reached 80% yet.
Not to mention that most of the edge-case training it has had in the US may bear little resemblance to, say, the UK or Australia. And that ignores cultural differences in places like India or China. I am not an expert in DNA, but DNA sequencing at least looks like a finite problem. Compared to that, AV in the real world, with so many edge cases, is practically an infinite problem.
> The technology is progressing faster than is generally recognized
How can you possibly make this claim when the leader of the company you say is "ahead of everyone" said they'd have FSD years ago?
Also, FYI, watching Tesla fanatics promote FSD in edited videos isn't a good way to judge the technology.
In the long run, the technological progress is also in fields people didn't predict, or entirely new fields.
A lot of ideas just never pan out, despite considerable investment. Or they get it to work, but it proves to be of niche use. Progress happens elsewhere.
Self-driving cars aren't a technological problem.
We have had self-driving trains and self-flying planes for decades. Despite this, train operators and airplane pilots aren't going away, and, if anything, are becoming even more important professions.
That's because they're in the "providing security" business, not in the "operating machinery" business. We pay pilots to take responsibility so passengers feel safe.
It's exactly the same for cars and trucks. The self-driving car racket is a scam, because they're not solving the actual problem society wants solved.
Human genome sequencing is a well defined problem, but self driving is not.
To be clear 23andMe does not do sequencing. They check for certain markers.
However full sequencing IS available and runs around $1000.
If I recall right, the first human genome sequenced cost about $1 trillion. But we now have much better algorithms, mostly thanks to algorithms from works such as Knuth's AOCP.
Really interested in how this one trillion figure came about. Is it the sum of all medical research for decades? Are you talking US dollars?
I was going from memory. The official number was $2.7 billion in 1991 dollars. So now it's $1,000 (or $299, as someone said), roughly nine million times less.
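For what it's worth, the ratios check out in nominal dollars (ignoring inflation, and ignoring that the $2.7B covered far more than one genome):

    print(2_700_000_000 / 299)    # ~9.0 million: $2.7B vs. a $299 consumer test
    print(2_700_000_000 / 1000)   # ~2.7 million: $2.7B vs. a $1,000 whole-genome sequence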