Comment by tibbar
2 days ago
It seems increasingly likely that LLM development will follow the path of self-driving cars. Early on in the self-driving car race, there were many competitors building similar solutions and leaders frequently hyped full self-driving as just around the corner.
However, it turned out to be a very difficult and time-consuming process to move from a mostly-working MVP to a system that was safe across the vast majority of edge cases in the real world. Many competitors gave up because the production system took much longer than expected to build. However, today, a decade or more into the race, self-driving cars are here.
Yet even for the winners, we can see some major concessions from the original vision: Waymo/Tesla/etc. have each strictly limited the contexts in which you can use self-driving, so it's not a 100% replacement for a human driver in all cases, and the service itself is still very expensive to run and maintain commercially, so a self-driving car isn't necessarily cheaper than a human driver. Both limitations seem likely to shrink in the years ahead: the restrictions on where you can use self-driving will gradually relax, and the costs will come down. So it's plausible that fleets of self-driving cars will be an everyday part of life for many people in the next decade or two.
If AI development follows this path, we can expect that many vendors will run out of cash before they can recoup their massive capital investments, and a few dedicated players will eventually produce AIs that can handle useful subsets of human thoughtwork in a decade or so, for a substantial fee. Perhaps in two decades we will actually have cost-effective AI employees in the world at large.
> However, today, a decade or more into the race, self-driving cars are here.
In a limited fashion, though. We don't have generalized fully autonomous vehicles just yet.