I like Steve's content, but the ending misses the mark.
With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
I say this as someone who has spent 7 years putting AI research into production, from automated hardware testing to accessibility for nonverbal users: I don't think founders need to obsess even more than they already do about implementing AI, especially in the front end.
This AI hype cycle is missing the mark by building ChatGPT-like bots and buttons with sparkles that perform single OpenAI API calls. AI applications are not a new thing; they have always been here, they are just more accessible now.
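To see how thin that wrapper usually is: this is roughly the entire feature behind many of those sparkle buttons (a minimal sketch in Python; the model name and prompts are placeholders, not taken from any particular product):

    # The "button with sparkles" pattern: the whole feature is one API call.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def sparkle_button(user_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "Summarize the user's text."},
                {"role": "user", "content": user_text},
            ],
        )
        return response.choices[0].message.content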
The best AI applications work beneath the surface to empower users; Jeff Bezos said as much (in 2016!)[1]. You don't see AI as a chatbot on Amazon; you see it in "demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations."
[1]: https://www.aboutamazon.com/news/company-news/2016-letter-to...
"With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence."
I'm missing something here. First, I thought Steve's point was that the carriage makers did not see "individual transportation" as their business, and they should have--if they had, they might have pivoted like Studebaker did.
So if "most companies are not in the field of Artificial Intelligence", that could mean that they ought to be.
However, I draw a somewhat different conclusion: the business that companies ranging from Newsweek to accountants to universities to companies' HR departments should see themselves in is intelligence, regardless of whether that's artificial or otherwise. The question then becomes which supplies that intelligence better: humans or LLM-type AI (or some combination thereof)? I'm not at all sure that the answer at present is LLM-AI, but it is a different question, and the answer may well be different in the near future.
There are of course other kinds of AI, as you (jampa) mention. In other words, AI is not (for now) one thing; LLMs are just one kind of AI.
This is a different way of saying that people must learn how to use a new technology. Think of cars, radio, the internet, or smartphones. It took a while for people to understand them, but some things are so disruptive that they eventually find their way into your life in all forms.
I'm guessing that for someone in the laundry or restaurant business it might be hard to understand how AI could change their lives. And that is true, at least at this stage in the adoption and development of AI. But eventually it will find a way into their business in some form or other.
There are stages to this. Pretty sure the first jobs to go will be the easiest. This is the case with software development too. When people say writing code has gotten easier, they are really talking about projects that were already easy to build getting even easier. The harder parts of software development are still hard. Making changes to large code bases with a huge user base comes with problems where writing code is almost irrelevant; there are bigger issues to address, like regression, testing, stability, quality, user adoption, etc.
The second stage comes once the easy stuff gets too easy to build: there is little incentive to build it. With modern building techniques we aren't building infinite huts, are we? We pivoted to building skyscrapers. I do believe most of AI's automation gains will be soaked up in the first wave, after which there will be little incentive to build the easy stuff, and the harder stuff will come with more productivity demands on people than ever before.
> First, I thought Steve's point was that the carriage makers did not see "individual transportation" as their business, and they should have--if they had, they might have pivoted like Studebaker did
But if all 400+ carriage makers had pivoted, would they have had a chance to survive very long? Would they all have made more money pivoting? The idea that all this is only a "lack of vision" rather than hard business choices is kind of annoying.
Commercial endeavors exist to provide goods and services to consumers and users.
The author's implication here is that service providers who continue using human resources rather than AI are potentially acting like the carriage manufacturers.
Of course that assumes improvements in technology, which is not guaranteed.
I can strain the analogy just enough to get something useful from it.
If we laboriously create software shops in the classical way, and suddenly a new kind of shop appears that is buggy, noisy, etc., but eventually outperforms all other shops, then the progenitors of those new shops are going to succeed while the progenitors of the old shops are not going to make it.
It's a strain. The problem is AI is a new tech that replaces an entire process, not a product. Only when the process is the product (eg the process of moving people) does the analogy even come close to working.
I'd like to see an analysis of what happened to the employees, blacksmiths, machinists, etc. Surely there were transferable skills, and many went on to work on automobiles?
This Stack Exchange question implies there was some transition rather than chaos:
https://history.stackexchange.com/questions/46866/did-any-ca...
Stretching just a bit further, there might be a grain of truth to the "craftsman to assembly line worker" when AI becomes a much more mechanical way to produce, vs employing opinionated experts.
I agree as I point out in other comments here - you said it with more detail.
AGI + robot is way beyond a mere change in product conception or implementation. It's beyond craftsmen v. modern forms of manufacturing we sometimes read about with guns.
It is a strain indeed to get from cars v. buggies to AGI. I dare say that without AGI as part and parcel of AI, the internalization of AI must necessarily be quite different.
> With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
Agreed. The analogy breaks down because the car disrupted a single vertical but AI is a horizontal, general-purpose technology.
I think this also explains why we're seeing "forced" adoption everywhere (e.g., the ubiquitous chatbot) -- as a result of:
1. Massive dose of FOMO from leadership terrified of falling behind
2. A fundamental lack of core competency. Many of these companies (I'm talking about more than just tech) can't quickly and meaningfully integrate AI, so they just bolt on a product
3. Layoffs in all but name, mainly in response to a changing tax environment. See also: RTO.
Viewed from a different angle I think he's probably close. A service provider changing the back end while leaving the front end UI similar is not dissimilar to early cars being built like carriages. But when the product can shift from "give me an app that makes it easier to do my taxes" to "keep me current on my taxes and send me status updates" that's a pretty radical difference in what the customer sees.
> But when the product can shift from "give me an app that makes it easier to do my taxes" to "keep me current on my taxes and send me status updates" that's a pretty radical difference in what the customer sees.
For a bunch of stuff - banks, online shopping, booking a taxi, etc - this shift already happened with non-LLM-based "send me notifications of unusual account activity" or even the dead-simple "send me an email about every transaction on my bank account." Phone notifications moved it from email to built-into-the-OS even.
The "LLM hype cycle" tweak becomes something like "have an LLM summarize the email instead of just listing the three transactions" which is of dubious use to the average user.
Mobility is not an analogy for AI; it's an analogy for whichever industry you work in. If you publish a magazine, you may think you're in the 'publishing' business and see AI as a weak competitor, maybe capable of squashing crappy blogs but not prestigious media like yours. But maybe what you're really in is the 'content' business, and you need to recognize that sooner or later AI is going to beat you at the content game even if it couldn't beat you at the publishing game. The kicker being that there no longer exists a publishing game, because AI.
Or more likely, you are in the publishing business but the tech world unilaterally deemed everything creative to be a fungible commodity and undertook a multi-billion dollar campaign to ingest actual creative content and compete with everyone that creates it in the same market with cheap knockoffs. Our society predictably considers this progress because nothing that could potentially make that much money could possibly be problematic. We continue in the trend of thinking small amounts of good things are not as good as giant piles of crap if the crap can be made more cheaply.
> The best AI applications are beneath the surface to empower users
Not this time, though. ChatGPT is the iPhone moment for "AI" for the masses. And it was surprising and unexpected both for the experts / practitioners and for said masses. Working with LLMs pre-GPT-3.5 was a mess: hackish and "in the background", but a way, way worse experience overall. ChatGPT made it happen, just like the proverbial "you had me at scroll and pinch-to-zoom" moment in the iPhone presentation.
The fact that we went from that 3.5 to whatever Claude Code thing you can use today is mental as well. And one of the main reasons we got here so fast is also "chatgpt-like bots and buttons with sparkles". The open-source community is ~6 months behind big-lab SotA, and that's simply insane. I would not have predicted that 2 years ago, and I was deploying open-source LLMs (GPT-J was the first one I used live in a project) before ChatGPT launched. It is insane!
You'll probably laugh at this, but a lot of the fine-tuning experimentation and gains in the open-source world (hell, maybe even at the big labs, but we'll never know) come from the "horny people" using local LLMs for erotica and such. I wouldn't dismiss anything that happens in this space. Having discovered the Internet in the 90s, and been there for every hype cycle in this space: this one is different, no matter how many anti-hype tokens get spent on this subject.
I'll spend an anti-hype token :)
ChatGPT wasn't the iPhone moment, because the iPhone wasn't quickly forgotten.
Outside of software, most adult professionals in my network had a play with ChatGPT and have long since abandoned their accounts. They can't use chatbots for work (maybe the data is sensitive, or their 'knowledge work' isn't the kind that produces text output). Our native language is too poorly supported for life admin (no Gemini summaries or 'help writing an email'). They just don't have any obvious use case for LLMs in their lives.
It may be true, but Bezos' comment is also classic smoke blowing. "Oh well, you can't see us using <newest hype machine> or quantify its success, but it's certainly in everything we do!"
But it's completely true: Amazon undoubtedly has a pretty advanced logistics setup and certainly uses AI all over the place, even if they're not a big AI researcher.
There are a lot of great use cases for ML outside of chatbots
There's a qualitative difference between "OK transport vs. better transport" and AI.
If we're going to talk cars, I think what the Japanese did to the big three in the 1980s would have been far more on point.
AI is encumbered by AGI which is further encumbered by the delta between what is claimed possible (around the corner) and what is. That's a whole different ball game with wildly different risk/reward tradeoffs.
Learning about history post-buggies didn't do much for me.
Just today I used the AI service on the Amazon product page to get more information about a specific product, basically RAG on the reviews.
So maybe your analysis is outdated?
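For readers unfamiliar with the pattern, "RAG on the reviews" just means: retrieve the most relevant review snippets, then prompt a model with them. A minimal sketch, where `embed` and `ask_llm` are hypothetical stand-ins for whatever embedding and completion APIs are in use:

    import numpy as np

    def answer_from_reviews(question, reviews, embed, ask_llm, k=3):
        """Naive RAG over product reviews: retrieve top-k snippets, then prompt."""
        q = embed(question)                     # vector for the question
        vecs = [embed(r) for r in reviews]      # one vector per review
        sims = [float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q)))
                for v in vecs]
        top = [r for _, r in sorted(zip(sims, reviews), reverse=True)[:k]]
        prompt = ("Answer using only these review excerpts:\n"
                  + "\n---\n".join(top)
                  + f"\n\nQuestion: {question}")
        return ask_llm(prompt)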
The Amazon store chatbot is amongst the worst implementations I've seen. The old UI, which displayed the customer questions and allowed searching them, was infinitely better.
Are you seriously suggesting the crappy AI bot on Amazon product pages is evidence of an 'AI' revolution? The thing sucks. If I'm ready to spend money on a product, it's worth my time to do a traditional keyword search and quickly scroll through the search returns to get the contextualized information, rather than hoping an LLM will get it right.
An Amazon AI chatbot is also the only way to request a refund after you haven't received your package.
Right. The point is that in frothy market conditions and a general low-integrity regime in business and politics, there is a ton of incentive to exploit FOMO far beyond its already "that's a stiff sip there" potency, and this leads to otherwise sane and honest people getting caught up in doing concrete things today based on total speculation about technology that isn't even proposed yet. A good way to really understand this intuitively is to take the present-day intellectual and emotional charge out of it without loss of generality: we can go back and look at Moore's Law, for example, and the history of how the sausage got made on reconciling a prediction of exponential growth with the realities of technological advance. It's a fascinating history; there's at least one great book [1], and the Asianometry YouTube documentary series on it is great as always [2].
There is no point in doing business and politics and money-motivated stuff based on the hypothetical that technology will become self-improving. If that happens, we're through the looking glass, not in Kansas anymore: "Roads? Where we're going, we won't need roads." It won't matter, or at least it won't be what you think; it'll be some crazy thing.
Much, much, much, much more likely is that this is like all the other times: we made some real progress, people got too excited, some shady people made some money, and we all sobered up and started working on the next milestone. This is by far both A) the only scenario you can do anything about and B) the only scenario honest experts take seriously, so it's a double "plan for this one".
The quiet ways that Jetson Orin devices and the like will keep getting smarter and more trustworthy about not breaking things: that's the bigger story. It will make a much bigger difference than a snazzy Google that talks back, but it's taking time, appearing in the military first, coming in fits and starts, and having all the other properties of, ya know, reality.
[1] https://www.amazon.com/Moores-Law-Silicon-Valleys-Revolution...
[2] https://www.youtube.com/@Asianometry
Let us see how this ages. The current generation of AI models will turn out to be essentially a dead end. I have no doubt that AI will eventually fundamentally change a lot of things, but it will not be large language models [1]. And I think there is no path of gradual improvement; we still need some fundamentally new ideas. Integration with external tools will help, but will not overcome the fundamental limitations. Once the hype is over, I think large language models will have a place as a simpler and more accessible user interface, just as graphical user interfaces displaced a lot of text-based interfaces, and they will be a powerful tool for language processing that is hard or impossible with more traditional tools like statistical analysis.
[1] Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and has proper memory not susceptible to hallucinating facts.
> The current generation of AI models will turn out to be essentially a dead end.
It seems a matter of perspective to me whether you call it "dead end" or "stepping stone".
To give some pause before dismissing the current state of the art prematurely:
I would already consider current LLM-based systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we have arguably reached a very important milestone already.
I would argue that the biggest limitation on current "AI" is that it is architected not to have agency; if you had GPT-3-level intelligence in an easily anthropomorphizable package (Furby-style, capable of emoting/communicating by itself), public outlook might shift drastically without any real technical progress.
I think the main thing I want from an AI in order to call it intelligent is the ability to reason: I provide an explanation of how long multiplication works, and then the AI is capable of multiplying arbitrarily large numbers. And - correct me if I am wrong - large language models cannot do this, despite probably being exposed to a lot of mathematics during training. In a strong version of this test, I would want nothing related to long multiplication in the training data.
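To be concrete about what the test asks for: the "explanation of how long multiplication works" amounts to the schoolbook algorithm, something like this sketch (the ground truth the model would have to internalize from the explanation, not a claim about how LLMs compute):

    def long_multiply(a: str, b: str) -> str:
        """Schoolbook long multiplication on decimal strings of any length."""
        da = [int(d) for d in reversed(a)]   # least-significant digit first
        db = [int(d) for d in reversed(b)]
        out = [0] * (len(da) + len(db))
        for i, x in enumerate(da):
            carry = 0
            for j, y in enumerate(db):
                total = out[i + j] + x * y + carry
                out[i + j] = total % 10
                carry = total // 10
            out[i + len(db)] += carry
        while len(out) > 1 and out[-1] == 0:  # strip leading zeros
            out.pop()
        return "".join(str(d) for d in reversed(out))

    assert long_multiply("1234", "5678") == str(1234 * 5678)

A model that has genuinely absorbed the procedure should generalize it to any number of digits, which is exactly where LLMs tend to fall over.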
Intelligence alone does not have ethical implications w.r.t. how we treat the intelligent entity. Suffering has ethical implications, but intelligence does not imply suffering. There's no evidence that LLMs can suffer (note that that's less evidence than for, say, crayfish suffering).
If you asked your cat to make a REST API call I suppose it would fail, but the same applies if you asked a chatbot to predict realtime prey behavior.
>I would already consider LLM based current systems more "intelligent" than a housecat.
An interesting experiment would be to have a robot with an LLM mind and see what things it could figure out, like would it learn to charge itself or something. But personally I don't think they have anywhere near the general intelligence of animals.
It may be that LLM-AI is a dead end on the path to General AI (although I suspect it will instead turn out to be one component). But that doesn't mean that LLMs aren't good for some things. From what I've seen, they represent a huge improvement in (machine) translation, for example. And reportedly they're pretty good at spiffing up human-written text, and maybe even generating text--provided the human is on the lookout for hallucinations (and knows how to watch for that).
You might even say LLMs are good with text in the same way that early automobiles were good for transportation, provided you watched out for the potholes and stream crossings and didn't try to cross the river on the railroad bridge. (DeLoreans are said to be good at that, though :).)
This is a surprising take. I think what's available today can improve productivity by 20% across the board. That seems massive.
Only a very small % of the population is leveraging AI in any meaningful way. But I think today's tools are sufficient for them to do so if they wanted to start and will only get better (even if the LLMs don't, which they will).
Sure, if I ask about things I know nothing about, I can get something done with little effort. But when I ask about something where I am an expert, large language models have surprisingly little to offer. And because I am an expert, it becomes apparent how bad they are, which in turn makes me hesitate to use them for things I know nothing about, because I am unprepared to judge the quality of the response. As a developer I am an expert on programming, and I think I never got anything useful out of a large language model beyond pointers to relevant APIs or standards; they are a very good tool for searching through documentation, at least up to the point where they start hallucinating.
When I wrote dead end, I meant for achieving an AI that can properly reason and knows what it knows and maybe is even able to learn. For finding stuff in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying fact that one has to double check what the model says.
I think that what's available today is a drain on productivity, not an improvement, because it's so unreliable that you have to babysit it constantly to make sure it hasn't fucked up. That is not exactly reassuring as to the future, in my view.
Isn't this entirely missing the point of the article?
> When early automobiles began appearing in the 1890s - first steam-powered, then electric, then gasoline - most carriage and wagon makers dismissed them. Why wouldn't they? The first cars were:
> - Loud and unreliable
> - Expensive and hard to repair
> - Starved for fuel in a world with no gas stations
> - Unsuitable for the dirt roads of rural America
That sounds like complaints against today's LLM limitations. It will be interesting to see how your comment ages in 5-10-15 years. You might be technically right that LLMs are a dead end. But the article isn't about LLMs really, it's about the change to an "AI" world from a non-AI world and how the author believes it will be similar to the change from the non-car to the car world.
Sorry, but to say current LLMs are a "dead end" is kind of insane if you compare them with the previous records in general AI before LLMs. Earlier language models would be happy to be SOTA on 5 random benchmarks (like sentiment or some types of multiple-choice questions), and SOTA otherwise consisted of some AIs that could play like 50 Atari games. And out of nowhere we have AI models that can do tasks which are not in the training set, pass Turing tests, tell jokes, and work out of the box on robots. It's literally an insane level of progress, and even if current techniques don't get to full human level, it will not have been a dead end in any sense.
I think large language models have essentially zero reasoning capacity. Train a large language model without exposing it to some topic, say mathematics, during training. Now expose the model to mathematics: feed it basic school books, explanations, and exercises, just like a teacher would teach mathematics to children in school. I think the model would not be able to learn mathematics this way to any meaningful extent.
This kind of just-so story is easy to write after the fact. It's harder to see the future at the time.
How many people read a version of the same story and pivoted their company to focus on Second Life, NFTs, blockchain, or whatever other technology was hyped at the time, and tanked? That's the other half of this story.
You can replicate real life, but it's kind of boring.
- 3D printing
Became a useful industrial tool, but home 3D printing never went mainstream. At one point Office Depot offered 3D printing. No longer.
- Self-driving minibuses
Several startups built these, and some were deployed. Never really caught on. You'd think that airport parking shuttles and such would use these, but they don't.
- Small gas turbines
Power for cars, buses, trucks, backup power, and other things where you need tens to hundreds of kilowatts in a small package. All those things were built and worked. But the technology never became cheap. APUs for large aircraft and the US Army's M1 tank variants remain among the few deployed applications. The frustration of turbine engines is that below bizjet size, smaller units are not much cheaper.
- 3D TV
That got far enough that 3D TV sets were in stores. But they didn't sell.
- Nuclear power
Works, mostly, but isn't really cost-effective. Failures are very expensive and require evacuating sizable areas.
- Proof of correctness for programs
After forty years, it's still a clunky process.
- Maglev trains
Works, but insanely expensive.
- The Segway
Works, but scooters do the same job with less expense.
- 3D input devices
They used to be seen at trade shows, but it turns out that they don't make 3D input easier.
The metaverse (virtual worlds) did catch on - not for virtual offices and storefronts, but people enjoy virtual worlds for competitive and cooperative gaming, virtual fashion and environment construction, chat and social interaction, storytelling, performance, etc. Mostly non-commerce recreational activities. Look at the success of Fortnite, Minecraft, World of Warcraft, etc. These share the dimension of shared recreational experiences and activities that give people a reason to spend time in the virtual world.
We have a system to which I can upload a generic video, and which captures eveeeeeerything in it: from the audio, to subtitles on screen, to skewed text on a mug, to what is going on in a scene. It can reproduce it, reason about it, and produce average-quality essays about it (and good-quality essays if prompted properly), and still there are so many people who seem to believe that this won't revolutionize most fields?
The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or detrimental to the environment, something which I have not looked sufficiently into to know about. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes which can be augmented and improved though the use of these technologies, the surface of which we've only barely scratched!
Billions are being poured into using LLMs and GenAI to solve problems, and into creating the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doomed-to-fail products, but that does not mean that the technology isn't revolutionary.
From producing music, to (in my mind) being absolutely instrumental in a new generation of education or mental health, or general support for the lonely (elderly and perhaps young?), to the service industry!...the list goes on and on and on. So much of my life is better just with what little we have available now, I can't fathom what it's going to be like in 5 years!
I'm sorry I hijacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!
So would a universal cancer vaccine, but no one is acting like it's just around the corner.
I'm old enough to remember when "big data" and later "deep data" was going to enable us to find insane multi-variable correlations in data and unlock entire new levels of knowledge and efficiency.
AI as currently marketed is just that with an LLM chatbot.
I definitely don't think so. You're seeing companies who have a lot of publicity on the internet. There are tons of very successful SMBs who have no real idea of what to do with AI, and they're not jumping on it at all. They're at risk.
There is some truth to this, but the biggest concerns I have about AI are not related to who will realize the change is coming. They are moral/ethical concerns that transcend any particular market. Things connected to privacy, creativity, authorship, inequality and the like. This means that AI isn't really the cause of these concerns, it's just the current front line of these larger issues, which have persisted across all manner of disruptions across all manner of industry.
> Even with evidence staring them in the face, carriage companies still did not pivot, assuming cars were a fad.
I like this quote, but the analogy doesn't exactly work. With this hype cycle, CEOs are getting out and saying that AI will replace humans, not horses. Unlike the artisans who previously made carriages, the CEOs saying these things have very clear motivations to make you believe the hype.
I think CEOs who think this way are a self-fulfilling prophecy of doom. If they think of their employees as cogs that can be replaced, they get cogs that can be replaced.
Moreover, there was at least one company which did pivot --- the Chevy Malibu station wagon my family owned in the mid-70s had a badge on the door openings:
>Body by Fisher
which had an image of the carriages which they had previously made.
I wonder if there is something noteworthy about Studebaker - yes, they were the only carriage maker out of 4000 to start making cars, and therefore the CEO "knew better" than the other ones.
But then again, Studebaker was the single largest carriage maker and a military contractor for the Union - in other words, they were big and "wealthy" enough to consider the "painful transformation", as the article puts it.
How many of the 3999 companies that didn't pivot actually had any capacity to do so?
Is it really a lesson in divining the future, or more survivorship bias?
Agreed. The automobile was two innovations, not one. If Ford had created a carriage assembly line in an alternate history without automobiles, how many carriage makers would he have put out of business? The United States certainly couldn't have supported 4000 carriage assembly lines. Most of those carriage makers did not have the capacity or volume to finance and support an assembly line.
Also, the auto built on some technologies that were either invented or refined by the bicycle industry: Pneumatic tires, ball bearings, improved steel alloys, and a gradual move to factory production. Many of the first paved roads were the result of demand from bicyclists.
I've listened to so many CEOs in various industries (not just tech) salivating at the potential to cut out the software engineering middlemen and make their ideas come to life (from PMs, to engineers, to managers, etc.). They truly believe the AI revolution is going to make them god's gift to the world.
I on the other hand, see the exact opposite happening. AI is going to make people even more useful, with significant productivity gains, in actuality creating MORE WORK for humans and machines alike to do.
Leaders who embrace this approach are going to be the winners. Leaders who continue to follow the hype will be the losers, although there will probably be some scam artists riding the hype cycle who win in the short term, just like with crypto.
An interesting aspect that doesn't seem captured by TFA and similar articles is that it is not a specific kind of business that is being disrupted, but rather an entire genre of labor on which they all rely to varying extents: knowledge work. Furthermore, "knowledge work" is a very broad term that encompasses an extremely broad variety of skillsets (engineering, HR, sales, legal, medical...) And knowledge workers are indeed being rapidly disrupted by GenAI.
This is an interesting phenomenon that probably has no historical equivalent and hence may not have been fully contemplated in any literature, and so comparisons like TFA fall short of capturing the full implications.
Whether these companies see themselves as AI companies seems orthogonal to the fact that they should acknowledge this sea change and adapt. Currently, however, all industries seem to think they should be an "AI company" and are responding by trying to stuff AI into any product they can. Maybe the urgency to adapt should instead be based on the degree to which knowledge work is critical to their business.
>In each of the three companies that survived, it was the founders, not hired CEOs that drove the transition.
This is how VCs destroy businesses: by bringing in adult supervision. CEOs are not incentivized to play the long game.
The difference with the mobility and transportation industry, whether by horse and carriage or motor car, is that it was in demand by 99% of the population. AI, on the other hand, is demanded by maybe 5-10% of the population. How many people truly want an AI fridge or dishwasher? They just want fresh food and clean dishes.
It's an interesting story but a weird analogy and moral. What would have been better if the other 3,999 carriage companies had all tried to make automobiles? Probably about 3,990 shitty cars and a few more mild successes. I'm not sure that's any better.
That's what I see with AI. Every company wants to suddenly "be an AI company", although few are sure what that means. Companies that were legitimately very good at a specific thing are now more interested in being mediocre at the same thing as everyone else. Maybe this will work out in the long run, but right now it's a pain in the ass.
> He founded Buick in 1904 and in 1908 set up General Motors. ... In 1910 Durant would be fired by his board. Undeterred, Durant founded Chevrolet, took it public and in 1916 did a hostile takeover of GM and fired the board. He got thrown out again by his new board in 1920 and died penniless managing a bowling alley.
Linux won on cost once it was "good enough". AI isn't free (by any definition of free) and is a long way away from "good enough" to be a general replacement for the status quo in a lot of domains.
The areas where it does make sense to use, it's been in use for years, if not longer, without anyone screaming from the rooftops about it.
By the time Linux won it was better - by 2003 you could take a workload that took eight hours on some ridiculous Sun machine and run it in 40 minutes on a Xeon box.
>- Starved for fuel in a world with no gas stations
Actually, gasoline was readily available in its rôle as fuel for farm and other equipment, and as a bottled cleaning product sold at drug stores and the like.
>- Unsuitable for the dirt roads of rural America
but the process of improving roads for the new-fangled bicycle was well underway.
This reminds me of Mary Anderson [0], who invented the windshield wiper so early that her patent expired by the time Cadillac made them standard equipment.
"We're all in on Blockchain! We're all in on VR! We're all in on self-driving! We're all in on NoSQL! We're all in on 3D printing!" The Gardner Hype Cycle is alive and well.
I don't like this article one bit, starting from the title "Missed" the Future.
It implies that it is a bad thing, or a failure, not to jump on the latest disruptive technology - reducing or pivoting away from your established business at the early stage, when the tech hasn't taken hold yet and it's not known whether it will (see: the disruptive-tech graveyard).
It's also OK to go out of business. Really disruptive technology often (usually?) spurs growth, and jobs shift, so there's no loss in aggregate. Of course, a few people who can't retrain will be left behind. For his specific example of carriage to car: there were 4000 carriage makers because they were fairly small businesses with shallow supply chains. Just a couple of car makers (and their full supply chains) dwarf the total employment of all those 4000 carriage makers.
This article is simply written with the benefit of hindsight.
Fundamentally this article is reasoning in units of “companies,” but the story is different when reasoning in terms of people.
It turns out automobile companies need way more employees than carriage companies, so the net impact on employment was positive. Then add in all the jobs around automobiles like oil, refining, fueling, repair, road construction, etc.
Do we care if companies put each other out of business via innovation? On the whole, not really. People who study economics largely consider it a positive: “creative destruction.”
The real question of LLM AI is whether it will have a net negative impact on total employment. If so, it would be the first major human technology in history to do that. In the long run I hope it does, because the human population will soon level off. If we want to keep economic growth and standards of living, we will need major advances in productivity.
Stepping back from the specifics these are stories of human nature.
We tag “complacency” as bad, but I think it’s just a byproduct of our reliance on heuristics and patterns which is evolutionarily useful overall.
On the other hand we worry (sometimes excessively) about how the future might unfold and really much of that is unknown.
Much more practical (and rewarding) to keep improving oneself or one's organisation to meet the needs of the world today, with an eye on how the world is evolving, rather than trying to be some oracle or predicting too far out (in which case you need to get both the prediction and the execution right!).
As an aside, it seems to be a recent fashion to love big bets (AI; remember the Metaverse?) and to make high-conviction statements about the future, but that has more to do with those individuals' specific circumstances and motivations.
The shift described in the article is more about craftsmanship vs. mass production (Ford's conveyor belt and so on), and disruption is not the right word, as it took place over decades. Most people who started as coach builders could probably keep their jobs as fewer and fewer new people entered the trade.
There were some classes of combustion engines that smaller shops did manufacture, such as big hot-bulb engines for ships and factories. Miniaturised combustion engines and electric motors are not suitable for craftsman-like building, but rather require standardised procedures with specialised machines.
The main mechanism is not "disruption" but rather a trend of miniaturisation and mass production.
Thing is, those companies can't do much if whole lines of business become obsolete. Behind every company there is a core competence that forms the value; the rest of the business is just a wrapper. When the core competence is worthless, the company is simply out. Even if they know it's coming, there's little they can do. In fact, the best thing they can actually do is turn the company into a cash cow to extract all the value they can here and now, stopping all investment in the future - that will probably generate enormous profits for a few years. Extract them and invest in the wider stock market.
I feel this at a personal level. I started as an Android developer and stayed one, not venturing into hybrid frameworks or even iOS, let alone backend or full stack (let's not even begin to talk about AI), while kind of always seeing this might happen. Now I see the world pass by, kind of. I don't think it's always missing the future; maybe it's a comfort-zone thing, institutional or personal? Sometimes it's just vehement refusal to believe something. I think it's just foolish hope against the incoming tidal shift.
I don't know if the problems at the company I worked for came from the CEO or from many of the powerful General Managers.
At my company, "General Manager" positions were the ones that actually set most of the planning priorities. Many of them eventually got promoted to VP and even, in the case of my former boss, Chairman of the Board.
When the iPhone came out, one of my employees got one (the first version). I asked to borrow it, and took it to our Marketing department. I said "This is gonna be trouble for us."
I was laughed out of the room. They were following the strategy set down from the General Managers, which involved a lot of sneering at the competition.
The iPhone (and the various Android devices that accompanied it), ate my company for breakfast, and picked their teeth with our ribs.
A couple of the GMs actually anticipated the issues, but they were similarly laughed out of their rooms.
I saw the same thing happen to Kodak (the ones that actually invented digital photography), with an earlier disruption. I was at a conference, hosted by Kodak, and talked to a bunch of their digital engineers and Marketing folks.
They all had the same story: They were being deliberately kneecapped by the film people (with the direct support of the C-Suite).
At that time, I knew they were "Dead Man Walking." That was in 1996 or so.
There was an excellent thread (or threads, I think) about Nokia around these parts a few months back that covered this in detail, with contributions from various commentators (perhaps you were one of them).
Wish I'd bookmarked them; some great reading in those
Enjoyed the history, but don't get the premise. Has any tech been watched more closely or adopted faster by incumbents?
> The first cars were expensive, unreliable, and slow
We can say the same about the AI features being added to every SaaS product right now. Productization will take a while, but people will figure out where LLMs add value soon enough.
For the most part, winning startups look like new categories rather than challengers beating an incumbent. Very different from the SaaS winners.
This article assumes that a company is like an organism trying to survive. In fact, a company is owned by people who want to make money, and who may well decide that the easiest way to do that is to make as much money as possible in the existing business and then shut it down.
This kind of article has to be a subgenre of business writing.
Why didn't all the carriage makers (400+) become Ford, General Motors and Chrysler?
Why didn't hundreds of catalogue sales companies become Amazon?
Why didn't hundreds of local city taxi services become Uber and Lyft?
Hint: there's hundreds on one side of these questions and a handful on the other.
Beyond the point that a future market doesn't necessarily have space for present players, the "Ooh, look how foolish, they missed the next wave" articles miss the point that present businesses exist to make money in the present, and generally do so. If you're a horseshoe maker, you may know your days are numbered, but you have equipment and you're making money. Liquidating to jump onto the next wave may not make any sense: make your product 'till demand stops, and retire. Don't reinvest, but maybe raise prices and extract all you can from the operation now. Basically, "failed to pivot" applies to startups that don't have a capital investment and an income stream from a given technology. If you have those, speculative pivoting is ignoring your fiduciary duty to protect that stuff while it's making money, even if the income stream is declining.
And sure, I couldn't even get to the part about AI; this offended the economist part of me so much...
Interestingly, my grandfather worked as a mechanic at a family-owned Chrysler car dealership for 30 years that previously sold carriages. It's in their logo and they have one on the roof.
Yes, it would have been a much better article if it told us how to be sure AI is the next automobile and not the next augmented reality, metaverse, blockchain, Segway, or fill-in-your-favorite-fad.
HN (not YC, who readily invest in blockchain companies) is usually about a decade behind regarding blockchain knowledge. Paying 2-6% of every transaction to intermediaries of varying value-add may seem sensible to you. That's fine.
Merchants aren't the customer target for credit cards, consumers are. Credit card payments are reversible and provide a reward. There are lots of options available that are better for merchants than credit cards (cash, debit cards, transfers, etc). But they all lose because the consumer prefers credit cards.
I like Steve's content, but the ending misses the mark.
With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
I say this as someone who has worked for 7 years implementing AI research for production, from automated hardware testing to accessibility for nonverbals: I don't think founders need to obsess even more than they do now about implementing AI, especially in the front end.
This AI hype cycle is missing the mark by building ChatGPT-like bots and buttons with sparkles that perform single OpenAI API calls. AI applications are not a new thing, they have always been here, now they are just more accessible.
The best AI applications are beneath the surface to empower users, Jeff Bezos says that (in 2016!)[1]. You don't see AI as a chatbot in Amazon, you see it for "demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations."
[1]: https://www.aboutamazon.com/news/company-news/2016-letter-to...
"With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence."
I'm missing something here. First, I thought Steve's point was that the carriage makers did not see "individual transportation" as their business, and they should have--if they had, they might have pivoted like Studebaker did.
So if "most companies are not in the field of Artificial Intelligence", that could mean that they ought to be.
However, I draw a somewhat different conclusion: the business that companies ranging from Newsweek to accountants to universities to companies' HR departments should see themselves in is intelligence, regardless of whether that's artificial or otherwise. The question then becomes which supplies that intelligence better: humans or LLM-type AI (or some combination thereof)? I'm not at all sure that the answer at present is LLM-AI, but it is a different question, and the answer may well be different in the near future.
There are of course other kinds of AI, as you (jampa) mention. In other words, AI is not (for now) one thing; LLMs are just one kind of AI.
This is a different way of saying, people must learn how to use a new technology. I think like cars, radio, internet or smart phones. It took a while for people to understand somethings are so disruptive, eventually it will find a way into your life in all forms.
Im guessing for someone in laundry or restaurant business it might be hard to understand how AI could change their lives. And that is true, at least at this stage in the adoption and development of AI. But eventually it will find a way into their business in some form or the other.
There are stages to this. Pretty sure the first jobs to go will be the most easiest. This is the case with Software development too. When people say writing code has gotten easier, they really are talking about projects that were already easy to build getting even more easier. Harder parts of software development are still hard. Making changes to larger code bases with a huge user base comes with problems where writing code is kind of irrelevant. There are bigger issue to address like regression, testing, stability, quality, user adoption etc etc.
Second stage is of course once the easy stuff gets too easy to build. There is little incentive to build it. With modern building techniques we aren't building infinite huts, are we? We pivoted to building sky scrapers. I do believe most of AI's automation gains will be soaked up in the first wave and there will little incentive to build easy stuff and harder stuff will have more productivity demands from people than ever before.
First, I thought Steve's point was that the carriage makers did not see "individual transportation" as their business, and they should have--if they had, they might have pivoted like Studebaker did
But all 400+ carriage maker had pivoted, would they have had a chance to survive very long? Would they have all made more money pivoting? The idea that all this is only a "lack of vision" rather than hard business choices is kind of annoying.
3 replies →
Commercial endeavors exist to provide goods and services to consumer and users.
The implication of the author here is that those providing services that continue using human resources rather than AI, are potentially acting like carriage manufacturers.
Of course that assumes improvements in technology, which is not guaranteed.
I can strain the analogy just enough to get something useful from it.
If we laboriously create software shops in the classical way, and suddenly a new shop appears that is buggy, noisy, etc but eventually outperforms all other shops, then the progenitors of those new shops are going to succeed while the progenitors of these old shops are not going to make it.
It's a strain. The problem is AI is a new tech that replaces an entire process, not a product. Only when the process is the product (eg the process of moving people) does the analogy even come close to working.
I'd like to see analysis of what happened to the employees, blacksmiths, machinists, etc. Surely there are transferrable skills and many went on to work on automobiles?
This SE q implies there was some transition rather than chaos.
https://history.stackexchange.com/questions/46866/did-any-ca...
Stretching just a bit further, there might be a grain of truth to the "craftsman to assembly line worker" when AI becomes a much more mechanical way to produce, vs employing opinionated experts.
I agree as I point out in other comments here - you said it with more detail.
AGI + robot is way beyond a mere change in product conception or implementation. It's beyond craftsmen v. modern forms of manufacturing we sometimes read about with guns.
It is a strain indeed to get from cars v.buggies to AGI. I dare say that without AGI as part and parcel to AI the internalization of AI must be necessarily quite different.
> With the carriage / car situation, individual transportation is their core business, and most companies are not in the field of Artificial Intelligence.
Agreed. The analogy breaks down because the car disrupted a single vertical but AI is a horizontal, general-purpose technology.
I think this also explains why we're seeing "forced" adoption everywhere (e.g., the ubiquitous chatbot) -- as a result of:
1. Massive dose of FOMO from leadership terrified of falling behind
2. A fundamental lack of core competency. Many of these companies companies (I'm talking more than just tech) can't quickly and meaningfully integrate AI, so they just bolt on a product
3. Layoffs in all but name, mainly in response to a changing tax environment. See also: RTO.
Viewed from a different angle I think he's probably close. A service provider changing the back end while leaving the front end UI similar is not dissimilar to early cars being built like carriages. But when the product can shift from "give me an app that makes it easier to do my taxes" to "keep me current on my taxes and send me status updates" that's a pretty radical difference in what the customer sees.
> But when the product can shift from "give me an app that makes it easier to do my taxes" to "keep me current on my taxes and send me status updates" that's a pretty radical difference in what the customer sees.
For a bunch of stuff - banks, online shopping, booking a taxi, etc - this shift already happened with non-LLM-based "send me notifications of unusual account activity" or even the dead-simple "send me an email about every transaction on my bank account." Phone notifications moved it from email to built-into-the-OS even.
The "LLM hype cycle" tweak becomes something like "have an LLM summarize the email instead of just listing the three transactions" which is of dubious use to the average user.
2 replies →
Mobility is not an analogy for AI, it's an analogy to whichever industry you work in. If you publish a magazine, you may think you're in the 'publishing' business and that AI as a weak competitor, maybe capable of squashing crappy blogs but not prestigious media like yours. But maybe what you're really in is the 'content' business, and you need to recognize that sooner or later, AI is going to beat you at the content game even if it couldn't beat you at the publishing game. The kicker being that there no longer exists a publishing game, because AI.
Or more likely, you are in the publishing business but the tech world unilaterally deemed everything creative to be a fungible commodity and undertook a multi-billion dollar campaign to ingest actual creative content and compete with everyone that creates it in the same market with cheap knockoffs. Our society predictably considers this progress because nothing that could potentially make that much money could possibly be problematic. We continue in the trend of thinking small amounts of good things are not as good as giant piles of crap if the crap can be made more cheaply.
4 replies →
> The best AI applications are beneath the surface to empower users
Not this time, tho. ChatGPT is the iphone moment for "AI" for the masses. And it was surprising and unexpected both for the experts / practitioners and said masses. Working with LLMs pre gpt3.5 was a mess, hackish and "in the background" but way way worse experience overall. Chatgpt made it happen just like the proverbial "you had me at scroll and pinch-to-zoom" moment in the iphone presentation.
The fact that we went from that 3.5 to whatever claude code thing you can use today is mental as well. And one of the main reasons we got here so fast is also "chatgpt-like bots and buttons with sparkles". The open-source community is ~6mo behind big lab SotA, and that's simply insane. I would not have predicted that 2 years ago, and I was deploying open-source LLMs (GPT-J was the first one I used live in a project) before chatgpt launched. It is insane!
You'll probably laugh at this, but a lot of fine-tuning experimentation and gains in the open source world (hell, maybe even at the big labs, but we'll never know) is from the "horny people" using local llms for erotica and stuff. I wouldn't dismiss anything that happens in this space. Having discovered the Internet in the 90s, and been there for every hype cycle in this space, this one is different, no matter how much anti-hype tokens get spent on this subject.
I’ll spend an anti-hype token :)
ChatGPT wasn’t the iphone moment, because the iphone wasn’t quickly forgotten.
Outside of software, most adult professionals in my network had a play with chatgpt and have long since abandoned their accounts. They can’t use chatbots for work (maybe data is sensitive, or their ‘knowledge work’ isn’t the kind that produces text output). Our native language is too poorly supported for life admin (no Gemini summaries or ‘help writing an email’). They just don’t have any obvious use case for LLMs in their life.
5 replies →
It may be true but Bezos' comment is also classic smoke blowing. "Oh well you can't see us using <newest hype machine> or quantify it's success but it's certainly in everything we do!"
But it’s completely true — Amazon undoubtedly has a pretty advanced logistics set up and certainly uses AI all over the place. Even if they’re not a big AI researcher.
There are a lot of great use cases for ML outside of chatbots
3 replies →
There's a qualitative difference between ok transport and better transport vs AI.
If we're going to talk cars, I think what the Japanese did to the big three in the 1980s would have been far more on point.
AI is encumbered by AGI which is further encumbered by the delta between what is claimed possible (around the corner) and what is. That's a whole different ball game with wildly different risk/reward tradeoffs.
Learning about history post buggies didn't do much for me.
Just today I used the AI service on the amazon product page to get more information about a specific product, basically RAG on the reviews.
So maybe your analysis is outdated?
The amazon store chatbot is mongst the worst implementations I've seen. The old UI which displayed the customer questions and allowed searching them was infinitely better.
2 replies →
Are you seriously suggesting the crappy AI bot on Amazon product pages is evidence of an ‘AI’ revolution? The thing sucks. If I’m ready to spend money on a product, it’s worth my time to do a traditional keyword search and quickly scroll through the search returns to get the contextualized information, rather then hoping an LLM will get it right.
An Amazon AI chatbot is also the only way to request a refund after you haven't received your packet.
Right. The point is that in frothy market conditions and a general low-integrity regime in business and politics there is a ton of incentive to exploit FOMO far beyond it's already "that's a stiff sip there" potency and this leads to otherwise sane and honest people getting caught up into doing concrete things today based on total speculation about technology that isn't even proposed yet. A good way to really understand this intuitively is to take the present-day intellectual and emotional charge out of it without loss of generality: we can go back and look at Moore's Law for example, and the history of how the sausage got made on reconciling a prediction of exponential growth with the realities of technological advance. It's a fascinating history, there's at least one great book [1] and the Asionometry YouTube documentary series on it is great as always [2].
There is no point in doing business and politics and money motivated stuff based on the hypothetical that technology will become self-improving, if that happens we're through the looking glass, not in Kansas anymore, "Roads? Where we're going, we won't need roads." It won't matter or at least it won't be what you think it'll be some crazy thing.
Much, much, much, much more likely is that this is like all the other times we made some real progress, people got too excited, some shady people made some money, and we all sobered up and started working on the next milestone. This is by so far both A) The only scenario you can do anything about and B) The only scenario honest experts take seriously, so it's a double "plan for this one".
The quiet ways that Jetson Orin devices and shit will keep getting smarter and more trustworthy to not break shit and stuff, that's the bigger story, it will make a much bigger difference than snazzy Google that talks back, but it's taking time and appearing in the military first and comes in fits and starts and has all the other properties of ya know, reality.
[1] https://www.amazon.com/Moores-Law-Silicon-Valleys-Revolution...
[2] https://www.youtube.com/@Asianometry
Let us see how this will age. The current generation of AI models will turn out to be essentially a dead end. I have no doubt that AI will eventually fundamentally change a lot of things, but it will not be large language models [1]. And I think there is no path of gradual improvement, we still need some fundamental new ideas. Integration with external tools will help but not overcome fundamental limitations. Once the hype is over, I think large language models will have a place as simpler and more accessible user interface just like graphical user interfaces displaced a lot of text based interfaces and they will be a powerful tool for language processing that is hard or impossible to do with more traditional tools like statistical analysis and so on.
[1] Large language models may become an important component in whatever comes next, but I think we still need a component that can do proper reasoning and has proper memory not susceptible to hallucinating facts.
> The current generation of AI models will turn out to be essentially a dead end.
It seems a matter of perspective to me whether you call it "dead end" or "stepping stone".
To give some pause before dismissing the current state of the art prematurely:
I would already consider current LLM-based systems more "intelligent" than a housecat. And a pet's intelligence is enough to have ethical implications, so we have arguably reached a very important milestone already.
I would argue that the biggest limitation on current "AI" is that it is architected not to have agency; if you had GPT-3-level intelligence in an easily anthropomorphizable package (furby-style, capable of emoting/communicating by itself), public outlook might shift drastically without any real technical progress.
I think the main thing I want from an AI in order to call it intelligent is the ability to reason: I provide an explanation of how long multiplication works, and then the AI is capable of multiplying arbitrarily large numbers. And - correct me if I am wrong - large language models cannot do this, despite probably being exposed to a lot of mathematics during training. In a strong version of this test I would want nothing related to long multiplication in the training data.
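(To make the test concrete: the schoolbook procedure, once explained, is entirely mechanical. A minimal Python sketch of it, added here purely as an illustration of what "following the explanation" would entail:)

    def long_multiply(a: str, b: str) -> str:
        """Schoolbook long multiplication on decimal digit strings."""
        result = [0] * (len(a) + len(b))          # enough cells for the product
        for i, da in enumerate(reversed(a)):      # each digit of a, least significant first
            for j, db in enumerate(reversed(b)):  # times each digit of b
                result[i + j] += int(da) * int(db)
        carry = 0
        for k in range(len(result)):              # single carry-propagation pass
            total = result[k] + carry
            result[k], carry = total % 10, total // 10
        digits = "".join(map(str, reversed(result))).lstrip("0")
        return digits or "0"

    # Applying the stated rules mechanically handles numbers of any size:
    assert long_multiply("123456789", "987654321") == str(123456789 * 987654321)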
Intelligence alone does not have ethical implications w.r.t. how we treat the intelligent entity. Suffering has ethical implications, but intelligence does not imply suffering. There's no evidence that LLMs can suffer (less evidence, note, than there is for, say, crayfish suffering).
If you asked your cat to make a REST API call I suppose it would fail, but the same applies if you asked a chatbot to predict realtime prey behavior.
>I would already consider LLM based current systems more "intelligent" than a housecat.
An interesting experiment would be to have a robot with an LLM mind and see what things it could figure out, like would it learn to charge itself or something. But personally I don't think they have anywhere near the general intelligence of animals.
It may be that LLM-AI is a dead end on the path to General AI (although I suspect it will instead turn out to be one component). But that doesn't mean that LLMs aren't good for some things. From what I've seen, they represent a huge improvement in (machine) translation, for example. And reportedly they're pretty good at spiffing up human-written text, and maybe even generating text--provided the human is on the lookout for hallucinations (and knows how to watch for that).
You might even say LLMs are good with text in the same way that early automobiles were good for transportation, provided you watched out for the potholes and stream crossings and didn't try to cross the river on the railroad bridge. (DeLoreans are said to be good at that, though :).)
This is a surprising take. I think what's available today can improve productivity by 20% across the board. That seems massive.
Only a very small % of the population is leveraging AI in any meaningful way. But I think today's tools are sufficient for them to do so if they wanted to start and will only get better (even if the LLMs don't, which they will).
Sure, if I ask about things I know nothing about, I can get something done with little effort. But when I ask about something where I am an expert, large language models have surprisingly little to offer. And because I am an expert, it becomes apparent how bad they are, which in turn makes me hesitate to use them for things I know nothing about, because I am unprepared to judge the quality of the response. As a developer I am an expert on programming, and I think I have never gotten anything useful out of a large language model beyond pointers to relevant APIs or standards; they are a very good tool for searching documentation, at least up to the point where they start hallucinating.
When I wrote dead end, I meant as a path toward an AI that can properly reason, knows what it knows, and maybe is even able to learn. For finding stuff in heaps of text, large language models are relatively fine and can improve productivity, with the somewhat annoying caveat that one has to double-check what the model says.
I think that what's available today is a drain on productivity, not an improvement, because it's so unreliable that you have to babysit it constantly to make sure it hasn't fucked up. That is not exactly reassuring as to the future, in my view.
Isn't this entirely missing the point of the article?
> When early automobiles began appearing in the 1890’s — first steam-powered, then electric, then gasoline — most carriage and wagon makers dismissed them. Why wouldn’t they? The first cars were: loud and unreliable, expensive and hard to repair, starved for fuel in a world with no gas stations, unsuitable for the dirt roads of rural America
That sounds like complaints against today's LLM limitations. It will be interesting to see how your comment ages in 5-10-15 years. You might be technically right that LLMs are a dead end. But the article isn't about LLMs really, it's about the change to an "AI" world from a non-AI world and how the author believes it will be similar to the change from the non-car to the car world.
Sorry, but to say current LLMs are a "dead end" is kind of insane if you compare them with the previous records at general AI before LLMs. Earlier language models would be happy to be SOTA on 5 random benchmarks (like sentiment or some types of multiple-choice questions), and SOTA otherwise consisted of some AIs that could play around 50 Atari games. And out of nowhere we have AI models that can do tasks which are not in the training set, pass Turing tests, tell jokes, and work out of the box on robots. It's literally an insane level of progress, and even if current techniques don't get to full human level, it will not have been a dead end in any sense.
Something can be much better than before but still be a dead end. Literally a dead end road can take you closer but never get you there.
I think large language models have essentially zero reasoning capacity. Train a large language model without exposing it to some topic, say mathematics, during training. Now expose the model to mathematics: feed it basic school books and explanations and exercises, just like a teacher would teach mathematics to children in school. I think the model would not be able to learn mathematics this way to any meaningful extent.
This kind of just-so story is easy to write after the fact. It's harder to see the future at the time.
How many people read a version of the same story and pivoted their company to focus on SecondLife, NFTs, blockchain or whatever else technology was hyped at the time and tanked? That's the other half of this story.
Ideas that worked but didn't catch on:
- Virtual worlds / metaverses
You can replicate real life, but it's kind of boring.
- 3D printing
Became a useful industrial tool, but home 3D printing never went mainstream. At one point Office Depot offered 3D printing. No longer.
- Self-driving minibuses
Several startups built these, and some were deployed. Never really caught on. You'd think that airport parking shuttles and such would use these, but they don't.
- Small gas turbines
Power for cars, buses, trucks, backup power, and other things where you need tens to hundreds of kilowatts in a small package. All those things were built and worked, but the technology never became cheap. APUs for large aircraft and the US Army's M1 tank variants remain among the few deployed applications. The frustration of turbine engines is that below bizjet size, smaller units are not much cheaper.
- 3D TV
That got far enough that 3D TV sets were in stores. But they didn't sell.
- Nuclear power
Works, mostly, but isn't really cost-effective. Failures are very expensive and require evacuating sizable areas.
- Proof of correctness for programs
After forty years, it's still a clunky process.
- Maglev trains
Works, but insanely expensive.
- The Segway
Works, but scooters do the same job with less expense.
- 3D input devices
They used to be seen at trade shows, but it turns out that they don't make 3D input easier.
It's quite possible to guess wrong.
Metaverse (virtual worlds) did catch on - virtual offices and storefronts didn't, but people enjoy virtual worlds for competitive and cooperative gaming, virtual fashion and environment construction, chat and social interaction, storytelling, performance, etc. Mostly non-commerce recreation activities. Look at the success of Fortnite, Minecraft, World of Warcraft, etc. These share the dimension of shared recreational experiences and activities that give people a reason to spend time in the virtual world.
I like the historical part of this article, but the current problem is the reverse.
Everyone is jumping on the AI train and forgetting the fundamentals.
AI will plausibly disrupt everything
We have a system to which I can upload a generic video, and which captures eveeeeeerything in it, from audio, to subtitles onscreen, to skewed text on a mug, to what is going on in a scene. It can reproduce it, reason about it, and produce average-quality essays about it (and good-quality essays if prompted properly), and, still, there are so many people who seem to believe that this won't revolutionize most fields?
The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or detrimental to the environment, something which I have not looked sufficiently into to know about. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes which can be augmented and improved though the use of these technologies, the surface of which we've only barely scratched!
Billions are being poured trying to use LLMs and GenAI to solve problems, trying to create the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doom-to-fail products, but that does not mean that the technology isn't revolutionary.
From producing music, to (in my mind) being absolutely instrumental in a new generation of education or mental health, or general support for the lonely (elderly and perhaps young?), to the service industry!...the list goes on and on and on. So much of my life is better just with what little we have available now, I can't fathom what it's going to be like in 5 years!
I'm sorry I hijacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!
So would a universal cancer vaccine, but no one is acting like it's just around the corner.
I'm old enough to remember when "big data" and later "deep data" were going to enable us to find insane multi-variable correlations in data and unlock entire new levels of knowledge and efficiency.
AI as currently marketed is just that with an LLM chatbot.
I definitely don't think so. You're seeing companies who have a lot of publicity on the internet. There are tons of very successful SMBs who have no real idea of what to do with AI, and they're not jumping on it at all. They're at risk.
> They're at risk.
They're at risk of what? It's easy to hand-wave about disruption, but where's the beef?
It's only a risk if there's a moat. What's the moat for jumping in early?
There is some truth to this, but the biggest concerns I have about AI are not related to who will realize the change is coming. They are moral/ethical concerns that transcend any particular market. Things connected to privacy, creativity, authorship, inequality and the like. This means that AI isn't really the cause of these concerns, it's just the current front line of these larger issues, which have persisted across all manner of disruptions across all manner of industry.
> Even with evidence staring them in the face, carriage companies still did not pivot, assuming cars were a fad.
I like this quote, but the analogy doesn't exactly work. With this hype cycle, CEOs are getting out and saying that AI will replace humans, not horses. Unlike the artisans who previously made carriages, the CEOs saying these things have very clear motivations to make you believe the hype.
I'm not sure I agree much
Cynically, there's no difference from a CEO's perspective between a human employee and a horse
They are both expenses that the CEO would probably prefer to do without whenever possible. A line item on a balance sheet, nothing more
I think CEOs who think this way are a self-fulfilling prophecy of doom. If they think of their employees as cogs that can be replaced, they get cogs that can be replaced.
Moreover, there was at least one company which did pivot --- the Chevy Malibu station wagon my family owned in the mid-70s had a badge on the door openings:
>Body by Fisher
which had an image of the carriages which they had previously made.
> the CEOs saying these things have very clear motivations to make you believe the hype
And conversely, people who fear that they might be replaced have very clear motivations to claim that AI is useless.
Great read!
I wonder if there is something noteworthy about Studebaker - yes, they were the only carriage maker out of 4,000 to start making cars, and therefore the CEO "knew better" than the other ones.
But then again, Studebaker was the single largest carriage maker and a military contractor for the Union - in other words, they were big and "wealthy" enough to consider the "painful transformation", as the article puts it.
How many of the 3,999 companies that didn't pivot actually had any capacity to do so?
Is it really a lesson in divining the future, or more survivorship bias?
Agreed. The automobile was two innovations, not one. If Ford had created a carriage assembly line in an alternate history without automobiles, how many carriage makers would he have put out of business? The United States certainly couldn't have supported 4,000 carriage assembly lines. Most of those carriage makers did not have the capacity or volume to finance and support an assembly line.
That's the part missing from TFA, there were thousands of auto 'startups', but only a handful survived the depression.
Also, the auto built on some technologies that were either invented or refined by the bicycle industry: Pneumatic tires, ball bearings, improved steel alloys, and a gradual move to factory production. Many of the first paved roads were the result of demand from bicyclists.
I've listened to so many CEOs in various industries (not just tech) salivating at the potential ability to cut out the software engineering middlemen and make their ideas come to life (from PMs to engineers to managers, etc.). They truly believe the AI revolution is going to make them god's gift to the world.
I on the other hand, see the exact opposite happening. AI is going to make people even more useful, with significant productivity gains, in actuality creating MORE WORK for humans and machines alike to do.
Leaders who embrace this approach are going to be the winners. Leaders who continue to follow the hype will be the losers, although there will probably be some scam artists who are winners in the short term who are riding the hype cycle just like crypto.
An interesting aspect that doesn't seem captured by TFA and similar articles is that it is not a specific kind of business that is being disrupted, but rather an entire genre of labor on which they all rely to varying extents: knowledge work. Furthermore, "knowledge work" is a very broad term that encompasses an extremely broad variety of skillsets (engineering, HR, sales, legal, medical...) And knowledge workers are indeed being rapidly disrupted by GenAI.
This is an interesting phenomenon that probably has no historical equivalent and hence may not have been fully contemplated in any literature, and so comparisons like TFA fall short of capturing the full implications.
Whether these companies see themselves as AI companies seems orthogonal to the fact that they should acknowledge this sea change and adapt. Currently, however, all industries seem to think they should be an "AI company" and are responding by trying to stuff AI into any product they can. Maybe the urgency for them to adapt should be based on the degree to which knowledge work is critical to their business.
If "knowledge work" is under such threat from GenAI, it is revealing what extent it is actually a euphemism for "clerical work".
>In each of the three companies that survived, it was the founders, not hired CEOs that drove the transition.
This is how VCs destroy businesses by bringing in adult supervision. CEOs are not incentivized to play the long game.
The difference with the mobility and transportation industry, whether by carriage and horse or by motor car, was that it was in demand by 99% of the population. AI, on the other hand, is demanded by maybe 5-10% of the population. How many people truly want an AI fridge or dishwasher? They just want fresh food and clean dishes.
It's an interesting story but a weird analogy and moral. What would have happened if the other 3,999 carriage companies had all tried to make automobiles? Probably about 3,990 shitty cars and a few more mild successes. I'm not sure that's any better.
That's what I see with AI. Every company wants to suddenly "be an AI company", although few are sure what that means. Companies that were legitimately very good at a specific thing are now more interested in being mediocre at the same thing as everyone else. Maybe this will work out in the long run, but right now it's a pain in the ass.
> He founded Buick in 1904 and in 1908 set up General Motors. ... In 1910 Durant would be fired by his board. Undeterred, Durant founded Chevrolet, took it public and in 1916 did a hostile takeover of GM and fired the board. He got thrown out again by his new board in 1920 and died penniless managing a bowling alley.
There is no hope, after all :(
At my workplace, when managers are done reading their business books, they go on a bookshelf in the break room.
There's an entire shelf devoted to "disruption."
From the article:
_____
The first cars were:
- Loud and unreliable
- Expensive and hard to repair
- Starved for fuel in a world with no gas stations
- Unsuitable for the dirt roads of rural America
_____
Reminds me of Linux in the late 90s. Talking to Solaris, HPUX or NT4 advocates, many were sure Linux was not going to succeed because:
- It didn't support multiple processors
- There was nobody to pay for commercial support
- It didn't support the POSIX standard
Linux won on cost once it was "good enough". AI isn't free (by any definition of free) and is a long way away from "good enough" to be a general replacement for the status quo in a lot of domains.
The areas where it does make sense to use, it's been in use for years, if not longer, without anyone screaming from the rooftops about it.
By the time Linux won it was better - by 2003 you could take a workload that took eight hours on some ridiculous Sun machine and run it in 40 minutes on a Xeon box.
>- Starved for fuel in a world with no gas stations
Actually, gasoline was readily available in its rôle as fuel for farm and other equipment, and as a bottled cleaning product sold at drug stores and the like.
>- Unsuitable for the dirt roads of rural America
but the process of improving roads for the new-fangled bicycle was well underway.
The historical part completely misses the first EV boom, from the 1890s to the 1910s, beyond mentioning that those vehicles existed.
The history of those is the big untold story here.
It doesn't help if you're betting on the right tech too early.
Clearly superior in theory, but lacking significant breakthroughs in battery research and hampered by the general spottiness of electrification in that era.
Tons of electric vehicle companies existed to promote that comparable tech.
Instead, the handful of combustion engine companies eventually drove everyone else out of the market, not least because gasoline was marketed as more manly.
https://www.theguardian.com/technology/2021/aug/03/lost-hist...
Yep. Too early is as bad as too late. The EV was invented but the supporting technology wasn't there.
Lots of ideas that failed in the first dotcom boom in the late 1990s are popular and successful today but weren't able to find a market at the time.
This reminds me of Mary Anderson [0], who invented the windshield wiper so early that her patent expired by the time Cadillac made them standard equipment.
[0] https://en.wikipedia.org/wiki/Mary_Anderson_(inventor)
Articles like this are exercises in survivor bias.
Let's see a similar story for, say, dirigibles.
History is full of examples of execs hedging on the wrong technology, arriving too early, etc.
"We're all in on Blockchain! We're all in on VR! We're all in on self-driving! We're all in on NoSQL! We're all in on 3D printing!" The Gardner Hype Cycle is alive and well.
I don't like this article one bit, starting from the title: "Missed" the Future.
It implies that not jumping on the latest disruptive technology - at the early stage where the tech hasn't taken hold yet and it's not known whether it will (see: the disruptive-tech graveyard) - and reducing or pivoting away from your established business is a bad thing, or a failure.
It's also ok to go out of business. Really disruptive technology often (usually?) spurs growth and jobs shift, so there's no loss in aggregate. Of course a few people that can't retrain will be left behind. For his specific example of carriage to car, there were 4000 carriage makers because they were fairly small businesses, with shallow supply chains. Just a couple of car makers (and the full supply chain) dwarf the total employment required for all those 4000 carriage makers.
This article is simply written with the benefit of hindsight.
Fundamentally this article is reasoning in units of “companies,” but the story is different when reasoning in terms of people.
It turns out automobile companies need way more employees than carriage companies, so the net impact on employment was positive. Then add in all the jobs around automobiles like oil, refining, fueling, repair, road construction, etc.
Do we care if companies put each other out of business via innovation? On the whole, not really. People who study economics largely consider it a positive: “creative destruction.”
The real question of LLM AI is whether it will have a net negative impact on total employment. If so, it would be the first major human technology in history to do that. In the long run I hope it does, because the human population will soon level off. If we want to keep economic growth and standards of living, we will need major advances in productivity.
Stepping back from the specifics these are stories of human nature.
We tag “complacency” as bad, but I think it’s just a byproduct of our reliance on heuristics and patterns which is evolutionarily useful overall.
On the other hand we worry (sometimes excessively) about how the future might unfold and really much of that is unknown.
Much more practical (and rewarding) to keep improving oneself or one's organisation to meet the needs of the world today, with an eye on how the world is evolving, rather than trying to be some oracle or predicting too far out (in which case you need to get both the prediction and the execution right!).
As an aside, it seems to be a recent fashion to love these big bets (AI, remember the Metaverse?) and to make high-conviction statements about the future, but that's more to do with their individual circumstances and motivations.
The shift described in the article is more about craftsmanship vs mass production (Ford's conveyor belt and so on), and disruption is not the right word, as it took place over decades. Most people who started as coach builders could probably keep their jobs as fewer and fewer new workers entered the trade.
There were some classes of combustion engines that smaller shops did manufacture, such as big hot-bulb engines for ships and factories. Miniaturised combustion engines or electric motors are not suitable for craftsman-like building; they require standardised procedures with specialised machines.
The main mechanism is not "disruption" but rather a trend of miniaturisation and mass production.
Thing is, those companies can't do much if whole lines of business become obsolete. Behind every company there is a core competence that forms the value, and the rest of the business is just a wrapper. When the core competence becomes worthless, the company is simply out. Even if they know it's coming, there's little they can do. In fact, the best thing they can actually do is turn the company into a cash cow to extract all the value they can here and now, stopping all investment in the future - that will probably generate enormous profits for a few years. Extract them and invest in the wider stock market.
I feel this at a personal level. I started as an Android developer and stayed one, not venturing into hybrid frameworks or even iOS, let alone backend or full stack (let's not even begin to talk about AI) - while kind of always seeing this might happen. Now I see the world pass me by, kind of. I don't think it's always missing the future; maybe it's a comfort-zone thing, institutional or personal? Sometimes it's just vehement refusal to believe something. I think it's just foolish hope against the incoming tidal shift.
I don't know if the problems at the company I worked for came from the CEO or from many of the powerful General Managers.
At my company, "General Manager" positions were the ones that actually set much of the planning priorities. Many of them eventually got promoted to VP and, in the case of my former boss, even Chairman of the Board.
When the iPhone came out, one of my employees got one (the first version). I asked to borrow it, and took it to our Marketing department. I said "This is gonna be trouble for us."
I was laughed out of the room. They were following the strategy set down from the General Managers, which involved a lot of sneering at the competition.
The iPhone (and the various Android devices that accompanied it), ate my company for breakfast, and picked their teeth with our ribs.
A couple of the GMs actually anticipated the issues, but they were similarly laughed out of their rooms.
I saw the same thing happen to Kodak (the ones that actually invented digital photography), with an earlier disruption. I was at a conference, hosted by Kodak, and talked to a bunch of their digital engineers and Marketing folks.
They all had the same story: They were being deliberately kneecapped by the film people (with the direct support of the C-Suite).
At that time, I knew they were "Dead Man Walking." That was in 1996 or so.
There was an excellent thread (or threads, I think) about Nokia around these parts a few months back that covered this in detail from various commentators (perhaps you were one of them).
Wish I'd bookmarked them; some great reading in those.
This one? https://news.ycombinator.com/item?id=42724761
The article seemed more apropos to the US automobile industry than SaaS.
Enjoyed the history, but don't get the premise. Has any tech been watched more closely or adopted faster by incumbents?
> The first cars were expensive, unreliable, and slow
We can say the same about the AI features being added to every SaaS product right now. Productization will take a while, but people will figure out where LLMs add value soon enough.
For the most part, winning startups look like new categories rather than those beating an incumbent. Very different than SaaS winners.
This article assumes that a company is like an organism trying to survive. In fact, the company is owned by people who want to make money, and who may well decide that the easiest way to do that is to make as much money as possible in the existing business and then shut it down.
The Innovator's Dilemma, mentioned here, is great. If you enjoyed this article, don't overlook that recommendation.
This kind of article has to be a subgenre of business writing.
Why didn't all the carriage makers (4,000+) become Ford, General Motors and Chrysler? Why didn't hundreds of catalogue sales companies become Amazon? Why didn't hundreds of local city taxi services become Uber and Lyft?
Hint: there are hundreds on one side of these questions and a handful on the other.
Beyond the point that a future market doesn't necessarily have space for present players, the "Ooh, look how foolish, they missed the next wave" articles miss the point that present businesses exist to make money in the present, and generally do so. If you're a horseshoe maker, you may know your days are numbered, but you have equipment and you're making money. Liquidating to jump into the next wave may not make any sense: make your product 'till demand stops and retire. Don't reinvest, but maybe raise prices and extract all you can from the operation now. Basically, "failed to pivot" applies to startups that don't have a capital investment and income stream with a given technology. If you have those, speculative pivoting is ignoring your fiduciary duty to protect that stuff while it's making money, even if the income stream is declining.
And sure, I couldn't even get to the part about AI, this offended the economist in me so much...
Interestingly, my grandfather worked as a mechanic at a family-owned Chrysler car dealership for 30 years that previously sold carriages. It's in their logo and they have one on the roof.
Kodak is, for me, a leading example of a leader in an industry that was unable to disrupt itself.
TV networks, relative to Netflix, are another.
And who can forget BlackBerry?
All of the owners of the TV networks moved to streaming with varying degrees of success.
- Disney has owned ABC forever and Disney+ is fairly successful
- NBC is owned by Comcast, and Comcast has moved more toward being a dumb pipe and streaming, and is divesting much of its linear TV business.
- CBS/Paramount just paid off Trump, and it is yet to be seen what will happen to it
Ironic to read this on a site that's unusable on mobile.
This somehow reminds me of Jack Dorsey and Howard Schulz.
“disruption doesn’t wait for board approval”
Great line.
Nice article, but then it ends with the brain-dead "jump on [current fad]".
If this was published a few months ago, it would be telling everyone to jump into web3.
Yes, it would have been a much better article if it told us how to be sure that AI is the next automobile and not the next augmented reality, metaverse, blockchain, Segway, or fill-in-your-favorite-fad.
Has that ended well?
HN (not YC, who readily invest in blockchain companies) is usually about a decade out of date regarding blockchain knowledge. Paying 2-6% of all your transactions to intermediaries of varying value-add may seem sensible to you. That's fine.
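(For scale, a toy calculation of what that fee range means for a merchant; the sales volume is an arbitrary assumption, not a figure from the thread:)

    # Illustrative only: annual cost of a 2-6% intermediary fee on card sales.
    annual_card_volume = 500_000  # assumed yearly card revenue in dollars

    for rate in (0.02, 0.06):
        fee = annual_card_volume * rate
        print(f"{rate:.0%} of ${annual_card_volume:,} is ${fee:,.0f} per year")
    # 2% of $500,000 is $10,000 per year
    # 6% of $500,000 is $30,000 per year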
Credit cards are not the only alternative to crypto currencies.
My bank transfers within the country cost me nothing to send or receive, for example.
Merchants aren't the customer target for credit cards, consumers are. Credit card payments are reversible and provide a reward. There are lots of options available that are better for merchants than credit cards (cash, debit cards, transfers, etc). But they all lose because the consumer prefers credit cards.
That only happens in the US. Europe has much lower credit card fees, and most countries have already figured out cashless, low-cost payments.