Comment by stevenjgarner

4 months ago

"It is 1958. IBM passes up the chance to buy a young, fledgling company that has invented a new technology called xerography. Two years later, Xerox is born, and IBM has been kicking themselves ever since. It is ten years later, the late '60s. Digital Equipment DEC and others invent the minicomputer. IBM dismisses the minicomputer as too small to do serious computing and, therefore, unimportant to their business. DEC grows to become a multi-hundred-million dollar corporation before IBM finally enters the minicomputer market. It is now ten years later, the late '70s. In 1977, Apple, a young fledgling company on the West Coast, invents the Apple II, the first personal computer as we know it today. IBM dismisses the personal computer as too small to do serious computing and unimportant to their business." - Steve Jobs [1][2][3]

Now, "IBM CEO says there is 'no way' spending on AI data centers will pay off". IBM has not exactly had a stellar record at identifying the future.

[1] https://speakola.com/ideas/steve-jobs-1984-ad-launch-1983

[2] https://archive.org/details/1983-10-22-steve-jobs-keynote

[3] https://theinventors.org/library/inventors/blxerox.htm

> IBM has not exactly had a stellar record at identifying the future.

IBM invented/developed/introduced magnetic stripe cards, UPC barcodes, the modern ATM, hard drives, floppies, DRAM, SQL, the 360 family of mainframes, the PC, Apollo guidance computers, and Deep Blue. IBM created a fair share of the future we're living in.

I'm no fan of much of what IBM is doing at the moment, but it could be argued that its consultancy/service orientation gives it a good view of how business is using, and is planning to use, AI.

  • They also either fairly accurately predicted the death of HDDs by selling off their research division before the market collapsed, or they caused the end of the HDD era by doing so. They did a lot of research.

  • The other way to look at it is that the entire consulting industry is teetering on catastrophe. And IBM, being largely a consulting company now, is not being spared.

    • IBM isn't failing, though. They're a profitable company with healthy margins, and enterprises continue to hire them for all sorts of things, in large numbers.

    • > The other way to look at it is that the entire consulting industry is teetering on catastrophe

      Oh? Where'd you get that information?

      If you mean because of AI, it doesn't seem to apply much to IBM. Like most such companies, they are probably not great at what they do, but they are respectable and can take the blame if something goes wrong. AI doesn't have those properties.

    • This is a separate argument, though. A failing company may still be right in identifying other companies' failure modes.

      You can be prescient about failure in one area and still fail yourself. There's no gotcha.

    • The whole point of a consultant is to let the execs blame someone else.

      Nobody got fired for buying something Gartner recommended, or for following EY's advice to lay off or hire.

      I don't see AI taking that blame away.

  • > IBM invented/developed/introduced magnetic stripe cards, UPC barcodes, the modern ATM, hard drives, floppies, DRAM, SQL, the 360 family of mainframes, the PC, Apollo guidance computers, and Deep Blue. IBM created a fair share of the future we're living in.

    Well put. “IBM was wrong about computers being a big deal” is a bizarre take. It’s like saying that Colonel Sanders was wrong about chicken because he, uh… invented the pressure fryer.

  • Nitpicking: IBM did not develop _the_ Apollo Guidance Computer (the one in the spacecraft with people); that was Raytheon. They did, however, develop the Launch Vehicle Digital Computer that controlled the Saturn rocket in Apollo missions. The AGC had a very innovative design, while the LVDC was more conventional for its time.

  • I've heard some secondhand stories about IBM's way of using "AI", and it is very much business-oriented, without the glamorous promises the other companies make (of course, you still have shiny new things in business terms). It's actually good entertainment hearing about all the internal struggles of business vs. fancy during the holidays.

> In 1977, Apple, a young fledgling company on the West Coast, invents the Apple II, the first personal computer as we know it today. IBM dismisses the personal computer as too small to do serious computing and unimportant to their business.

IBM released the 5100 in September 1975 [0] which was essentially a personal computer in feature set. The biggest problem with it was the price tag - the entry model cost US$8975, compared to US$1298 for the entry Apple II released in June 1977 (close to two years later). The IBM PC was released in August 1981 for US$1565 for the most basic system (which almost no one bought, so in practice they cost more). And the original IBM PC had model number 5150, officially positioning it as a successor to the 5100.

IBM’s big problem wasn’t that they were uninterested in the category - it was that they initially insisted on using expensive IBM-proprietary parts (often technology shared with their mainframe/midrange/minicomputer systems and peripherals), which resulted in a price that made the machine unaffordable for everyone except large businesses, governments, and universities (and even those customers often balked at the price tag). The secret of the IBM PC’s success is that they told the design team to use commercial off-the-shelf chips from vendors such as Intel and Motorola instead of IBM’s own silicon.

[0] https://en.wikipedia.org/wiki/IBM_5100

  • And outsourcing the operating system to Microsoft, because they didn't consider it that important.

This is the exact kind of thinking that got us into this mess in the first place, and I'm not blaming you for it; it seems to be something all of us do to an extent. We don't look to Meta, which only a few years ago thought the Metaverse would be the "next big thing", as an example of failure to identify the future; we look to IBM, which made that mistake almost 50 years ago. Underestimating a technology seems to stick much harder than overestimating one.

If you want to be seen as relevant in this industry, or as a kind of "thought leader", the easy trick seems to be to hype up everything. If you do that and you're wrong, people will quickly forget. If you don't and you're wrong, that will stain your reputation for decades.

  • Good point. That kind of thinking is an absurdity. Saying IBM dropped the ball 70 years ago amounts to nothing more than a fallacious opinion unless you grossly oversimplify everything: it ignores that lessons were learned, that leadership has changed hands a lot since then, and, most importantly, that the tech landscape back then was very different from today's.

    I'm not even much of an IBM fan myself, but I respect their considerable contribution to the industry. Sure, they missed a shot back then, but I think this latest statement is reasonably accurate based on the information we currently have.

  • It’s easy to be a pessimist. Most things don’t work. So in 9 out of 10 cases you’re right.

    But human breakthrough progress came mostly through optimists, who tried things no one else dared to do.

  • The amount of hate I've received here for similar statements is astonishing. What is even more astonishing is that it takes third-grade math skills to work out that current AI costs (even ignoring the fact that there is nothing intelligent about current AI) are astronomical, that they do not deliver on the promises, and that everyone is operating at wild losses. At the moment we are at "if you owe 100k to your bank, you have a problem, but if you owe 100M to your bank, your bank has a problem". It's the exact same bullshitter economy that people like Musk have been exploiting for decades: promise a ton, never deliver, make a secondary promise for "next year", rinse and repeat -> infinite profit. Especially when you rope in fanatical followers.

    • I don't want to defend Musk in any way, but I think you are making a mistake using him as an example, because what boosted him quite a lot is that he actually delivered what he claimed. Always late, but still earlier than anybody was guesstimating. And now he is completely spiraling, but it's a lot harder to lose a billion than to gain one, so he persists and even gets richer. Plus, his "fanatical" followers are poor. It just doesn't match the situation.

  • > We don't look to Meta, which only a few years ago thought the Metaverse would be the "next big thing", as an example of failure to identify the future; we look to IBM, which made that mistake almost 50 years ago.

    The grandparent points to a pattern of failures, whereas you point to Meta’s big miss. What you miss about Meta, and I am no fan, is that Facebook purchased WhatsApp and Instagram.

    In other words, two out of three ain’t bad; IBM is zero for three.

    While that’s not the thrust of your argument, which is about the problem of jumping on every hype train, the post to which you reply is not about the hype cycle. Rather, that post calls out IBM for a failure to understand the future of technology, and does so by pointing to a history of failures.

    • > In other words, two out of three ain’t bad; IBM is zero for three.

      Many others in this thread have pointed out IBM's achievements but regardless, IBM is far from "zero for three".

Got anything vis-a-vis the message as opposed to the messenger?

I'm not sure these examples are even the gotchas you're positing them as. Xerox is a dinosaur that was last relevant at the turn of the century, and IBM is a $300bn company. And if it wasn't obvious, the Apple II never made a dent in the corporate market, while IBM and later Windows PCs did.

In any case, these examples are almost half a century old and don't relate to capex ROI, which was the topic of discussion.

  • If it's not obvious, Steve's quote is ENTIRELY about capex ROI, and I feel his quote is more relevant to what is happening today than anything Arvind Krishna is imagining. The quote is posted in my comment not to grandstand for Apple in any sense, but to show just how consistently wrong IBM has been about so many opportunities that they failed to read correctly, reprography, minicomputers, and microcomputers being just three.

    Yes, it is about ROI: "IBM enters the personal computer market in November ’81 with the IBM PC. 1983: Apple and IBM emerged as the industry’s strongest competitors, each selling approximately one billion dollars worth of personal computers in 1983. Each will invest greater than fifty million dollars for R&D and another fifty million dollars for television advertising in 1984, totaling almost one quarter of a billion dollars combined. The shakeout is in full swing. The first major firm goes bankrupt, with others teetering on the brink. Total industry losses for ’83 overshadow even the combined profits of Apple and IBM for personal computers."

    • I have no horse in this race.

      I don’t think this is really a fair assessment. IBM is in fact a huge company today and it is possible that they are because they took the conservative approach in some of their acquisition strategy.

      It is a bit like watching someone play poker and fold, and then it turns out they had the high hand after all. In hindsight you know the risk would have been worth it, but in the moment perhaps it did not seem that way, given the money the player would be risking.

    • A big difference is that in the past, things like the potential of the PC were somewhat widely underestimated. And then the internet was as well.

      But in modern times it's rather the opposite scenario. The average entity is diving head first into AI, simply expecting a revolutionary jump in capability that a more 'informed' perspective, for lack of any less snooty term, would suggest is quite unlikely to occur anytime in the foreseeable future. Basically, we have a modern-day gold rush where companies are taking out unbelievably massive loans to invest in shovels.

      The only way this doesn't catastrophically blow up is if AI companies manage to convince the government they're too big to fail, and get the Boeing, banks, et al. treatment. And I expect that's exactly the current strategy, but that's rather a high-risk, low-reward type of strategy.

    • I have no special knowledge about IBM vs. Apple historically, but a quarter billion in capex when you've earned a billion in revenue in a single year is extremely different from what we're seeing now. These companies are spending all of their free cash flow, then taking on debt, to the tune of percentage points of world GDP, and multiples of any revenue they've seen so far. That kind of oversupply is a surefire way to kill any ROI.

  • >the message as opposed to the messenger?

    Exactly.

    The message is plain to see with very little advanced math.

    The only news is that it is the CEO of IBM saying it out loud.

    IMHO he has some of the most credible opinions at this scale that many people have seen.

    It's "highly unlikely" that all this money will be paid back to everyone who invested at this point. The losers will probably outnumber the winners, and nobody knows yet whether it will end up becoming a winner-take-all situation. A number of wealthy players remain at the table, raising the stakes with each passing round.

    It's so much money that it's already too late to do anything about it, and the full amount hasn't even changed hands yet.

    And the momentum from something so huge can mean that almost the entire amount will have to change hands a second time before a stable baseline can be determined relative to pre-existing assets.

    This can take longer than anyone gives credit for, simply because of the massiveness. In the meantime, established, real near-term growth opportunities may languish or even fade as the skew in the rationality/solvency balance awaits the rolling dice coming to rest.

  • > Got anything vis-a-vis the message as opposed to the messenger?

    Sure: people disagree. It's not like there is anything particularly clever that IBM's CEO provided here. A guy not investing in something saying it won't work is about as good as the people who do saying it will. It's simply different assumptions about the future.

  • Would you read this if I (a nobody) told you, and not the "CEO of IBM"? In that case, it's completely fair to question the messenger.

I read the actual article.

He is pointing out that the current costs to create the data centres mean you will never be able to make a profit that covers those costs: $800 billion just to cover the interest.

OpenAI is already haemorrhaging money, and the idea of space data centres has already been debunked. There is even a recent paper that points out that LLMs will never become AGI.

The article also finishes with some other experts reaching the same conclusions.

[edit] Fixed $80 to $800

  • Sorry to say, but the fact that you argue that LLMs will never become AGI shows you are not up to date.

    People don't assume LLMs will be AGI; people assume that world models will lead us to AGI.

    I personally never assumed LLMs would become AGI. I always assumed that LLMs broke the dam for investment and research into massive-scale ML compute, and that LLMs are very, very good at showing where the future goes, because they are already so crazy good that people can now imagine a future where AGI exists.

    And that was already very clear as soon as GPT-3 came out.

    The next big thing will probably be either a LOT more RL or self-propelling AI architecture discovery. Both need massive compute to work well, but then they will potentially provide even faster progress as soon as humans are out of the loop.

    • > People don't assume LLM will be AGI,

      I wish that was true.

      > people assume that World Models will lead us to AGI.

      Who are these people? There is no consensus around this that I have seen. You have anything to review regarding this?

      > as soon as GPT-3 came out.

      I don't think that was true at all. It was impressive when it came out, but people in the field clearly saw the limitations and what it is.

      RL isn't magical either. Google's AlphaGo, as an example, often required human intervention to get the RL to work correctly.

    • Are OpenAI or Anthropic et al. seriously building towards “world models”? I haven’t seen any real evidence of that. It seems more like they are all-in on milking LLMs for all they are worth.

IBM is an interesting beast when it comes to business decisions. While I can't give exact details, their business intelligence and ability to predict monetary things are uncannily spot-on at times.

So, when their CEO says that this investment will not pay off, I tend to believe them, because they most probably have the knowledge, insight, and data to back that claim, and they have run the numbers.

Oh, also, please let's not forget that they dabbled in "big AI" before everyone else. Does anyone remember Deep Blue and Watson, the original chatbot backed by big data?

  • As evidenced by the fact that they are a 100+ year old company that still exists. People forget that.

We can cherry-pick blunders made by any big company to make a point. Maybe it would be more honest to also list companies IBM passed on that turned out to be rubbish? And all the technologies that IBM did invest in that made them a ton of money and became industry standards?[0]

Today, Xerox has less total revenue than IBM has profit. DEC went out of business 27 years ago. Apple is in an astoundingly great place right now, but Jobs got kicked out of his own company, and then returned when it was about to fail, having to take investment from Microsoft(!) in order to stay afloat.

Meanwhile, IBM is still here, making money hand over fist. We might not have a ton of respect for them, being mostly a consulting services company these days, but they're doing just fine.

[0] As another commenter points out: https://news.ycombinator.com/item?id=46131245

Were Xerox, DEC, or Apple burning investor money by the billions of dollars?

  • > Were Xerox, DEC, or Apple burning investor money by the billions of dollars?

    Shhh. You are not allowed to ruin OpenAI’s PPU value. Can’t make the E7’s feel bad.

  • No, but the comment above and variations of it appear in every thread about IBM, so it's probably just a reflex at this point, without much thought behind it.

  • Xerox is clearly crushing it in 2025... /s

    • I'm typing this comment from an Apple MacBook, whose interface is a direct result of Xerox PARC allowing Steve Jobs to view the Alto. Xerox was extremely innovative at that time, and with the right leadership, could have become #1 in personal computing.

    • That's completely beside the point, though. Kodak invented the digital camera, thought nothing of it, and others then ate their lunch. Those others are also not crushing it in 2025. The point is that IBM is not the go-to source to listen to about AI. That's not to say they are wrong; even a broken clock is right twice a day.

DEC went down the drain, and Xerox is 1/1000 of IBM's market cap. IBM made its own personal computer, superior thanks to its relative openness, that ended up running the world, mostly maintaining direct binary compatibility for 40+ years, even without IBM really paying attention.

  • How much did IBM itself benefit from the PC? I thought the clones ate their lunch there.

    • Wikipedia says their PC revenue was twice Apple's by 1984 at $4 billion/year. Not bad for a side hustle?

      My understanding is that clones were a net positive, just like widespread Windows/Office piracy is a net positive for MS.

What does that have to do with the current CEO's assessment of the situation?

  • [flagged]

    • A revolution means radical changes executed over a short period of time. Well, four years in, this has got to be one of the smallest "revolutions" we have ever witnessed in human history. Maybe it's revolutionary for people who get excited about crappy pictures they can insert into their slides to impress management.

  • IBM sees the funding bubble bursting and the next wave of AI innovation as about to begin.

    IBM was too early with "Watson" to really participate in the 2018-2025 rapid scaling growth phase, but they want to be present for the next round of more sensible investment.

    IBM's CEO is attempting to poison the well for funding, startups, and other ventures so IBM can collect itself and take advantage of any opportunities to insert itself back into the AI game. They're hoping timing and preparation pay off this time.

    It's not like IBM totally slept on AI. They had Kubernetes clusters with GPUs. They had models and notebooks. But their offerings were the absolute worst. They weren't in a position to service real customers or build real products.

    Have you seen their cloud offerings? Ugh.

    They're hoping this time they'll be better prepared. And they want to dunk on AI to cool the playing field as much as they can. Maybe pick up an acquisition or two on the cheap.

    • How exactly are they poisoning the well? OpenAI committed to $1.4 trillion in investments... with revenue of ~$13B. How is IBM's CEO contributing to that already thoroughly poisoned situation? Steve Jobs did not care about naysayers when he introduced the iPhone, because his product was so innovative for its time. According to AI boosters, we now have a segment of supposedly incredibly powerful and at the same time "dangerous" AI products. Why are they not wiping the floor with the "negators", "luddites", "laggards", etc., after so many hundreds of billions of dollars and supposedly so many "smart" AI researchers? Where are the groundbreaking results, man? Where are the billion-dollar startups launched by single persons (heck, I'd settle even for a small team)? Where are the ultimate applications, etc.?

50-year grudges are not relevant; there is no one still at IBM who worked there in 1977, IMHO.

  • It’s the ship of Theseus in corporate form. Even if all the people are gone but the culture hasn’t changed, is the criticism inaccurate?

    • > Even if all the people are gone but the culture hasn’t changed

      Can you expand on this? What was the culture then versus now?

      For example, back then it was the culture to have suit inspectors ensure you had the right clothes on, and even measure your socks. (PBS, Triumph of the Nerds)

    • I mean, okay, but you're taking the current leadership's words and claiming they are incorrect because IBM management was not great at identifying trends decades ago. Historical trends are not an indicator of the future, and it's not engaging in good faith with the conversation about whether overspending on AI can be backed by future revenue. You're attacking the messenger instead of the message.

  • Culture evolution can be very fast, yet some cultures stick around for a very long time.

"The amount being spent on AI data centres not paying off" is a different statement from "AI is not worth investing in". They're effectively saying that the amounts people are investing are disproportionately large relative to what the returns will end up being.

It's a difficult thing to predict, but I think there's almost certainly some wasteful competition here. And some competitors are probably going to lose hard. If models end up being easy to switch between, and the best model is significantly better than its competitors, then anything invested in weaker models will effectively be for nothing.

But there's also a lot to gain from investing in the right model. Even so, it's possible that those who invested in the winner may have to wait a long time to see a return on their investment, and they could still over-allocate their capital at the expense of other investment opportunities.

> IBM has not exactly had a stellar record at identifying the future.

This would be very damning if IBM had only considered three businesses over the course of seventy years and made the wrong call each time.

This is like only counting three times that somebody got food poisoning and then confidently asserting that diarrhea is part of their character.

Right, you just missed the part where DEC went out of business in the 90s. And IBM is still around, with a different business model.

Steve Jobs, the guy who got booted out of his own company and who required a lifeline from his arch-nemesis to survive?

This is all true, but it was only true in hindsight and as such does not carry much value.

It's possible that you are right and AI is 'the future', but with the present-day AI offering I'm skeptical as well. It isn't at a level where you don't have to be constantly on guard against BS, and in that sense it's very different from computing so far, where reproducibility and accuracy of the results were important, not the language that they are cast in.

AI has killed the NLP field, and it will probably kill quite a few others, but for the moment I don't see it as the replacement for general computing that the proponents say it is. Some qualitative change is still required before I'm willing to check off that box.

In other news: Kodak declares digital cameras a fad, and Microsoft saw the potential of the mp3 format and created a killer device called the M-Pod.

But how many companies did IBM pass on that did crash and burn? And how many did it not pass on that did decently? They're still around after more than three generations' worth of tech industry. They're doing something right.

TL;DR: Cherry-picking.

You, or your existence, probably triggers multiple transactions per day through a POWER mainframe without you even knowing it. Their mainframes handle the critical infrastructure that can't go down. It's so reliable we don't even think about it. I shudder to think about Microsoft or Apple handling that.

How about checking out how many companies exist today vs. how many existed in 1958? If you look at it that way, then just surviving is an achievement in itself, and you might interpret their actions as extremely astute business acumen.

IBM is still alive and kicking, and definitively more relevant than Xerox or DEC. You are completely misconstruing Jobs’ point to justify the current AI datacenter tulip fever.

This isn't even a great argument at a literal level. Nowadays nobody cares about Xerox, and their business is selling printers; DEC was bought by Compaq, which was bought by HP. Apple is important today because of phones, and it was itself struggling to sell personal computers and needed an (antitrust-motivated) bailout from Microsoft to survive the transition.

Cool story, but it’s more than just the opinion of this CEO. It’s logic.

Hardware is not like building railroads: the hardware is already out of date once deployed, and the clock has started ticking on writing off the expense or turning a profit on it.

There are fundamental discoveries needed to make the current tech financially viable and an entire next generation of discoveries needed to deliver on the over inflated promises already made.

You could try addressing the actual topic of discussion instead of this inflammatory and lazy "dunk" format, which, frankly, doesn't reflect favorably on you.

For some strange reason, a lot of people were attracted to a comment that speaks about everything else BUT the actual topic, and it's the top comment now. Sigh.

If you think that carefully chosen anecdotes out of many, many more are relevant, there needs to be at least an attempt at reasoning. There is nothing here. It's really just a barebones mentioning of stuff intentionally selected to support the preconceived point.

I think we can, and should, do better in HN discussions, no? This is "vibe commenting".

The idea that a company's DNA somehow lives on over 100 years and maintains the same track record is far-fetched.

That the OpenAI tech bros are investing in AI using grown-up ROI calculations is similarly far-fetched; they are burning money to pull ahead of the rest, assuming the world will be in the palm of the winner and there is only one winner. Will the investment pay off if there are three neck-and-neck companies?

I’m sorry, but this is stupid; you understand that you have several logical errors in your post? I was sure Clinton was going to win in 2016. Does that mean that when I say 800 is bigger than 8, I'm not to be trusted?

Do people actually think that running a business is some magical realism where you can manifest yourself to become a billionaire if you just believe hard enough?

  • The post is almost worse than you give it credit for. It doesn't even take into account that different people are making the decisions.