Comment by dontupvoteme

2 years ago

He's not wrong. DeepMind spends its time solving big scientific and large-scale problems, such as those in genetics, materials science, or weather forecasting, and Google has untouchable resources such as all the books they've scanned (and already won court cases about).

They do make OpenAI look like kids in that regard. There is far more to technology than public-facing goods/products.

It's probably in part due to the cultural differences between London/UK/Europe and Silicon Valley/California/USA.

While you are spot on, I can't help thinking of the late '90s.

In one corner: IBM's Deep Blue beating Kasparov. A world-class giant with huge research experience.

In the other corner: Google, a feisty newcomer barely two years old, leveraging the tech to actually make something practical.

Is Google the new IBM?

  • I don’t think Google is the same as IBM here. I think Google’s problem is its insanely low attention span. It frequently releases innovative and well-built products, but seems to quickly lose interest. Google has become somewhat notorious for killing off popular products.

    On the other hand, I think IBM’s problem is its finance focus and long-term decay of technical talent. It is well known for maintaining products for decades, but when’s the last time IBM came out with something really innovative? It touted Watson, but that was always more of a gimmick than an actually viable product.

    Google has the resources and technical talent to compete with OpenAI. In fact, a lot of GPT is based on Google’s research. I think the main things that have held Google back are questions about how to monetize effectively, but it has little choice but to move forward now that OpenAI has thrown down the gauntlet.

    • In addition, products that seem like magic at launch get worse over time instead of better.

      I used to do all kinds of really cool routines and home-control tasks with Google Home, and it could hear and interpret my voice at a mumble. I used it as an alarm clock, to-do list, calendar, grocery list, and lighting control, to give me weather updates, set timers, etc. It just worked.

      Now I have to yell unnaturally loudly for it to even wake, and even then the simplest commands have a 20% chance of throwing “Sorry, I don’t understand” or playing random music. Despite having a device in every room, it has lost the ability to detect proximity and will set timers or control devices across the house. I don’t trust it enough anymore for timers and alarms, since it will often confirm what I asked and then simply… not do it.

      Ask it to set a 10 minute timer.

      It says ok setting a timer for 10 minutes.

      Three minutes later, ask it how long is remaining on the timer. A couple of years ago it would say “7 minutes”.

      Now there’s a good chance it says I have no timers running.

      It’s pathetic, and I would love any insight into the decay. (And yes, they’re clean; the mics are as unobstructed as they were out of the box.)

      6 replies →

    • > its insanely low attention span. It frequently releases innovative and well-built products, but seems to quickly lose interest. Google has become somewhat notorious for killing off popular products.

      I understood this problem to be "how it manages its org chart and maps that onto the customer experience."

      5 replies →

    • Along with your thoughts, I feel that Google's problem has always been over-promising. (There are even comedy skits about it.)

      That starts with the demonstrations, which show really promising technology, but what eventually ships doesn't live up to the hype (or often doesn't ship at all).

      It continues with not managing the products well when users have problems with them, and not supporting ongoing development, so they suffer decay.

      It finishes with Google killing established products that aren't useful to the core mission or data-collection purposes. Products that are money makers take on a new kind of financially optimised decay, as seen with Search and more recently with Chrome and YouTube.

      I'm all for sunsetting redundant tech, but Google has a self-harm problem.

      The cynic in me feels that part of Google's desire to over-promise is to take the excitement away from companies which ship* what they show. This seems to align with Pichai's commentary: it's about appearing the most eminent, but not necessarily supporting that view by shipping products.

      * The Verge is already running an article about what was faked in the Gemini demo, and if history repeats itself this won't be the only thing they misrepresented.

    • Google has one major disadvantage - it's an old megacorporation, not a startup. OpenAI will be able to innovate faster. The best people want to work at OpenAI, not Google.

      1 reply →

  • I think the analogy is kind of strained here - at the current stage, OpenAI doesn't have an overwhelming superiority in quality in the same way Google once did. And, if marketing claims are to be believed, Google's Gemini appears to be no mere publicity stunt. (Not to mention that IBM's "downfall" isn't really related to Deep Blue in the first place.)

    • > OpenAI doesn't have an overwhelming superiority in quality in the same way Google once did

      The comparison is between a useful shipping product available to everyone for a full year vs a tech demo of an extremely limited release to privileged customers.

      There are millions of people for whom OpenAI's products are broadly useful, and the specifics of where they fall short compared to Gemini are irrelevant here, because Google isn't offering anything comparable that can be tested.

    • I'd say IBM's downfall was directly related to failing to monetize Deep Blue (and similar research) at scale.

      At the time, I believe IBM was still "we'll throw people and billable hours at a problem."

      They had their lunch eaten because their competitors realized they could undercut IBM on price if they changed the equation to "throw compute at a problem."

      In other words, sell prebuilt products instead of lead-ins to consulting. And harness advertising to offer free products that drive scale and generate profit (e.g., Google Search).

      1 reply →

  • It's an interesting analogy. I think Google's problem is how disruptive this is to their core product's monetization strategy. They have misaligned incentives around how quickly they want to push this tech out vs. waiting for it to be affordable with ads.

    Whereas for OpenAI there are no such constraints.

    Did IBM have research with impressive web reverse-indexing tech that they didn't want to push to market because it would hurt their other business lines? It's not impossible... It could be as innocuous as discouraging some research engineer from such a project to focus on something more in line with the business.

    This is why I believe businesses should be absolutely willing to disrupt themselves if they want to avoid going the way of Nokia. I believe Apple should make a standalone Apple Watch that cannibalizes their iPhone business instead of tying it to, and trying to prop up, the iPhone business (of course shareholders won't like it). While this looks good from Google, I think they are still sandbagging... why can't I use Bard inside their other products instead of the silly export thing?

  • OpenAI was at least around in 2017 when YCR HARC was closed down (because... the priority would be OpenAI).

  • Google is the new IBM.

    Apple is the new Nokia.

    OpenAI is the new Google.

    Microsoft is the new Apple.

    • No, because OpenAI and Microsoft both have “CUSTOMER NONCOMPETE CLAUSES” in their terms of use. I didn’t check Apple, but Google doesn’t have any shady monopolistic stuff like that.

      Proof OpenAI has this shady monopolistic stuff: https://archive.ph/vVdIC

      “What You Cannot Do. You may not use our Services for any illegal, harmful, or abusive activity. For example, you may not: […] Use Output to develop models that compete with OpenAI.” (Hilarious how that reads btw)

      Proof Microsoft has this shady monopolistic stuff: https://archive.ph/N5iVq

      “AI Services. ‘AI services’ are services that are labeled or described by Microsoft as including, using, powered by, or being an Artificial Intelligence (‘AI’) system. Limits on use of data from the AI Services. You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service.”

      That 100% does include GitHub Copilot, by the way. I canceled my sub. After I emailed Satya, they told me to post my “feedback” in a forum for issues about Xbox and Word (what a joke). I emailed the FTC Antitrust team. I filed a formal complaint with the office of the attorney general of the state of Washington.

      I am just one person. You should also raise a ruckus about this and contact the authorities, because it’s morally bankrupt and almost surely unlawful by virtue of extreme unfairness and unreasonableness, in addition to precedent.

      AWS, Anthropic, and NVIDIA also all have similar Customer Noncompete Clauses.

      I meekly suggest everyone immediately and completely boycott OpenAI, Microsoft, AWS, Anthropic, and NVIDIA, until they remove these customer noncompete clauses (which seem contrary to the Sherman Antitrust Act).

      Just imagine a world where AI can freely learn from us, but we are forbidden to learn from AI. Sounds like a boring dystopia, and we ought to make sure to avoid it.

      19 replies →

    • I have considered Oracle and MS to be competing for the title of new IBM. Maybe MS is shaking it off with their AI innovation, but I think a lot of that is just lipstick.

  • Hmm, what was that tech from IBM deep blue, that apparently Google leveraged to such a degree?

    Was it “machine learning”? If so, I don’t think that was actually the key insight for Google Search… right? Did Deep Blue even use machine learning?

    Or was it something else?

    • Deep Blue was the name of the computer itself rather than the software, but to answer your question: it didn't use machine learning; its program was written and tweaked by hand. It contained millions of games and positions, and it worked by evaluating all possible moves to a certain depth. As far as I know, practical machine-learning implementations wouldn't be a thing until a good while after Deep Blue.
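
      Roughly what that looks like, as a toy sketch (not Deep Blue's actual code): fixed-depth search over every legal move, plus a hand-written evaluation function at the cutoff, with no learning anywhere. The game below (take 1-3 stones, taking the last stone wins) and its heuristic are made up purely for illustration.

        # Fixed-depth minimax with a hand-tuned evaluation function -- the
        # general technique, illustrated on a tiny take-away game rather
        # than chess. Nothing here is learned from data.

        def evaluate(stones, maximizing):
            # Hand-written heuristic: a pile that is a multiple of 4 is
            # lost for whoever has to move. Scores are from the
            # maximizing player's point of view.
            losing_for_mover = (stones % 4 == 0)
            return -1 if losing_for_mover == maximizing else 1

        def search(stones, depth, maximizing):
            if stones == 0:
                # The previous player took the last stone and won.
                return -1 if maximizing else 1
            if depth == 0:
                # Depth cutoff: fall back to the hand-written heuristic.
                return evaluate(stones, maximizing)
            moves = [m for m in (1, 2, 3) if m <= stones]
            scores = [search(stones - m, depth - 1, not maximizing) for m in moves]
            return max(scores) if maximizing else min(scores)

        def best_move(stones, depth=6):
            # Try every legal move and keep the one with the best score.
            moves = [m for m in (1, 2, 3) if m <= stones]
            return max(moves, key=lambda m: search(stones - m, depth - 1, False))

        print(best_move(10))  # prints 2: leaves 8, a losing pile for the opponent

      Deep Blue's real program was of course vastly larger and chess-specific, with pruning and custom hardware, but the shape is the same: search plus a hand-tuned evaluation, no training.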

      1 reply →

Oh, it's good they're working on important problems with their AI. It's just that OpenAI was working on my/our problems (or providing tools to do so), and that's why people are more excited about them. Not because of cultural differences. If you are more into weather forecasting, yeah, it may well be reasonable to prefer Google.

  • Stuff like AlphaFold has had, and will have, a huge impact on our lives, even if I am not into spending time folding proteins myself. It is absurd to make this sort of comparison.

  • That’s what makes Altman a great leader. He understands marketing better than many of these giants. Google got caught being too big. Sure, they will argue that a mass AI release is a dangerous proposition, but Sam had to make a big splash; otherwise he would be competing against incumbent marketing spending far greater than OpenAI could afford.

    It was a genius move to go public with a simple UI.

    No matter how stunning the tech side is, if human interaction is not simple, the big stuff doesn’t even matter.

That statement isn't really directed at the people who care about the scientific or tech-focused capabilities. I'd argue that most folks interested in those things already know about DeepMind.

This statement is for the mass-market MBA types. More specifically, middle managers and dinosaur executives who barely comprehend what generative AI is, and who value perceived stability and brand recognition over the bleeding edge, for better or worse.

I think the sad truth is an enormous chunk of paying customers, at least for the "enterprise" accounts, will be generating marketing copy and similar "biz dev" use cases.

> They do make OpenAI look like kids in that regard.

Nokia and Blackberry had far more phone-making experience than Apple when the iPhone launched.

But if you can't bring that experience to bear, allowing you to make a better product - then you don't have a better product.

  • The thing is that OpenAI doesn't have an "iPhone of AI" so far. That's not to say what will happen in the future - the advent of generative AI may become a big "equalizer" in the tech space - but no company seems to have a strong edge that'd make me more confident in any one of them over others.

  • Phones are an end-consumer product. AI is not only an end-consumer product (and probably not even mostly an end-consumer one). It is a tool to be used in many different steps in production. AI is not chatbots.

Great. But school's out. It's time to build product. Let the rubber hit the road. Put up or shut up, as they say.

I'm not dumb enough to bet against Google. They appear to be losing the race, but they can easily catch up to the lead pack.

There's a secondary issue that I don't like Google, and I want them to lose the race. So that will color my commentary and slow my early adoption of their new products, but unless everyone feels the same, it shouldn't have a meaningful effect on the outcome. Although I suppose they do need to clear a higher bar than some unknown AI startup. Expectations are understandably high - as Sundar says, they basically invented this stuff... so where's the payoff?

  • Why don't you like Google?

    • The usual reasons: evil big-corp monopoly with a user-hostile business model, etc.

      I still use their products. But if I had to pick a company to win the next gold rush, it wouldn't be an incumbent. It's not great that MSFT is winning either, but they are less user-hostile in the sense that they aren't dependent on advertising (another word for "psychological warfare" and "dragnet corporate surveillance"), and I also appreciate their pro-developer innovations.

Damn, I totally forgot Google actually has rights over its training set. Good point; pretty much everybody else is just bootlegging it.

I think Apple (especially under Jobs) had it right that customers don’t really give a shit about how hard or long you’ve worked on a problem or area.

They do not make OpenAI look like kids. If anything, it looks like they spent more time but achieved less. GPT-4 is still ahead of anything Google has released.

From afar it seems like the issues around Project Maven caused Google to pump the brakes on AI at just the wrong moment with respect to ChatGPT and bringing AI to market. I’m guessing all of the tech giants, and OpenAI, are working with various defense departments, yet they haven’t had a Maven moment. Or maybe they have, and it wasn’t in the middle of the race for all the marbles.

> They do make OpenAI look like kids in that regard.

It makes Google look like an old fart who wasted his life, didn't get anywhere, and is now bitter about kids running on his lawn.

> and Google has untouchable resources such as all the books they've scanned (and already won court cases about)

https://www.hathitrust.org/ has that corpus, and its evolution, and you can propose to get access to it via collaborating supercomputer access. It grows very rapidly. The Internet Archive would also like to chat, I expect. I've also asked (and prompt-manipulated) ChatGPT to estimate the total number of books it was trained on; it's a tiny fraction of the corpus. I wonder if it's the same with Google?

  • > I've also asked (and prompt-manipulated) ChatGPT to estimate the total number of books it was trained on

    Whatever answer it gave you is not reliable.