
Comment by tikkun

2 years ago

One observation: Sundar's comments in the main video seem like he's trying to communicate "we've been doing this ai stuff since you (other AI companies) were little babies" - to me this comes off kind of badly, like it's trying too hard to emphasize how long they've been doing AI (which is a weird look when the currently publicly available SOTA model is made by OpenAI, not Google). A better look would simply be to show instead of tell.

In contrast to the main video, this video that is further down the page is really impressive and really does show - the 'which cup is the ball in' demo is particularly cool: https://www.youtube.com/watch?v=UIZAiXYceBI.

Other key info: "Integrate Gemini models into your applications with Google AI Studio and Google Cloud Vertex AI. Available December 13th." (Unclear if all 3 models are available then, hopefully they are, and hopefully it's more like OpenAI with many people getting access, rather than Claude's API with few customers getting access)

He's not wrong. DeepMind spends time solving big scientific / large-scale problems such as those in genetics, materials science or weather forecasting, and Google has untouchable resources such as all the books they've scanned (and already won court cases about).

They do make OpenAI look like kids in that regard. There is far more to technology than public-facing goods/products.

It's probably in part due to the cultural differences between London/UK/Europe and Silicon Valley/California/USA.

  • While you are spot on, I cannot avoid thinking of the late 1990s.

    On one corner: IBM's Deep Blue winning vs Kasparov in 1997. A world-class giant with huge research experience.

    On the other corner: Google, a feisty newcomer just two years into its life, leveraging the tech to actually make something practical.

    Is Google the new IBM?

    • I don’t think Google is the same as IBM here. I think Google’s problem is its insanely low attention span. It frequently releases innovative and well built products, but seems to quickly lose interest. Google has become somewhat notorious for killing off popular products.

      On the other hand, I think IBM's problem is its finance focus and long-term decay of technical talent. It is well known for maintaining products for decades, but when's the last time IBM came out with something really innovative? It touted Watson, but that was always more of a gimmick than an actually viable product.

      Google has the resources and technical talent to compete with OpenAI. In fact, a lot of GPT is based on Google’s research. I think the main things that have held Google back are questions about how to monetize effectively, but it has little choice but to move forward now that OpenAI has thrown down the gauntlet.

      16 replies →

    • I think the analogy is kind of strained here - at the current stage, OpenAI doesn't have an overwhelming superiority in quality in the same way Google once did. And, if marketing claims are to be believed, Google's Gemini appears to be no publicity stunt. (not to mention that IBM's "downfall" isn't very related to Deep Blue in the first place)

      3 replies →

    • It's an interesting analogy. I think Google's problem is how disruptive this is to their core product's monetization strategy. They have misaligned incentives in how quickly they want to push this tech out vs waiting until it's cheap enough to pay for with ads.

      Whereas for OpenAI there are no such constraints.

      Did IBM have research with impressive web reverse-indexing tech that they didn't want to push to market because it would hurt their other business lines? It's not impossible... It could be as innocuous as discouraging some research engineer from such a project to focus on something more in line with the existing business.

      This is why I believe businesses should be absolutely willing to disrupt themselves if they want to avoid going the way of Nokia. I believe Apple should make a standalone Apple Watch that cannibalizes their iPhone business instead of tying it to and trying to prop up their iPhone business (ofc shareholders won't like it). Whilst this looks good from Google, I think they are still sandbagging... why can't I use Bard inside of their other products instead of the silly export thing?

    • OpenAI was at least around in 2017 when YCR HARC was closed down (because...the priority would be OpenAI).

    • Hmm, what was that tech from IBM Deep Blue that apparently Google leveraged to such a degree?

      Was it "machine learning"? If so, I don't think that was actually the key insight for Google search… right? Did Deep Blue even use machine learning?

      Or was it something else?

      2 replies →

  • Oh, it's good they're working on important problems with their AI. It's just that OpenAI was working on my/our problems (or providing tools to do so), and that's why people are more excited about them. Not because of cultural differences. If you are more into weather forecasting, yeah, it sure may be reasonable to prefer Google.

    • Stuff like AlphaFold has and will have huge impact on our lives, even if I am not into spending time folding proteins myself. It is absurd to make this sort of comparison.

    • That's what makes Altman a great leader. He understands marketing better than many of these giants. Google got caught being too big. Sure, they will argue that mass release of AI is a dangerous proposition, but Sam had to make a big splash, otherwise he would be competing with incumbent marketing spending far greater than OpenAI could afford.

      It was a genius move to go public with a simple UI.

      No matter how stunning the tech side is, if human interaction is not simple, the big stuff doesn’t even matter.

      1 reply →

  • That statement isn't really directed at the people who care about the scientific or tech-focused capabilities. I'd argue the majority of those folks interested in those things already know about DeepMind.

    This statement is for the mass market MBA-types. More specifically, middle managers and dinosaur executives who barely comprehend what generative AI is, and value perceived stability and brand recognition over bleeding edge, for better or worse.

    I think the sad truth is an enormous chunk of paying customers, at least for the "enterprise" accounts, will be generating marketing copy and similar "biz dev" use cases.

  • > They do make OpenAI look like kids in that regard.

    Nokia and Blackberry had far more phone-making experience than Apple when the iPhone launched.

    But if you can't bring that experience to bear, allowing you to make a better product - then you don't have a better product.

    • The thing is that OpenAI doesn't have an "iPhone of AI" so far. That's not to say what will happen in the future - the advent of generative AI may become a big "equalizer" in the tech space - but no company seems to have a strong edge that'd make me more confident in any one of them over others.

      4 replies →

    • Phones are an end-consumer product. AI is not only an end-consumer product (and probably not even mostly an end-consumer one). It is a tool to be used in many different steps in production. AI is not chatbots.

  • Great. But school's out. It's time to build product. Let the rubber hit the road. Put up or shut up, as they say.

    I'm not dumb enough to bet against Google. They appear to be losing the race, but they can easily catch up to the lead pack.

    There's a secondary issue that I don't like Google, and I want them to lose the race. So that will color my commentary and slow my early adoption of their new products, but unless everyone feels the same, it shouldn't have a meaningful effect on the outcome. Although I suppose they do need to clear a higher bar than some unknown AI startup. Expectations are understandably high - as Sundar says, they basically invented this stuff... so where's the payoff?

  • Damn I totally forgot Google actually has rights over its training set, good point, pretty much everybody else is just bootlegging it.

  • I think Apple (especially under Jobs) had it right that customers don’t really give a shit about how hard or long you’ve worked on a problem or area.

  • They do not make OpenAI look like kids. If anything, it looks like they spent more time but achieved less. GPT-4 is still ahead of anything Google has released.

  • From afar it seems like the issues around Maven caused Google to pump the brakes on AI at just the wrong moment with respect to ChatGPT and bringing AI to market. I’m guessing all of the tech giants, and OpenAI, are working with various defense departments yet they haven’t had a Maven moment. Or maybe they have and it wasn’t in the middle of the race for all the marbles.

  • > They do make OpenAI look like kids in that regard.

    It makes Google look like an old fart who wasted his life, didn't get anywhere, and is now bitter about kids running on his lawn.

  • > and Google has untouchable resources such as all the books they've scanned (and already won court cases about)

    https://www.hathitrust.org/ has that corpus, and its evolution, and you can propose to get access to it via collaborating supercomputer access. It grows very rapidly. The Internet Archive would also like to chat, I expect. I've also asked, and prompt-manipulated, ChatGPT to estimate the total number of books it was trained on; it's a tiny fraction of the corpus. I wonder if it's the same with Google?

It's worth remembering that AI is more than LLMs. DeepMind is still doing big stuff: https://deepmind.google/discover/blog/millions-of-new-materi...

  • I just want to underscore that. DeepMind's research output within the last month is staggering:

    2023-11-14: GraphCast, world-leading weather prediction model, published in Science

    2023-11-15: Student of Games, a unified learning algorithm and major algorithmic breakthrough, published in Science

    2023-11-16: Music generation model, seemingly SOTA

    2023-11-29: GNoME model for materials discovery, published in Nature

    2023-12-06: Gemini, the most advanced LLM according to their own benchmarks

    • Google is very good at AI research.

      Where it has fallen down (compared to its relative performance in relevant research) is public generative AI products [0]. It is trying very hard to catch up at that, and its disadvantage isn't technological, but that doesn't mean it isn't real and durable.

      [0] I say "generative AI" because AI is a big and amorphous space, and lots of Google's products have some form of AI behind important features, so I'm just talking about products where generative AI is the center of what the product offers. These have become a big deal recently, and it's where Google has definitely been delivering far below its general AI research weight class so far.

      18 replies →

    • They publish but don't share. Who cares about your cool tech if we can't experience it ourselves? I don't care about your blog writeup or research paper.

      Google is locked behind research bubbles, legal reviews and safety checks.

      Meanwhile, OpenAI is eating their lunch.

      11 replies →

  • Indeed, I would point to the core search product as another example of AI/ML...

    • This does highlight the gap between SOTA and business production. Google search is very often a low quality, even user hostile experience. If Google has all this fantastic technology, but when the rubber hits the road they have no constructive (business supporting) use cases for their search interface, we are a ways away from getting something broadly useful.

      It will be interesting to see how this percolates through the existing systems.

      1 reply →

> Sundar's comments in the main video seem like he's trying to communicate "we've been doing this ai stuff since you (other AI companies) were little babies" - to me this comes off kind of badly

Reminds me of the Stadia reveal, where the first words out of his mouth were along the lines of "I'll admit, I'm not much of a gamer"

This dude needs a new speech writer.

  • > This dude needs a new speech writer.

    How about we go further and just state what everyone (other than Wall St) thinks: Google needs a new CEO.

    One more interested in Google's supposed mission ("to organize the world's information and make it universally accessible and useful"), than in Google's stock price.

  • Dude needs a new job. He's been the Steve Ballmer of Google, ruining what made them great and running the company into the ground.

    • >Steve Ballmer of Google

      I've been making this exact comparison for years at this point.

      Both inherited companies with market-dominant core products in near-monopoly positions. They both kept the lights on, but the companies under them repeatedly fail to break into new markets and suffer from a near-total lack of coherent vision, plus perverse internal incentives that contribute to the failure of new products. And after a while, the quality of that core product starts to stumble as well.

      The fact that we've seen this show before makes it all the more baffling to me that investors are happy about it. Especially when in the same timeframe we've seen Satya Nadella completely transform Microsoft and deliver relatively meteoric performance.

      5 replies →

To add to my comment above: Google DeepMind put out 16 videos about Gemini today, the total watch time at 1x speed is about 45 mins. I've now watched them all (at >1x speed).

In my opinion, the best ones are:

* https://www.youtube.com/watch?v=UIZAiXYceBI - variety of video/sight capabilities

* https://www.youtube.com/watch?v=JPwU1FNhMOA - understanding direction of light and plants

* https://www.youtube.com/watch?v=D64QD7Swr3s - multimodal understanding of audio

* https://www.youtube.com/watch?v=v5tRc_5-8G4 - helping a user with complex requests and showing some of the 'thinking' it is doing about what context it does/doesn't have

* https://www.youtube.com/watch?v=sPiOP_CB54A - assessing the relevance of scientific papers and then extracting data from the papers

My current context: API user of OpenAI, regular user of ChatGPT Plus (GPT-4-Turbo, DALL-E 3, and GPT-4V), occasional user of Claude Pro (much less since GPT-4-Turbo with longer context length), paying user of Midjourney.

Gemini Pro is available starting today in Bard. It's not clear to me how many of the super impressive results are from Ultra vs Pro.

Overall conclusion: Gemini Ultra looks very impressive. But - the timing is disappointing: Gemini Ultra looks like it won't be widely available until ~Feb/March 2024, or possibly later.

> As part of this process, we’ll make Gemini Ultra available to select customers, developers, partners and safety and responsibility experts for early experimentation and feedback before rolling it out to developers and enterprise customers early next year.

> Early next year, we’ll also launch Bard Advanced, a new, cutting-edge AI experience that gives you access to our best models and capabilities, starting with Gemini Ultra.

I hope that there will be a product available sooner than that without a crazy waitlist for both Bard Advanced, and Gemini Ultra API. Also fingers crossed that they have good data privacy for API usage, like OpenAI does (i.e. data isn't used to train their models when it's via API/playground requests).

  • My general conclusion: Gemini Ultra > GPT-4 > Gemini Pro

    See Table 2 and Table 7 https://storage.googleapis.com/deepmind-media/gemini/gemini_... (I think they're comparing against original GPT-4 rather than GPT-4-Turbo, but it's not entirely clear)

    What they've released today: Gemini Pro is in Bard today. Gemini Pro will be coming to API soon (Dec 13?). Gemini Ultra will be available via Bard and API "early next year"

    Therefore, as of Dec 6 2023:

    SOTA API = GPT-4, still.

    SOTA Chat assistant = ChatGPT Plus, still, for everything except video, where Bard has capabilities. ChatGPT Plus is closely followed by Claude. (But, I tried asking Bard a question about a YouTube video today, and it told me "I'm sorry, but I'm unable to access this YouTube content. This is possible for a number of reasons, but the most common are: the content isn't a valid YouTube link, potentially unsafe content, or the content does not have a captions file that I can read.")

    SOTA API after Gemini Ultra is out in ~Q1 2024 = Gemini Ultra, if OpenAI/Anthropic haven't released a new model by then

    SOTA Chat assistant after Bard Advanced is out in ~Q1 2024 = Bard Advanced, probably, assuming that OpenAI/Anthropic haven't released new models by then

  • Watching these videos made me remember this cool demo Google did years ago where their earbuds would auto-translate, in realtime, a conversation between two people speaking different languages. Turned out to be demo vaporware. Will this be the same thing?

  • When I watch any of these videos, all the related videos on my right sidebar are from Google, 16 of which were uploaded at the same time as the one I'm watching.

    I've never seen the entire sidebar filled with the videos of a single channel before.

    • Yeah. Dropping that blatant a weight on the algorithm is one of the most infuriating dark patterns I've noticed in a while.

  • Wait so it doesn't exist yet? Thanks for watching 45 minutes of video to figure that out for me. Why am I wasting my time reading this thread?

    Somebody please wake me up when I can talk to the thing by typing and dropping files into a chat box.

> to me this comes off kind of badly, like it's trying too hard to emphasize how long they've been doing AI

These lines are for the stakeholders as opposed to consumers. Large backers don't want to invest in a company that has to rush to market to play catch-up; they want a company that can execute on long-term goals. Reassuring them that this is a long-term goal is important for $GOOG.

  • It would be interesting to write an LLM query to separate speech details based on target audience: stakeholders, consumers, etc. (rough sketch below).
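    For illustration, a minimal sketch of what such a query could look like (the labels and exact wording are my own assumptions, not from any product):

    ```python
    # Hypothetical audience-classification prompt; {speech_text} is filled in by the caller.
    prompt_template = """Classify each sentence of the speech below by its primary
    target audience: investors, consumers, press, or employees.
    Return one label per sentence, numbered in order.

    Speech:
    {speech_text}
    """

    prompt = prompt_template.format(speech_text="We've been doing AI since day one. ...")
    ```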

It's a conceit, but not an unjustified one: they have been doing "AI" since their inception. And yeah, Sundar's tenure up until recently seems to me to have been about milking existing products instead of creating new ones, so it is a bit annoying when they act like this was their plan the whole time.

Google's weakness is on the product side; their research arm puts out incredible stuff, as other commenters have pointed out. GPT essentially came from Google researchers who were impatient with Google's reluctance to ship a product that could jeopardize ad revenue on search.

  • The point is, if you have to remind people, then you're doing something wrong. The insight to draw from this is not that everyone else is misinformed about Google's abilities (the implication); it's that Google has not capitalized on its resources.

  • It's such a short-sighted approach too, because I'm sure someone will develop a GPT with native advertising, and it'll be a blockbuster because it'll be free to use but also have strong revenue-generating potential.

I also find that tone a bit annoying but I'm OK with it because it highlights how these types of bets, without an immediate benefit, can pay off very well in the long term, even for huge companies like Google. AI, as we currently know it, wasn't really a "thing" when Google started with it and the payoff wasn't clear. They've long had to defend their use of their own money for big R&D bets like this and only now is it really clearly "adding shareholder value".

Yes, I know it was a field of interest and research long before Google invested, but the fact remains that they _did_ invest deeply in it very early on for a very long time before we got to this point.

Their continued investment has helped push the industry forward, for better or worse. In light of this context, I'm ok with them taking a small victory lap and saying "we've been here, I told you it was important".

  • > only now is it really clearly "adding shareholder value".

    AI has been adding a huge proportion of the shareholder value at Google for many years. The fact that their inference systems are internal and not user products might have hidden this from you.

> we've been doing this ai stuff since you (other AI companies) were little babies

Actually, they kind of did. What's interesting is that they still only match GPT-4 and don't propose any architectural breakthroughs. From an architectural standpoint, not much has changed since 2017. The 'breakthroughs' in moving from GPT to GPT-4 included: adding more parameters (GPT-2/3/4); fine-tuning base models to follow instructions (RLHF), which is essentially structured training (GPT-3.5); and multimodality, which involves using embeddings from different sources in the same latent space; along with some optimizations that allowed for faster inference and training. Increasing evidence suggests that AGI will not be attainable solely using LLMs/transformers/the current architecture, as LLMs can't extrapolate beyond the patterns in their training data (according to a paper from DeepMind last month):

"Together our results highlight that the impressive ICL abilities of high-capacity sequence models may be more closely tied to the coverage of their pretraining data mixtures than inductive biases that create fundamental generalization capabilities."[1]

1. https://arxiv.org/abs/2311.00871

Sundar studied materials science in school and is only slightly older than me. Google is a little over 25 years old. I guarantee you they have not been doing AI since I was a baby.

And how many financial people worth reckoning with are under 30 years old? Not many.

  • Unless you are OpenAI, the company, I doubt OP implied it was aimed at you. But then I wouldn't know as I am much younger than Sundar Pichai and I am not on first name basis with him either ;-)

I do think that’s a backfire. Telling me how long you’ve been doing something isn’t that impressive if the other guy has been doing it for much less time and is better at it. It’s in fact the opposite.

  • Not if the little guy leveraged your inventions/research.

    • That's even worse: what it says is that you are getting beat at product even where you create the tech.

      Which is definitely where Google is in the generative AI space.

    • Echoes of Apple "leveraging" the mouse/GUI interface from Xerox. I wonder if Google is at risk of going the way of Xerox, where they were so focused on their current business and product lineup that they failed to see the potential new business lines their researchers were trying to show them.

      1 reply →

    • It's weird for us to personify a corporation like that, tbh. Google didn't invent transformers; researchers working at Google did.

      Sure, Google paid 'em money/employed 'em, but the smarts behind it isn't the entity Google or the execs at the top, Sundar etc.; it's those researchers. I like to appreciate individualism in a world where those at the top have lobbied their way into a 1% monopoly lmao.

      3 replies →

> "we've been doing this ai stuff since you (other AI companies) were little babies"

Well, in fairness, he has a point: they are starting to look like a legacy tech company.

> One observation: Sundar's comments in the main video seem like he's trying to communicate "we've been doing this ai stuff since you (other AI companies)

Sundar has been saying this repeatedly since Day 0 of the current AI wave. It's almost cliche for him at this point.

Well, DeepMind was doing amazing stuff before OpenAI.

AlphaGo, AlphaFold, AlphaStar.

They were groundbreaking a long time ago. They just happened to miss the LLM surge.

They always do this, every time they get to mention AI. It appears somewhat desperate imo.

That was pretty impressive… but do I have to be “that guy” and point out the error it made?

It said rubber ducks float because they’re made of a material less dense than water — but that’s not true!

Rubber is more dense than water. The ducky floats because it’s filled with air. If you fill it with water it’ll sink.
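(A rough back-of-envelope confirms it; the shell mass and volume below are assumed round numbers, not measurements:)

```latex
% Archimedes: a body floats iff its average density is below water's (1 g/cm^3).
% Assumed figures: ~50 g rubber shell (rubber ~1.1 g/cm^3), ~300 cm^3 enclosed volume.
\[
\rho_{\text{avg}} = \frac{m_{\text{shell}} + m_{\text{air}}}{V_{\text{duck}}}
\approx \frac{50\ \text{g}}{300\ \text{cm}^3} \approx 0.17\ \text{g/cm}^3 < 1\ \text{g/cm}^3
\]
% Solid rubber at ~1.1 g/cm^3 would sink; so does the duck once filled with water.
```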

Interestingly, ChatGPT 3.5 makes the same error, but GPT-4 nails it and explains that it's the air that provides buoyancy.

I had the same impression with Google’s other AI demos: cute but missing something essential that GPT 4 has.

  • I spotted that too, but also, it didn't recognise the "bird" until it had feet, when it is supposedly better than a human expert. I don't doubt that the examples were cherry-picked, so if this is the best it can do, it's not very convincing.

  • I would've liked to see an explanation that includes the weight of water being displaced. That would also explain how a steel ship with an open top is also able to float.

In fairness, the performance/size ratio for models like BERT still gives GPT-3/4 and even Llama a run for its money. Their tech isn't as productized as OpenAI's, but TensorFlow and its ilk have been an essential part of driving actual AI adoption. The people I know in the robotics and manufacturing industries are forever grateful for the out-front work Google did to get the ball rolling.

  • You seem to be saying the same thing: Google's best work is in the past, and their current offerings are underwhelming, even if foundational to the progress of others.

Didn't Google invent LLMs, and didn't Google have an internal LLM with similar capabilities long before OpenAI released the GPTs? Remember when that guy got fired for claiming it was conscious?

The look isn't good. But it's not dishonest.

  • No, this is not correct. Arguably OpenAI invented LLMs with GPT-3 and the preceding scaling-laws paper. I worked on LaMDA; it came after GPT-3 and was not as capable. Google did invent the transformer, but all the authors of that paper have since left.

Incredible stuff, and yet TTS is still so robotic. Frankly I assume it must be deliberate at this point, or at least deliberate that nobody's worked on it because it's comparatively easy and dull?

(The context awareness of the current breed of generative AI seems to be exactly what TTS always lacks, awkward syllables and emphasis, pronunciation that would be correct sometimes but not after that word, etc.)

Google literally invented the transformer, which is at the core of all current AI/LLMs, so Sundar's comment is very accurate.

  • Sundar's comments about Google doing AI (really ML) are based more on things that people externally know very little about: systems like SETI, Sibyl, RePhil, SmartASS. These were all production ML systems that used fairly straightforward and conventional ML, combined with innovative distributed computing and large-scale infrastructure, to grow Google's product usage significantly over the past 20 years.

    For example, here's a paper 10 years old now: https://static.googleusercontent.com/media/research.google.c... and another close to 10 years old: https://research.google/pubs/pub43146/ The learnings they expose in those papers came from the previous 10 years of operating SmartASS.

    However, SmartASS and Sibyl weren't really what external ML people wanted; it was just fairly boring "increase watch time by identifying what videos people will click on", "increase mobile app installs", or "show the ads people are likely to click on".

    It really wasn't until Vincent Vanhoucke stuffed a bunch of GPUs into a desktop and demonstrated scalable deep learning, and Dean/Ng built their cat-detector NN, that Google started being really active in deep learning. That was around 2010-2012.

  • But their first efforts with Bard were really not great. I'd just have left out the bragging about how long they've been at it. OpenAI and others have no doubt sent a big wakeup call to Google. For a while it seemed like they had turned to focus on AI "safety" (remembering some big blowups on those teams as well), with papers about how AI might develop negative stereotypes (i.e., that men commit more violent crime than women). That seems to have changed: this launch is very product focused, and I asked it some questions that in many models are screened out for "safety", and it responded, which is almost even more surprising (e.g., statistically, who commits more violent crime, men or women?).

    • The big concern was biased datasets, iirc, and shit fits for people of color: clearly mislabeling feminine-looking women as men, and a stupidly high false-positive rate for face detection.

      That was relevant given they were selling their models to law enforcement.

> A better look would simply be to show instead of tell.

Completely! Just tried Bard. No images, and the responses it gave me were pretty poor. Today's launch is a weak product launch; it looks mostly like a push to close out stuff for Perf before everybody leaves for the rest of December on vacation.

A simple REST API with static token auth like the OpenAI API would help. Previously when I tried the Bard API, it refused to accept token auth, requiring that terrible OAuth flow, so I gave up.
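For comparison, the static-token pattern is just one header on a plain HTTPS request. A minimal sketch against OpenAI's public chat-completions endpoint (the key and model name are placeholders):

```python
import requests

# OpenAI-style static bearer token: a single Authorization header, no OAuth flow.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # static token
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```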

> show instead of tell

They showed AlphaGo, they showed Transformers.

Pretty good track record.

  • That was ages ago. In AI, even a week feels like a whole year does in other fields. And many/most of those researchers have fled to startups, so those startups also have a right to brag. But not too much: only immediate access to a model beating GPT-4 is worth bragging about today (cloud), or getting GPT-3.5 quality from a model running on a phone (edge).

    So it's either free-private-gpt3.5 or cloud-better-than-gpt4v. Nothing else matters now. I think we have reached an extreme point of temporal discounting (https://en.wikipedia.org/wiki/Time_preference).

    • The Transformer paper “Attention is All You Need” came out in 2017. Sundar got the CEO job two years earlier, so he was in CEO diapers at the time if you will.

      I would argue Google has done almost nothing interesting since then (at least not things they haven't killed)

I find this video really freaky. It's like Gemini is a baby or very young child, and also a massive know-it-all adult that just can't help telling you how clever it is and showing off its knowledge.

People speak of the uncanny valley in terms of appearance. I am getting this from Gemini. It’s sort of impressive but feels freaky at the same time.

Is it just me?

  • No, there's an odd disconnect between the impressiveness of the multimodal capabilities vs the juvenile tone and insights compared to something like GPT-4 that's very bizarre in application.

    It is a great example of something I've found increasingly concerning as we double down on Goodhart's law with claims like "beats 30 out of 32 tests compared to existing models".

    My guess is those tests are very specific to evaluations of what we've historically imagined AI to be good at vs comprehensive tests of human ability and competencies.

    So a broad general pretrained model might actually be great at sounding 'human' but not as good at logic puzzles, so you hit it with extensive fine tuning aimed at improving test scores on logic but no longer target "sounding human" and you end up with a model that is extremely good at what you targeted as measurements but sounds like a creepy toddler.

    We really need to stop being so afraid of anthropomorphic evaluation of LLMs. Even if the underlying processes shouldn't be anthropomorphized, the expressed results really should be given the whole point was modeling and predicting anthropomorphic training data.

    "Don't sound like a creepy soulless toddler and sound more like a fellow human" is a perfectly appropriate goal for an enterprise scale LLM, and we shouldn't be afraid of openly setting that as a goal.

They have to try something, otherwise it looks like they've been completely destroyed by a company of 1,000 people.

Yes, it sounds like a conspiracy theory about government and big tech working on advanced tech that has existed for decades but was kept secret.

No surprises here.

Google DeepMind squandered their lead in AI so much that they now have to have “Google” prepended to their name to show that adults are now in charge.

  • What an ugly statement. DeepMind has been very open with their research since the beginning, because their objective was much more about making breakthroughs with moonshot projects than near-term profit.