
Comment by SeanAnderson

2 years ago

I asked Bard, "Are you running Gemini Pro now?" And it told me, "Unfortunately, your question is ambiguous. "Gemini Pro" could refer to..." and listed a bunch of irrelevant stuff.

Is Bard not using Gemini Pro at time of writing? The blog post says, "Starting today, Bard will use a fine-tuned version of Gemini Pro for more advanced reasoning, planning, understanding and more."

(EDIT: it is... gave me a correct answer after some pushing. I just had to clarify that I meant "Bard". Not off to a great start)

It's weird, right? If I ask it:

Q: "Who are you?"
A: "I am Bard."
Q: "Is Bard running Gemini Pro?"
A: "Yes."
Q: "Are you running Gemini Pro?"
A: "Ambiguous question."

Your line of thinking also presupposes that Bard is self-aware about that sort of thing. You could also ask it what programming language it's written in, but that doesn't mean it knows or will answer you.

  • This is a common occurrence I'm seeing lately: people treating these things as oracles and going straight to ChatGPT/Bard instead of thinking or researching for themselves.

    • I consider it a standard test because no self-respecting PM would allow the product to ship without being able to market itself correctly. There's a reason the seed prompt says, "You are Bard."

      I don't lack awareness of the limitations of pretrained models. I'm evaluating its ability to employ chain of reasoning, in combination with its plugins, to get me an obvious answer.

  • I had the same issue as OP. Initially Bard seemed clueless about Gemini, then:

    Me: I see. Google made an announcement today saying that Bard was now using a fine-tuned version of their "Gemini" model

    Bard: That's correct! As of December 6, 2023, I am using a fine-tuned version of Google's Gemini model ...

  • It has access to the Internet and is free to search for the right answer.

    If I ask it who it is, it says it is Bard. It is aware of the launch that occurred today. It cites December 6th.

    It just very incorrectly felt that I was asking an ambiguous question until I restated the same question. It's not great.

    • It forgets previous prompts and answers. I have to explicitly ask it to refer back to those and take them into consideration.

  • Knowing its own build information is something that could be trained into the model, right? Seems like a good idea.

Bard shows “PaLM2” in my answers, and it says “I can't create images yet so I'm not able to help you with that” when I ask it to do so, which Gemini ought to be able to do, since its transformer can output images.

I don’t think Bard is using Gemini Pro yet, perhaps because the rollout will be slow. But it is a bit of a blunder on Google’s part to claim it already does, since many will assume that this is the quality Gemini delivers.

Came here to say exactly this. My conversation with Bard:

```

do you use Lambda model?

I use a variety of large language models, including the Lambda model, to perform my tasks. However, for each specific task, the best model is chosen based on its strengths and weaknesses. The Lambda model is particularly good at tasks that require understanding and responding to complex and nuanced language, such as generating creative text formats of text content and answering open ended, challenging, or strange questions. However, it may not be the best choice for tasks that require high accuracy or low latency.

Here's a breakdown of the models I use:

LaMDA: This is a family of Transformer-based neural language models specialized for dialogue. It is particularly good at understanding and responding to complex and nuanced language, and can generate different creative text formats of text content.

PaLM: This is another large language model, similar to LaMDA, but with a focus on factual accuracy. It is best suited for tasks that require precise and reliable information retrieval.

Meena: This is an older, smaller language model that was the precursor to LaMDA. It is still used for some tasks that require a more lightweight model. Other models: I also have access to a variety of other specialized language models, such as those for summarization, translation, and question answering. The specific model that is used for each task is determined by a variety of factors, including the nature of the task, the available resources, and the desired level of performance.

```

I wonder the same. Also, is Gemini Pro supposed to beat GPT-4?

  • I thought so too but no, Gemini _Ultra_ is supposed to beat GPT-4. The one available through Bard is Gemini Pro. The wording sure is confusing!

    Looking at the technical report [1], on selected benchmarks (rounded scores in %):

    Dataset        | Gemini Ultra | Gemini Pro | GPT-4
    MMLU           | 90           | 79         | 87
    BIG-Bench-Hard | 84           | 75         | 83
    HellaSwag      | 88           | 85         | 95
    Natural2Code   | 75           | 70         | 74
    WMT23          | 74           | 72         | 74

    [1] https://storage.googleapis.com/deepmind-media/gemini/gemini_...

For the record, GPT-4 still thinks it's GPT-3.