Comment by bitshiftfaced

2 years ago

There's nothing wrong with what you're saying, but what do you suggest? Factuality is an area of active research, and DeepMind goes into some detail about it in their technical paper.

The models are too useful to say, "don't use them at all." Hopefully people will heed the warnings about how they can hallucinate, but beyond that I'm not sure what more you can expect.

• The problem is not with the model but with its portrayal in the marketing materials. It's not even that it lied, which is actually realistic. The problem is that the lie wasn't called out as such. A better demo would have had the user note the issue and give the model the opportunity to correct itself.

  • But you yourself said that it was so convincing that the people doing the demo didn't recognize it as false, so how would they know to call it out as such?

    I suppose they could've deliberately found a hallucination and showcased it in the demo. But by that standard, pretty much every company's promo material is guilty of not showcasing the negative aspects of its product. It's nothing new or unique to this case.

• They clearly should have looked more carefully, especially since they were criticized for exactly the same thing in their last launch.