Comment by philipwhiuk
14 days ago
I feel like 'vibes' is being used to dismiss anecdotal evidence, which, amalgamated across an increasing swathe of technical users, approaches actual evidence.
The very fact that it was clearly wrong in the example shows you that Google is capable of building a flawed knowledge graph. This is not vibes; this is either a bug or, frankly far more likely, the inevitable problem of trying to compress all of human knowledge into an LLM.
Given that LLM training is an imperfect storage mechanism, is it really hard to believe that a given iteration of the model simply won't "know" arbitrary facts?
My personal anecdata on this: when I searched for the 'ephemeral port range', the Google AI summary was wrong, even though the Wikipedia reference it cited was correct.
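And this is the kind of fact that's trivial to check against a primary source instead of a summary. A minimal Python sketch (my own illustration, not anything Google uses) that reads the kernel's actual ephemeral range on Linux via procfs, falling back to the IANA-recommended range from RFC 6335 on other systems:

```python
from pathlib import Path

# RFC 6335 "dynamic/private" ports; Linux defaults to 32768-60999 instead.
IANA_EPHEMERAL_RANGE = (49152, 65535)

def ephemeral_port_range():
    """Return the kernel's ephemeral port range, or the IANA default."""
    sysctl = Path("/proc/sys/net/ipv4/ip_local_port_range")
    if sysctl.exists():  # Linux exposes the live range via procfs
        low, high = sysctl.read_text().split()
        return int(low), int(high)
    return IANA_EPHEMERAL_RANGE  # assumption: fall back to the RFC value

low, high = ephemeral_port_range()
print(f"ephemeral ports: {low}-{high}")
```

Point being: the authoritative answer is one `cat /proc/sys/net/ipv4/ip_local_port_range` away, which makes a wrong AI summary sitting on top of a correct Wikipedia citation all the more telling.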