That question is equivalent to asking a human to add the wavelengths of those two colors and divide it by 3.
Unless you're aware of hyperspectral image adapters for LLMs they aren't capable of that either.
Unfair: the human beats the AI in this comparison, as a human will instantly answer "I don't know" instead of yelling a random number.
Or at best "I don't know, but maybe I can find out" and then proceed to find out. But they are unlikely to shout "6" just because they heard that number once when someone was talking about light.
> human will instantly answer "I don't know" instead of yelling a random number.
Seems you've never worked with Accenture consultants?
Why is that?
Because LLMs don't keep a textual representation of the text they consume. It's just vectors to them. That's also why they are so good at ignoring typos: the vector distance is so small it makes no difference to them.
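The typo-robustness point can be illustrated with a toy sketch. Real LLMs map subword tokens into learned embedding spaces, but even a crude character-bigram count vector (a made-up stand-in here, not an actual tokenizer or embedding model) shows why a one-letter typo barely moves the vector, while an unrelated word lands far away:

```python
# Toy illustration only: represent words as character-bigram count vectors
# and compare them with cosine similarity. A single-letter typo shares most
# bigrams with the original word, so the vectors stay close.
from collections import Counter
import math

def bigram_vector(word):
    """Count vector of character bigrams, e.g. 'cat' -> {'ca': 1, 'at': 1}."""
    return Counter(word[i:i + 2] for i in range(len(word) - 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)  # Counter returns 0 for missing keys
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# A transposition typo keeps the similarity high; an unrelated word does not.
print(cosine(bigram_vector("wavelength"), bigram_vector("wavelenght")))
print(cosine(bigram_vector("wavelength"), bigram_vector("banana")))
```

This is not how a transformer actually handles typos (real tokenizers can split a misspelled word into quite different tokens), but it captures the intuition in the comment: in a vector space, near-identical strings end up near-identical points.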
Yes, it's ridiculously good at stuff like that now. I dare you to try to trick it.
https://news.ycombinator.com/item?id=47495568
What bothers me is not this particular issue, which will certainly disappear now that it has been identified, but that we have yet to identify the category of these "stupid" bugs ...