Comment by Paradigma11

6 days ago

That question is equivalent to asking a human to add the wavelengths of those two colors and divide it by 3.

Unless you're aware of hyperspectral image adapters for LLMs they aren't capable of that either.

Unfair - the human beats the AI in this comparison, as the human will instantly answer "I don't know" instead of yelling out a random number.

Or at best "I don't know, but maybe I can find out" and proceed to find out. But they are unlikely to shout "6" just because they heard that number once when someone was talking about light.

  • > human will instantly answer "I don't know" instead of yelling a random number.

Seems you've never worked with Accenture consultants?

    • Fair.

      Yet this can be filtered with fixed rules, like "output produced by corporate structures is untrusted random data".

Why is that?

  • Because LLMs don't have a textual representation of any text they consume. It's just vectors to them. Which is why they're so good at ignoring typos: the vector distance is so small it makes no difference to them.
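    A toy sketch of that intuition (this is not how real LLM tokenizers or learned embeddings work; it just uses bag-of-character-trigram vectors to show why a one-character typo barely moves a word in vector space):

    ```python
    from collections import Counter
    from math import sqrt

    def trigram_vector(word):
        """Bag of character trigrams; a crude stand-in for a learned embedding."""
        padded = f"##{word}##"
        return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

    def cosine(a, b):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[k] * b[k] for k in a)
        norm = lambda v: sqrt(sum(n * n for n in v.values()))
        return dot / (norm(a) * norm(b))

    clean = trigram_vector("representation")
    typo  = trigram_vector("reprsentation")  # missing an 'e'
    other = trigram_vector("wavelength")

    print(cosine(clean, typo))   # high similarity despite the typo
    print(cosine(clean, other))  # much lower: unrelated word
    ```

    The typo'd word stays far closer to the original than to any unrelated word, so a model operating on such vectors naturally treats "reprsentation" as near-identical to "representation".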