
Comment by shadowfacts

3 days ago

... yes, that's the complaint. The prompt engineering they did made it spew neo-Nazi vitriol. They either did not adequately test it beforehand and didn't know what would happen, or they did test and knew the outcome—either way, it's bad.

Long live Tay! https://en.wikipedia.org/wiki/Tay_(chatbot)

  • Tay (allegedly) learned from repeated interaction with users; the current generation of LLMs can't do that. They're trained once and then that's it.

    • Do you think Tay's user interactions were novel, or that race-based hatred is persistent human garbage that made it into the corpus used to train LLMs?

      We're literally trying to shove as much data as possible into these things, after all.

      What I'm implying is that you think you made a point, but you didn't.

It was an interesting demonstration of the politically-incorrect-to-Nazi pipeline though.