Comment by atemerev

6 years ago

Well, this is "fiction", but he described his method precisely: get the publicly available Reddit comments archive, sort by controversial, set up a GAN that generates texts and scores them by controversy (the comments that cause the most schism and/or initiate discussions), train it, then try to generate new controversial comments. Seems doable.
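
A minimal sketch of that pipeline, with a cheap classifier standing in for the discriminator. The file name is a hypothetical local dump, and a real GAN-style setup would feed the score back into the generator as a training signal rather than merely rank samples:

  import json

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.pipeline import make_pipeline

  # 1. Load a Pushshift-style comment dump (one JSON object per line).
  #    Reddit dumps carry a binary `controversiality` flag per comment.
  texts, labels = [], []
  with open("reddit_comments.jsonl") as f:  # hypothetical local dump
      for line in f:
          c = json.loads(line)
          texts.append(c["body"])
          labels.append(c["controversiality"])  # 0 or 1

  # 2. "Discriminator": a cheap controversy scorer standing in for a
  #    real discriminator network.
  scorer = make_pipeline(TfidfVectorizer(max_features=50_000),
                         LogisticRegression(max_iter=1000))
  scorer.fit(texts, labels)

  # 3. "Generator": sample candidate comments from any language model,
  #    then rank them by predicted controversy. A true GAN would
  #    backpropagate this score into the generator instead.
  def rank_by_controversy(candidates):
      probs = scorer.predict_proba(candidates)[:, 1]
      return sorted(zip(probs, candidates), reverse=True)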

I presume that everyone with a spark of scientific mindset and even a slight interest in the field is now warming up their OpenGPT-2 instances and trying to reproduce the effect.

There's been lots of news about weaponized fake news, but is there any research being done on loading up NNs/GANs with the OPPOSITE of scissor statements? Statements or speeches designed to unite groups and show commonalities? I'm sure it's been done jokingly for fluff like r/aww, and it might seem a bit hokey or insincerely feel-good, but... you have to at least wonder what the polar opposite might be.
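
One way to read that question technically: reuse the same pipeline but invert the objective. A sketch, assuming the same dump format as above (the score threshold is arbitrary):

  import json

  # Select comments that are highly upvoted but never flagged
  # controversial, as training data for a "unifying" generator.
  unifying = []
  with open("reddit_comments.jsonl") as f:  # hypothetical local dump
      for line in f:
          c = json.loads(line)
          if c["score"] > 100 and c["controversiality"] == 0:
              unifying.append(c["body"])
  # Fine-tune the language model on `unifying` instead of the
  # sort-by-controversial set, or rank generated candidates by the
  # *lowest* predicted controversy rather than the highest.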

Well... https://www.reddit.com/r/slatestarcodex/comments/bers5o/usin...

  • That's about what I'd expect at the current level of GPT-2. It is very good in some dimensions but terrible in others, e.g. right there at the beginning: "Blacks make up 1/8 of the world's population, but they account for only a 3/10 of global economic power." GPT is very bad at math [1]; it gets the structure of arguments involving math or numbers but doesn't understand the numbers themselves. So right off there's a massive unforced error in this controversial statement, which complains that 12.5% of the world's population has "only" 30% of the power (the quick check below spells out the arithmetic), undercutting the whole thing from the get-go and collapsing what was a good start to a "controversial" statement into farce over a simple error. It does this sort of thing pervasively. It's clear that GPT-2 is touching controversial topics, but it's failing to put together controversial sentences.

    I suspect the sort of thing Scott describes in this story isn't quite possible at the full power described... but certainly if AI gets better than GPT-2, it'll get better at this too.

    [1]: Read some of https://www.reddit.com/r/SubSimulatorGPT2/search/?q=math&res...
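
    A quick check of the statement's own numbers:

      # The statement's own figures, taken at face value:
      population_share = 1 / 8    # 12.5% of world population
      power_share = 3 / 10        # 30% of global economic power
      print(power_share / population_share)  # 2.4
      # i.e. 2.4x *more* power than proportional representation,
      # the opposite of what the generated "only" implies.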

    • Funny thing though - I could see that statement being particularly enraging once a large enough and random enough group got behind debating it. These statements don't have to make logical sense, they only have to trigger people into instinctive camps.

      "The math doesn't even make sense, and is making my point, that there's no problem here"

      "You're saying there's not a problem? Believe me, there's a problem..."

      etc.

    • AI is already better (just not evenly distributed). The small GPT-2s are bad at math, but they were never trained for that in the first place; we know Transformers are capable of doing excellent things with math, because they do so in other papers that tackle more specialized problems like theorem proving. The shallowness of the GPT-2s is definitely part of it (the model gets only a few sequential steps of computation to 'think'), as are lousy sampling procedures and a general lack of parameters: 'coherency' in general seems to improve drastically as you scale up to Megatron levels. If you combined all of the SOTA pieces, polished it for a while, and plugged it into social media for RL, you'd get something much better than this...
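
      To make that last step concrete, a hedged sketch of what "plugged into social media for RL" could look like: REINFORCE-style fine-tuning of GPT-2 against an engagement reward. `post_and_measure` is a hypothetical stand-in for whatever engagement signal you could observe; nothing here is an actual deployed system.

        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tok = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        opt = torch.optim.Adam(model.parameters(), lr=1e-5)

        def post_and_measure(text: str) -> float:
            # Hypothetical: count replies, shares, thread schism...
            raise NotImplementedError

        for step in range(1000):
            prompt = tok("I think", return_tensors="pt")
            out = model.generate(**prompt, do_sample=True,
                                 max_length=60,
                                 return_dict_in_generate=True)
            text = tok.decode(out.sequences[0],
                              skip_special_tokens=True)
            reward = post_and_measure(text)
            # Recompute the log-probability of the sampled sequence
            # under the current model parameters (generate() itself
            # runs without gradients).
            logits = model(out.sequences).logits[:, :-1]
            logp = torch.log_softmax(logits, -1).gather(
                2, out.sequences[:, 1:, None]).squeeze(-1).sum()
            # REINFORCE: push up sequences that drew engagement.
            loss = -reward * logp
            opt.zero_grad()
            loss.backward()
            opt.step()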