Comment by throwuwu
2 years ago
I thought the conclusion was the weakest part. Look at the two ambiguous responses for terraforming and asking AIs for advice side by side. They're basically form letters with opposing opinions substituted in. Contrast this to text completion using GPT-3, which will give a definite answer that builds on the content given. ChatGPT obviously has some "guard rails" in place for certain types of questions, i.e. they've intentionally made it present both sides of an argument, probably to avoid media controversy, since most news outlets and a lot of people ITT would pounce on any professed beliefs such a system might seem to have. The solution was to make it waffle, but even that has been seized upon to proclaim its amorality and insinuate darker tendencies!
FFS people, you’re looking at a Chinese Room and there’s no man with opinions inside. Just a fat rule book and a glorified calculator.
Tangential to your actual concerns, but I studied CS without any exposure to Searle or AI, so I've never had to think much about the Chinese Room or Turing Test debates. Every time a discussion turns to those, I am bemused by how argumentative some people get!
> i.e. they've intentionally made it present both sides of an argument
Is it intentional? Or something it just did on its own?
I'm sure it's intentional, compared to when it was first released, when it would gladly give you amazingly opinionated answers. You can also compare it to GPT-3, which will mostly still do that, even though it does have a weird bias towards safe answers when you don't give it much of a preamble.
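If anyone wants to try the preamble comparison themselves, here's a minimal sketch using the legacy openai Python SDK (0.x) and its Completion endpoint. The model name, prompts, and sampling parameters are my own illustrative assumptions, not anything from the thread:

```python
# Sketch: bare prompt vs. prompt with a framing preamble, using the
# legacy openai 0.x SDK. Model, prompts, and parameters are examples only.
import openai

openai.api_key = "sk-..."  # your API key

QUESTION = "Should we terraform Mars?"

# Bare prompt: in my experience this tends toward hedged, "safe" answers.
bare = openai.Completion.create(
    model="text-davinci-003",
    prompt=QUESTION,
    max_tokens=150,
    temperature=0.7,
)

# Same question with a preamble that frames a definite answer as expected;
# the completion usually commits to a position and builds on the framing.
framed_prompt = (
    "The following is an essay by a writer known for taking strong, "
    "unambiguous positions.\n\n"
    f"Question: {QUESTION}\nAnswer:"
)
framed = openai.Completion.create(
    model="text-davinci-003",
    prompt=framed_prompt,
    max_tokens=150,
    temperature=0.7,
)

print("Bare prompt:\n", bare.choices[0].text.strip())
print("\nWith preamble:\n", framed.choices[0].text.strip())
```

Running both a few times makes the contrast obvious: the bare prompt drifts toward both-sides boilerplate, while the framed one picks a side and runs with it.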