Comment by Yoric
2 years ago
I actually feel that they can be very dangerous. Not because of the fabled AGI, but because
1. they're so good at giving the appearance of being right;
2. their results are quite unpredictable, and not always in a funny way;
3. C-level executives actually believe that they work.
Combine this with web APIs or effectors and you have a recipe for disaster.
I got into an argument with someone over text yesterday. They insisted their argument was true because ChatGPT agreed with them, and even sent me the ChatGPT output.
That's just one example of your danger #1 above. We used to say that the internet always agrees with us, but with Google it was at least a little harder. ChatGPT makes it so much easier to find rationalizations that agree with you.
The ‘plausible text generator’ element of this is perfect for mass fraud and propaganda.
3. Sorry, but how do you know what they believe?
My bad, I meant too many C-level executives believe that they actually work.
The reason I believe that is that, as far as I understand, many companies are laying off employees (or at least freezing hiring) in the expectation that AI will do the work. I have no way to quantify how many.