Comment by vidarh

6 days ago

Most human criticism doesn't involve deep analysis either; it's just reading the text and comparing it to what we know about similar texts.

> Even if there was nothing wrong at all Claude would still find something to critique

And the entire point was that you claimed 'Both ChatGPT and Claud always say something like, "a few grammar corrections are needed but this is excellent!"'.

Which is clearly not the case, as demonstrated. What you get out will depend on how much effort you're willing to put into prompting to specify the type of response you want, because these models certainly lack "personality" and will try to please you. But that includes trying to please you when your prompt specifies how you want them to treat the input.

> It doesn't really have an understanding of the text at all.

This is a take that might have made sense a few years ago. It does not make sense with current models at all, and to me it typically suggests a lack of experience with them. In my experience, current models can often spot reasoning errors in text provided to them that the human writer of said text refuses to acknowledge is there.

I suggest pasting some bits of text into any of the major models and asking them to explain what they think the author might have meant. They don't get it perfectly right all of the time, but they can go quite in-depth and provide analysis that well exceeds what a lot of people would manage.

It literally said that the word "this" in the second sentence was bad, despite that word not appearing in the sentence.

  • Yes, it made a tiny error. That changes nothing of what I wrote, nor does it address any of it.

    You seem to be stuck in a loop.