> I had Claude3 read it for claims about quality, security, privacy. It found none.
You might want to consider actually reading things for yourself, rather than assuming the magic robots are telling the truth (the magic robot did not tell you the truth, here).
I had ChatGPT read your comment and asked it "Is this a useful methodology of evaluating articles?"
It cautioned me that "it's important to recognize the limitations of such tools" and said:
> Tools like Claude3 may struggle with understanding the context of certain claims or the overall tone and intent of the article. They might flag statements that are not actually problematic or miss important context that affects the interpretation of claims.
Try some other AI. Maybe Claude couldn't read the text in the screenshots.
No, don't try some other AI, actually read things, bloody hell.