Comment by cubefox
7 days ago
> his writing is terrible and extremely AI-enthusiastic
I disagree; his writing is generally quite good. For example, in a recent article [1] on a hostile Gemini distillation attempt, he gives a significant amount of background, including the relevant historical precedent of Alpaca, which almost any other journalist wouldn't even know about.
1: https://arstechnica.com/ai/2026/02/attackers-prompted-gemini...
For what it's worth, both the article you're linking to and the one this story is about are immediately flagged by AI text checkers as LLM-generated. These tools are not perfect, but they're right more often than they're wrong.
>These tools are not perfect, but they're right more often than they're wrong.
Based on what in particular? The only time I have used them is to have a laugh.
Based on experience, including a fair number of experiments I've done with known LLM output and contemporary, known-human text. Try them for real and you may be surprised. Some good, state-of-the-art tools include originality.ai and Pangram.
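The experiment described above (running a detector over texts whose provenance is known and tallying hits) can be sketched like this. `toy_detector` is a hypothetical stand-in, not the API of originality.ai or Pangram; those are commercial tools whose request formats aren't shown here.

```python
# Sketch of a detector-accuracy experiment: score a detector against
# (text, is_llm) pairs with known provenance. The detector is any
# callable returning True for "LLM-generated".

def evaluate_detector(detector, labeled_texts):
    """Return (correct, total) for a detector over (text, is_llm) pairs."""
    correct = 0
    for text, is_llm in labeled_texts:
        if detector(text) == is_llm:
            correct += 1
    return correct, len(labeled_texts)

if __name__ == "__main__":
    # Toy stand-in: flags text whose sentence lengths are unusually
    # uniform as "LLM-like". Real tools use trained classifiers; this
    # heuristic exists only to make the harness runnable.
    def toy_detector(text):
        lengths = [len(s.split()) for s in text.split(".") if s.strip()]
        if len(lengths) < 2:
            return False
        mean = sum(lengths) / len(lengths)
        var = sum((n - mean) ** 2 for n in lengths) / len(lengths)
        return var < 2.0

    samples = [
        ("The cat sat on the mat. The dog sat on the log. The bird sat on the wire.", True),
        ("I ran out for coffee. Forgot my wallet, went back, and by then the queue was out the door.", False),
    ]
    correct, total = evaluate_detector(toy_detector, samples)
    print(f"{correct}/{total} correct")
```

Swapping `toy_detector` for a call to a real detector's API is all it would take to reproduce the kind of comparison described in the comment.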
A lot of people on HN have preconceived notions here based on stories they read about someone being unfairly accused of plagiarism or people deliberately triggering failure modes in these programs, and that's basically like dismissing the potential of LLMs because you read they suggested putting glue on a pizza once.
2 replies →
> immediately flagged by AI text checkers as LLM-generated
Proof? Which one? I would like to run a few other articles through your checker to test its accuracy.
Hey! I'm not OP, but I've used originality.ai before and it saved my ass. It's super sensitive, but also super accurate.
1 reply →