Comment by mendelmaleh
7 hours ago
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.
I think it's time to have a badge for non-LLM content, and avoid the rest.
There is also Brainmade: https://brainmade.org/
What's stopping AI-made content from including this as well?
I imagine it's kind of like "What's stopping someone from forging your signature on almost any document?" The point is less that it's hard to fake, and more that it's a line you're crossing where everyone agrees you can't say "oops I didn't know I wasn't supposed to do that."
The name seems odd to me, because I think it's fine to describe things as a digital brain, especially since the word brain applies not only to humans but also to organisms as simple as a 959-cell roundworm with 302 neurons.
Also the logo seems to imply a plant has taken over this person and the content was made by some sort of body-snatched pod person.
If this gets any traction, AI bros on Twitter will put it on their generated images just out of spite.
There also seem to be https://notbyai.fyi/ and https://no-ai-icon.com/
Even for content that isn't directly composed by an LLM, I bet there'd be value in an alerting system that could ingest your docs and code+commits and flag places where behaviour referenced by the docs has changed and may need updating.
This kind of "workflow" LLM use has the potential to deliver a lot of value even in a scenario where the final product is human-composed.
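As a rough sketch of that idea (Python, with an assumed docs/ directory of Markdown files and a crude regex heuristic for "changed functions"; this is not any existing tool, just an illustration):

    #!/usr/bin/env python3
    """Hypothetical doc-staleness checker: flag docs that mention code changed in the last commit."""
    import re
    import subprocess
    from pathlib import Path

    DOCS_DIR = Path("docs")  # assumed layout; adjust to your repo

    # Collect names of functions touched by the most recent commit.
    # Very rough heuristic: scan changed lines for fn/def/func definitions.
    diff = subprocess.run(
        ["git", "diff", "HEAD~1", "HEAD", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    changed_fns = set(re.findall(r"^[+-].*?\b(?:fn|def|func)\s+(\w+)\s*\(", diff, re.M))

    # Flag every doc line that references one of those names, so a human (or an LLM)
    # can decide whether the prose still matches the behaviour.
    for doc in DOCS_DIR.rglob("*.md"):
        for lineno, line in enumerate(doc.read_text(encoding="utf-8").splitlines(), 1):
            for name in changed_fns:
                if name in line:
                    print(f"{doc}:{lineno}: mentions `{name}`, which changed in the last commit")

From there, the flagged lines could be handed to a human reviewer, or to an LLM along with the relevant diff hunks, for a "does the prose still match the code?" pass.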
I like these ones:
https://cadence.moe/blog/2024-10-05-created-by-a-human-badge...
> Most programming languages hide complexity from you—they abstract away memory management, mask control flow with implicit operations, and shield you from the machine beneath. This feels simple at first, but eventually you hit a wall. You need to understand why something is slow, where a crash happened, or how to squeeze every ounce of performance from your hardware. Suddenly, the abstractions that helped you get started are now in your way.
> Zig takes a different path. It reveals complexity—and then gives you the tools to master it.
> This book will take you from Hello, world! to building systems that cross-compile to any platform, manage memory with surgical precision, and generate code at compile time. You will learn not just how Zig works, but why it works the way it does. Every allocation will be explicit. Every control path will be visible. Every abstraction will be precise, not vague.
But sadly, people like the prompter of this book will lie and pretend to have written things themselves that they did not. Those are the first three paragraphs, by the way, and a bingo for every sign of AI.
These posts are getting old.
I had a discussion on some other submission a couple of weeks back, where several people were arguing "it's obviously AI generated" (the style, by the way, was completely different to this, with quite a few expletives...). When I put the text into 5 random AI detectors, all of which except one (which said mixed, 10% AI or so) said 100% human, I was downvoted and the argument became "AI detection tools can't detect AI". Yet somehow the same people claim there are 100% clear telltale signs that it's AI; why those detection tools can't pick up on those signs is baffling to me.
I have the feeling that the whole "it's AI" shtick has become a synonym for "I don't like this writing style".
It really does not add to the discussion. If people immediately posted "there are spelling mistakes, this is rubbish", they would rightfully get downvoted, but somehow saying "it's AI" is acceptable. Would the book be any more or less useful if somebody had used AI to write it? So what is your point?
I ran the introduction chapter through Pangram [1], which is one of the most reliable AI-generated text classifiers out there [2] (with a benchmarked accuracy of 99.85% on long-form text), and it reports high confidence that the chapter was AI-generated. It's also intuitively obvious if you play a lot with LLMs.
I have no problem at all reading AI-generated content if it's good, but I don't appreciate dishonesty.
[1]: https://www.pangram.com/
[2]: https://arxiv.org/pdf/2402.14873
Check out the other examples presented in this thread or read some of the chapters. I'm pretty sure the author used LLMs to generate at least parts of this text. In this case that would be particularly outrageous, since the author explicitly advertises the content as 100% handwritten.
> Would the book be any more or less useful if somebody used AI for writing it?
Personally, I don't want to read AI-generated texts. I would appreciate it if people were upfront about their LLM usage. At the very least, they shouldn't lie about it.
The em dashes?
There's also the classic "it's not just X, it's Y", adjective overuse, the rule of three, total nonsense ("manage memory with surgical precision"? what does that mean?), etc. One of these is excusable, but text composed entirely of AI indicators is either deliberately written to mimic AI style or the product of AI.
Meh. I mean, who's it for? People should adopt the stance that everything on the internet is AI and make decisions from there. If you start trusting people who tell you they're not using AI, you're setting yourself up to be conned.
Edit: I wrote this before I read the rest of the thread, where everyone is pointing out this is indeed probably AI, so right off the bat the "AI-free" label is conning people.