Comment by jasonjmcghee

9 hours ago

So despite this...

> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices.

I just don't buy it. I'm 99% sure this is written by an LLM.

Can the author... Convince me otherwise?

> This journey begins with simplicity—the kind you encounter on the first day. By the end, you will discover a different kind of simplicity: the kind you earn by climbing through complexity and emerging with complete understanding on the other side.

> Welcome to the Zigbook. Your transformation starts now.

...

> You will know where every byte lives in memory, when the compiler executes your code, and what machine instructions your abstractions compile to. No hidden allocations. No mystery overhead. No surprises.

...

> This is not about memorizing syntax. This is about earning mastery.

Pretty clear it's all AI. The @zigbook account only has 1 activity prior to publishing this repo, and that's an issue where they mention "ai has made me too lazy": https://github.com/microsoft/vscode/issues/272725

  • After reading the first five chapters, I'm leaning this way. Not because of a specific phrase, but because the pacing is way off. It's really strange to start with symbol exporting, then move to while loops, then to slices. It just feels like a strange order. The "how it works" and "key insights" sections also feel like GPT summarization. Maybe that's just a writing tic, but the combination of correct grammar and bad pacing isn't something I feel like a human writer has. Either you have neither (due to lack of practice), or both (because when you do a lot of writing you also pick up at least some ability to pace). Could be wrong though.

It's just an odd claim to make when the text feels very much like AI-generated content and is published anonymously. It's obviously possible to write like this without AI, but I can't remember reading something like this that wasn't written by AI.

It doesn't take away from the fact that someone used a bunch of time and effort on this project.

  • To be clear, I did not dismiss the project or question its value - I simply questioned this claim, as my experience tells me otherwise and they make a big deal, in multiple places, out of it being human-written with "No AI".

    • I agree with you. After reading a couple of the chapters I'd be surprised if this wasn't written by an LLM.

  • Did they actually spend a bunch of time and effort, though? I think you could get an LLM to generate the entire thing, website and all.

    Check out the sleek-looking terminal--there's no ls or cd, it's just an AI hallucination.

I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive.

Pangram[1] flags the introduction as totally AI-written, which I also suspected for the same reasons you did.

[1] one of the only AI detectors that actually works: 99.9% accuracy, 0.1% false positive rate.

  • Keep in mind that Pangram flags many hand-written things as AI.

    > I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013.

    > I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not.

    > Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research.

    > I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI.

    I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running Pangram on some of their polished hand-written stuff.

    https://www.reddit.com/r/teachingresources/comments/1icnren/...

    • How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter.

    • Weird to me that nobody ever posts the actual alleged false-positive text in these criticisms.

      I've yet to see a single real Pangram false positive that was provably published when it says it was, yet there are plenty of comments claiming they exist.

Doesn't mean that the author might not use AI to optimise legibility. You can write stuff yourself and use an LLM to enhance the reading flow. Especially for non-native speakers it is immensely helpful to do so. Doesn't mean that the content is "AI-generated". The essence is still written by a human.

  • > Doesn't mean that the author might not use AI to optimise legibility.

    I agree that there is a difference between entirely LLM-generated and LLM-reworded text. But the statement is unequivocal to me:

    > The Zigbook intentionally contains no AI-generated content—it is hand-written

    If an LLM was used in any fashion, then this statement is simply a lie.

  • But then you cannot write that

    "The Zigbook intentionally contains no AI-generated content—it is hand-written"

I wish AI had the self-aware irony to add vomit emojis to its sycophantic sentences.

> Can the author... Convince me otherwise?

Not disagreeing with you, but out of interest, how could you be convinced otherwise?

  • To me it's another specimen of the "demonstrating personhood" problem that predates LLMs. E.g., someone replies to you on HN or Twitter or wherever: are they a real person worth engaging with? Sometimes it will literally be a person whose behavior is indistinguishable from a bot's, and that's their problem. Convincing signs of life include account age, past writing samples, and topic diversity.

  • I'm not sure, but I try my best to assume good faith / be optimistic.

    This one hit a sore spot because many people are putting time and effort into writing things themselves, and claiming "no AI use" when it is untrue is not fair.

    If the author had a good explanation... idk, maybe they're not a native English writer and used an LLM to translate, and the "no LLMs used" call-out got translated improperly, etc.

You can't just say that a linguistic style "proves" or even "suggests" AI. Remember, AI is just spitting out things it's seen before elsewhere. There are plenty of other texts I've seen with this sort of writing style, written long before AI was around.

Can I also ask: so what if it is or it isn't?

While AI slop is infuriating and the bubble hype is maddening, I'm not sure that calling out every piece of content whose style somebody dislikes as something that "must" be AI, and then debating whether it is or isn't, is not at least as maddening. It feels like all published content now gets debated like this, and I'm definitely not enjoying it.

  • You can be skeptical of anything but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest that it's generated text.

    As to why it matters, doesn't it matter when people lie? Aren't you worried about the veracity of the text if it's not only generated but was presented otherwise? That wouldn't erode your trust that the author reviewed the text and corrected any hallucinations even by an iota?

    • > but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest ai generated text

      Why? Didn't people use such constructions frequently before AI? Some authors probably overused them at the same frequency AI does.

IMO HN should add a guideline about not insinuating things were written by AI. It degrades the quality of the site in much the same way as the behaviors many of the existing rules already target.

Arguably it would be covered by some of the existing rules, but it's become such a common occurrence that it may need singling out.

  • What degrades conversation is lying about something not being AI when it actually is. People pointing out the fraud are right to do so.

    One thing I've learned is that comment sections are a vital defense against AI content spreading, because while you might fool some people, it's hard to fool all the people. There have been times I've been fooled by AI only to see in the comments the consensus that it is AI. So now it's my standard practice to check the comments to see what others are saying.

    If mods put a rule into place that muzzles this community when it comes to alerting others that a fraud is being perpetrated, that just makes this place a target for AI scams.

    • It's 2025, people are going to use technology and its use will spread.

      There are intentional communities devoted to stopping the spread of technology, but HN isn't currently one of them. And I've never seen an HN discussion where curiosity was promoted by accusations or insinuations of LLM use.

      It seems consistent to me with the rules against low-effort snark, sarcasm, insinuating shilling, and ideological battles. I don't personally have a problem with people waging ideological battles about AI, but it does seem contrary to the spirit of the site for so many technical discussions to be derailed so consistently in ways that specifically try to silence a form of expression.

Who cares?

Still better than just nagging.

  • Using AI to write is one thing, claiming you didn't when you did should be objectionable to everyone.

    • This.

      I wouldn't mind a technical person transparently using AI for the writing, which isn't necessarily their strength, as long as the content itself comes from the author's expertise and the generated writing is thoroughly vetted to make sure there's no hallucinated misunderstanding in the final text. At the end of the day this would just increase the amount of high-quality technical content available, because the set of people with both good writing skills and deep technical expertise is much narrower than just the latter.

      But claiming you didn't use AI when you did breaks all trust between you and your readership and makes the end result pretty much worthless, because why read a book if you don't trust the author not to waste your time?

  • My statement refers to this claim: "I'm 99% sure this is written by an LLM."

    The hypocrisy and entitlement mentality that prevails in this discussion is disgusting. My recommendation to the fellow below that he should write a book himself (instead of complaining) was even flagged, demonstrating once again the abuse of this feature to suppress other, completely legitimate opinions.

    • I'm guessing it was flagged because it came off as snark. I've gone ahead and vouched for it, but of course I can't guarantee it won't get flagged again. To be frank, this comment is probably also going to get flagged for the strong language you're using. I don't think either is an abusive use of flagging.

      Additionally, please note that I neither complained nor expressed entitlement. The author owes me as much as I owe them (nothing beyond respect and courtesy). I'm just as entitled to express a criticism as they are to publish a book. I suppose you could characterize my criticism as complaints, but I don't see what purpose that really serves other than to turn up the rhetorical temperature.