Comment by redfloatplane

25 days ago

I think these days if I’m going to be actively promoting code I’ve created (with Claude, no shade for that), I’ll make sure to write the documentation, or at the very least the readme, by hand. The smell of LLM from the docs of any project puts me off even when I like the idea of the project itself, as in this case. It’s hard to describe why - maybe it feels like if you care enough to promote it, you should care to try and actually communicate, person to person, to the human being promoted at. Dunno, just my 2c and maybe just my own preference. I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me, the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji. We all know how easy it is to write code these days. Maybe use some of that extra time to communicate with the humans. I dunno.

Edit: I see you, making edits to the readme to make it sound more human-written since I commented ;) https://github.com/gavrielc/nanoclaw/commit/40d41542d2f335a0...

OP here. Appreciate your perspective but I don't really accept the framing, which feels like it's implying that I've been caught out for writing and coding with AI.

I don't make any attempt to hide it. Nearly every commit message says "Co-Authored-By: Claude Opus 4.5". You correctly pointed out that there were some AI smells in the writing, so I removed them, just like I correct typos, and the writing is now better.
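For anyone unfamiliar with that attribution, it's the standard git commit trailer convention; here's a quick sketch in a throwaway repo (the email address below is just a placeholder, not from my actual commits):

```shell
# Demonstrate the Co-Authored-By trailer in a disposable repo.
git init -q trailer_demo && cd trailer_demo

# Each -m becomes its own paragraph; trailers go in the final one.
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q \
    -m "Add agent runner" \
    -m "Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>"

# The trailer is part of the full commit message:
git log -1 --format=%B
```

GitHub and most forges pick up trailers in this `Name <email>` form and show the co-author on the commit.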

I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me. I'm sharing it because I think it can be useful to other people. Not as production code but as a reference or starting point they can use to build (collaboratively with claude code) functional custom software for themselves.

I spent a weekend giving instructions to coding agents to build this. I put time and effort into the architecture, especially in relation to security. I chose to post while it's still rough because I need to close out my work on it for now - can't keep going down this rabbit hole the whole week :) I hope it will be useful to others.

BTW, I know the readme irked you but if you read it I promise it will make a lot more sense where this project is coming from ;)

  • The problem with LLM-written docs is that I run into so many README.md's where it's clear the author barely read the thing they're expecting me to read, and it's got errors that waste my time and energy.

    I don't mind it if I have good reason to believe the author actually read the docs, but that's hard to know from someone I don't know on the internet. So I actually really appreciate if you are editing the docs to make them sound more human written.

    • I think the other aspect is that if the README feels autogenerated without proper review, then my assumption is that the code is autogenerated without proper review as well. And I think that's fine for some things, but if I'm looking at a repo and trying to figure out if it's likely to work, then a lack of proper review is a big signal that the tool is probably going to fall apart pretty quickly if I try and do something that the author didn't expect.

  • “I don't care deeply about this code. It's not a masterpiece. It's functional code that is very useful to me.” - AI software engineering in a nutshell. Leaving the human artisan era of code behind. Function over form. Substance over style. Getting stuff done.

    • “Human artisan era of code” is hilarious if you’ve worked in any corporate codebase whatsoever. I’m still not entirely sure what some of the snippets I’ve seen actually are, but I can say with determination and certainty that none of it was art.

      The truth about vibe coding is that, fundamentally, it’s not much more than a fast-forward button: if you were going to write good code by hand, you know how to guide an LLM to write good code for you. If, given infinite time, you would never have been able to achieve what you’re trying to get the LLM to do anyway, then the result is going to be a complete dumpster load.

      It’s still garbage in, garbage out, as it’s always been; there’s just a lot more of it now.

    • There should never have been an "artisan era". We use computers to solve problems. You should always have been getting stuff done instead of bikeshedding over nitty-gritty details, like when people in the office spend weeks optimizing code... just to have the exact same output, exact same time, but now "nicer".

      You get paid to get stuff done, period.

    • Was about to comment precisely this: that line does not inspire any confidence.

      And it reminds me of a comment I saw in a thread 2 days ago. One about how RAPIDLY ITERATIVE the environment is now. There are a lot of weekend projects being made over the knee of a robot nowadays and then instantly shared. Even OpenClaw is, to a great extent, an example of that at its current age. Which comes in contrast to the length of time it used to take to get these small projects off the ground in the past. And also in contrast with how much code gets abandoned before and after "public release".

      I'm looking at AI evangelists and I know they're largely correct about AI. I also look at what the heck they built, and either they're selling me something AI related, or they have a bunch of defunct one-shot babies, or mostly tools so limited in scope that they serve only themselves. We used to have a filter for these things. Salesmen always sold promises, so, no change there, just the buzzwords. But the cloutchasers? Those were way smaller in number. People building the "thing" so the "thing" exists mostly stopped before we ever heard of the "thing", because, turns out, caring about the "thing" does not actually translate to the motivation to get it done. Or maintain it.

      What we have now is a reverse survivorship bias.

      The OP stating they don't care about the state of their code during their public release means I must assume they're a cloutchaser. Either they don't care because they know they can do better, which means they shared something that isn't their best, so their motivation with the comment is to highlight the idea. They just wanted to be first. Clout. Or they aren't exactly concerned with whether they can, as they just don't care about code in general and just want the product, be it good or be it not. They believe in the idea enough that they want to ensure it exists, regardless of what's in the pudding. Which means, to me, they also don't care to understand what's in the ingredient list. Which means they aren't best placed to maintain it. And that latter is the kind that, before LLM slop was a concept in our minds, were precisely among the people who would give up halfway through making the "thing".

      See you in 16 weeks OP. I'll eat my shoe then.

    • Code is the means to an end of getting stuff done, not the end in itself as some people seem to think. Yes, being a code artisan is fun, but do not mistake the fun for its ultimate purpose.

    • > AI software engineering in a nutshell. Leaving the human artisan era of code behind. Function over form. Substance over style. Getting stuff done

      The invention of calculators and computers also left the human artisan era of slide rules, calculation charts and accounting. If that's really what you care about, what are you even doing here?

    • I too miss gathering 20 devs in the same room and debating company-wide linter rules. AI ruined the craft \s

  • Hey, you do you, I’m glad you appreciate my perspective. I wasn’t trying to catch you out but I see how it came across that way - I apologise for my edit, I had hoped the ;) would show that I meant it in jest rather than in meanness but I shouldn’t have added it in the first place.

    As I said in my comment, no shade for writing the code with Claude. I do it too, every day.

    I wasn’t “irked” by the readme, and I did read it. But it didn’t give me a sense that you had put in “time and effort” because it felt deeply LLM-authored, and my comment was trying to explore that and how it made me feel. I had little meaningful data on whether you put in that effort because the readme - the only thing I could really judge the project by - sounded vibe coded too. And if I can’t tell if there has been care put into something like the readme, how can I tell if there’s been care put into any part of the project? If there has and if that matters - say, I put care into this and that’s why I’m doing a show HN about it - then it should be evident and not hidden behind a wall of LLM-speak! Or at least, that’s what I think. As I said in a sibling comment, maybe I’m already a dinosaur and this entire topic won’t matter in a few years anyway.

    • There needs to be a word for the feeling of sudden realization that you're reading an AI-generated text (or watching an AI-generated video) where you expected it to be human-authored.

  • For example - I checked src/, and there’s clearly more than ~500 lines of code, ignoring the other dirs. I’m on mobile, maybe someone else can run wc -l on the repo and confirm. Is there a reason this number is inaccurately stated? Immediately makes me wary of the vibe coded nature of it.
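    A quick way to check would be something like this (a sketch, runnable against any checkout; `demo_src` below is a throwaway stand-in for the repo's actual `src/`):

    ```shell
    # Throwaway directory so the commands are runnable anywhere;
    # against the real repo you'd point these at src/ instead.
    mkdir -p demo_src
    printf 'line1\n\nline2\n' > demo_src/a.ts
    printf 'line3\n' > demo_src/b.ts

    # Raw line count per file plus a total (what wc -l reports):
    wc -l demo_src/*.ts

    # Non-blank lines only, closer to "lines of code":
    total=$(cat demo_src/*.ts | grep -cv '^[[:space:]]*$')
    echo "$total"
    ```

    Note that `wc -l` counts blank lines and comments too, so it will always read a bit higher than a "lines of code" figure.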

  • So you created a project, implicitly to help individuals keep their computers and credentials secure, but you can’t be bothered to proofread a readme?

    I get using AI, I do, all day every day it feels like, but this comes off as not having respect for others’ time.

I 100% agree, reading very obviously AI-written blogs and "product pages"/readmes has turned into a real ick for me.

Just something that screams "I don't care about my product/readme page, why should you".

To be clear, no issue with using AI to write the actual program/whatever it is. It's just the readme/product page which super turns me off even trying/looking into it.

Project releases with LLMs have grown to be less about the functionality and more about convincing others to care.

Before, the proof of work of code in a repo was by default a signal of a lot of thought going into something. Now this flood of code in these vibe coded projects is by default cheap and borderline meaningless. Not throwing shade or anything at coding assistants. Just the way it goes.

  • Been writing code professionally for almost 3 decades.

    Not one line of code I wrote 20 years ago has any more economic value than East German currency.

    All code is social ephemera. Ethno objects. It lacks the intrinsic value of something like indoor plumbing.

    It's electrical state in a machine. Our only real goal was to convince people the symbols on the screen were coupled to some real world value while it is 100% decoupled from whatever real physical quantity we are tracking.

    We've all been Frank from Always Sunny; we make money, line go up. We don't define truth. The churn of physics does that.

I agree 100% with you. It's even worse, though. They haven't checked whether the README contains hallucinations or not (spoiler: it does):

https://news.ycombinator.com/item?id=46850317

  • I don’t want to come off like I’m shitting on the poster here. I’ve definitely made that kind of careless mistake, probably a dozen times this week. And maybe we’re heading to a future where nobody even reads the readme anymore, because an agent can just conjure one from the source code at will, so maybe it actually straight up doesn’t matter.

    I’ve just been thinking about what it means to release software nowadays, and I think the window for releasing software for clout and credit is closing. Creating software basically requires a Claude subscription and an idea now, so fewer people are impressed by the thing simply existing, and the standard of care for a project released for that aim (of clout) needs to be higher than it maybe needed to be in the past.

    But who knows, I’m probably already a dinosaur in today’s world, and I really don’t mean to shit on the OP - it’s a good idea for a project and it makes a lot of sense for it to exist. I just can’t tell if any actual care has gone into it, and if not, why promote?

    • > I don’t want to come off like I’m shitting on the poster

      Why not, if they're making people read AI slop without checking it first? They deserve the shit-nudge to fix it.

the main reason I'd want a person to write or at least curate readmes is that models have, at least for the time being, this tendency to make confident and plausible-sounding claims that are completely false (hallucination applied to claims about the stuff they just made)

so long as this is commonplace I'd be extremely sceptical of anything with some LLM-style readmes and docs

the caveats to this are that LLMs can be trained to fool people with human-sounding and imperfectly written readmes, and that although humans can quickly verify that things compile and seem to produce the expected outputs, there's deeper stuff like security issues and subtle userspace-breaking changes

track-record is going to see its importance redoubled

You will definitely like Josh Mock's recent post: https://joshmock.com/post/2026-agents-md-as-a-dark-signal/

  • I am confused by “senior-learning engineer”; so he’s learning as a senior, learning at a “senior” level in a “continuous learning”, “life long learning” kind of way? What is senior-learning? Searching for it only comes up with learning for seniors programs.

    • I'm looking at it now and it says "senior-leaning" not "senior-learning"

      Might've been a typo they've since fixed.

      >I am, as many senior-leaning engineers are, ambivalent about whether AI is making us more productive coders

FWIW, this is a variation of the age-old thing about open source.

It isn’t “have it your way”: he graciously made the code available; use it or leave it.

> I’d rather read a typo-ridden five line readme explaining the problem the code is there to solve for you and me,the humans, not dozens of lines of perfectly penned marketing with just the right number of emoji

Don't worry, bro. If enough people are like you, there will be a fully automatic workflow to add typos into AI writing.

  • As a practical matter, if it tones down the AI sleuthing vs. reading, it might be a good idea.

    Assuming the written/generated text is well written/generated, of course.