
Comment by nonethewiser

19 hours ago

>We have always been wary of AI generated code, but felt everyone is free to do what they want and experiment, etc. But, one of our own, Andy Kirby, decided to branch out and extensively use Claude Code, and has decided to aggressively take over all of the components of the MeshCore ecosystem: standalone devices, mobile app, web flasher and web config tools.

>And, he’s kept that small detail a secret - that it’s all majority vibe coded.

Without any more context, I am highly suspicious of this framing.

1) Someone "taking over" the ecosystem seems like an entirely different issue. How is this possible? Does it mean he's publishing things and people want to use them?

2) Is the code bad? It sounds like they had no idea he was using AI. That seems to imply there was nothing wrong with the code as-is. Why not judge it on its merits?

>The team didn’t feel it was our place to protest, until we recently discovered that Andy applied for the MeshCore Trademark (on the 29th March, according to filings) and didn’t tell any of us.

Taking this at face value, this is indeed hostile and bad.

But no, I'm not going to get outraged that someone is simply using Claude Code.

Agreed. I use MeshCore and have multiple repeaters set up. I don't care about people using AI-assisted coding, but I think it should be disclosed, especially if it's closed source.

Now the trademark takeover seems crazy, especially given that Andy hasn't contributed to the GitHub project, only personal, for-profit add-ons.

I do also think that the MeshCore core team have "tacked on" and tried to enforce a stronger narrative driven by their anti-AI-coding bias.

  • It wasn't AI-assisted coding, it was vibe coding from someone with no real coding background. A communication protocol can't be vibe coded: how do you ensure security if the person is unable to understand what the tool created?

    Especially when they tried to hide that they were using those tools in the first place.

  • > only personal for profit add ons

    In that context it is quite logical to take out a trademark once the project is mature enough, so you can profit off other people's work.

    Considering their user base does not like the idea of hidden vibe coding, I don't think this is bias but a sane rationalisation.

    • There’s a lot of framing in how questions are asked. I’m going to bet asking the community “Would you like more features if they’re made using AI assistance?” is going to get wildly different results.


Disagree: I applaud them for doing this. Anyone who says they've reviewed the 1000 lines of slop an AI has spat out is simply lying to everyone, and potentially to themselves, and has never done a single extensive code review in their life. Reading 1000 lines of text is one thing; reading code and analyzing its complexity implications and edge cases, no chance.

The "I've reviewed the slop" response is the reason why 0-days and leaks are happening more than ever: no one really reads the code, because "I vibe-coded it". An extensive and comprehensive code review can take days, and no slopper has ever done that. I'll get a 100-line PR, and going over it can easily take hours, especially when something looks wrong and I need to test it. It's also a good reason why I'd never trust the "You are absolutely correct, apologies for the oversight, here's a revised version:"

> Is the code bad? It sounds like they had no idea he was using AI. That seems to imply there was nothing wrong with the code as-is. Why not judge it on its merits?

Anyone who has used AI at all knows this isn't how it works. AI is extremely good at producing plausible-but-wrong outputs. It's literally optimised for plausibility, which happens to coincide with correctness a lot of the time. When it doesn't, you get code that seems good and is therefore very difficult to judge on its merits.

With human written code it's a lot easier to tell if it's good or not.

There are exceptions to this, usually when you have some kind of oracle, like the security work that used AddressSanitizer to verify security bugs, or when you're cloning a project and can easily compare its behaviour to the original. Most of the time you don't have that luxury, though.
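The "cloning with an oracle" case above can be sketched concretely: when a trusted reference implementation exists, you can differentially test the new code against it on many inputs instead of trusting a line-by-line read. Both functions below are hypothetical stand-ins (a toy run-length encoder), not anything from MeshCore:

```python
import random

def reference_rle(s):
    # Trusted reference: naive run-length encoding, (char, run_length) pairs.
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def candidate_rle(s):
    # The implementation under review (e.g. AI-generated): same contract,
    # different structure.
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def differential_test(trials=1000):
    # The reference acts as the oracle: any disagreement on a random
    # input is a concrete, reproducible bug report.
    rng = random.Random(0)  # fixed seed so failures are reproducible
    for _ in range(trials):
        s = "".join(rng.choice("ab") for _ in range(rng.randrange(0, 12)))
        assert candidate_rle(s) == reference_rle(s), repr(s)
    return trials

print(differential_test())  # prints 1000: every random input agreed
```

This doesn't prove the candidate correct, but it turns "judge it on its merits" from a days-long read into a mechanical check, which is exactly the luxury the comment says you usually lack.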

  • It's also easy to overwhelm reviewers with far more code than they can possibly review. And the hardest stuff to review is where the code looks totally fine at surface level but takes long hours of actual testing to make sure it works.

  • Do folks not write tests and review their own code (AI generated or not)?

    Also, citation needed:

    > With human written code it's a lot easier to tell if it's good or not