Comment by jbreckmckye

7 days ago

One of the great benefits of AI tools is that they allow anyone to build stuff... even if they have no ideas or knowledge.

One of the great drawbacks of AI tools is that they allow anyone to build stuff... even if they have no ideas or knowledge.

It used to be that Show HN was a filter: in order to show stuff, you had to have done work. And if you did the work, you probably thought about the problem; at the very least, the problem was real enough to make solving it worthwhile.

Now there's no such filter function, so projects are built whether or not they're good ideas, by people who don't know very much.

People who got "enabled" by AI to produce stuff just need to learn to keep their "target audience of one" projects to themselves. Right now it feels like those fresh parents who show every person they meet the latest photos and videos of their baby, thinking everybody will find them super cute and interesting.

  • Yeah, I think it's sort of an etiquette thing we haven't arrived at yet.

    It's a bit parallel to that thing we had in 2023 where dinguses went into every thread and proudly announced what ChatGPT had to say about the subject. Consensus eventually became that this was annoying and unhelpful.

    • At times, it seems like the only thing that has changed is that the dinguses don't bother crediting ChatGPT.

    • My manager at work still does this in work chats and it drives me a bit crazy. If I want to know what an LLM's take on the subject is, I can just go ask it.

      The chat is for people to discuss people stuff.

    • Is it? I'm very happy with my CAD tools, but I'm only going to show off the physical stuff I make with them, rather than the tools along the way.

    • >went into every thread and proudly announced what ChatGPT had to say

      That is what Show HN has become. Nobody cares what code Claude shat out in response to a random person's prompt. If I cared, I would be prompting Claude myself.

  • >People who got "enabled" by AI to produce stuff just need to learn to keep their "target audience of one" projects to themselves.

    This is what I do. I have tons of cool (to me) shit I have built with LLM assistance. I only wheel my dumb stuff out if it's specifically relevant to someone.

    But I am also not doing it as a resume hobby, just as a hobby. A lot of people are trying to jump from hobby to career. The recognition is the point for some.

  • Some folks definitely give off a "How do you do, fellow coders?" vibe

  • I get that, and mostly agree, but some people really will have the cutest kitten or the most beautiful sunset photo. Hopefully we figure out how to discern as fast as we can churn.

    • You get diminishing returns up there, though: the cutest cat photo in the world would look remarkably similar to the next 2,000 photos on the cutest-cat-photo leaderboard. I feel we should filter for diverse topics rather than best-by-metrics; perhaps we'll want to discern on concern.

  • Like when Instagram and digital photography took off: that is not what you will get, but you will see a lot of revealing body parts.

  • More like the fresh parents who start schooling everyone else on how to parent…

The other element here is that the vibecoder hasn't done the interesting thing, they've pulled other people's interesting things.

Let's see, how to say this in a less inflammatory way...

(just did this) I'm sitting in a hotel and wondered if I could do some fancy processing on my laptop's video feed to turn it into a wildlife cam that captures the birds that keep flying by.

I ask Codex to whip something up. I iterate a few times; I ask why processing is slow, and it suggests a DNN. I tell it to go ahead and add GPU support while it's at it.

In a short period of time, I have an app that processes video, does all of the detection, applies the correct models, and works.

It's impressive _to me_ but it's not lost on me that all of the hard parts were done by someone else. Someone wrote the video library, someone wrote the easy Python video parsers, someone trained and supplied the neural networks, someone did the hard work of writing a CUDA/GPU support library that 'just works'.

I get to slap this all together.
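To give a flavor of the "slap it together" part, here is a minimal sketch of just the motion-detection core, using plain NumPy frame differencing as a stand-in for the real stack (video library, pretrained DNN, GPU support). The `motion_detected` helper and its thresholds are illustrative, not taken from the actual project:

```python
import numpy as np

def motion_detected(prev_frame, frame, threshold=25, min_changed=0.01):
    """Return True if enough pixels changed between two grayscale frames.

    prev_frame, frame: 2-D uint8 arrays (grayscale frames).
    threshold: per-pixel intensity delta that counts as 'changed'.
    min_changed: fraction of pixels that must change to trigger a capture.
    """
    # Cast to a signed type so uint8 subtraction doesn't wrap around.
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > threshold).mean()  # fraction of changed pixels
    return changed >= min_changed

# Simulated frames: a static background, then a 'bird' passing through.
background = np.zeros((120, 160), dtype=np.uint8)
with_bird = background.copy()
with_bird[40:60, 70:100] = 255  # bright blob entering the frame

print(motion_detected(background, background))  # static scene: no capture
print(motion_detected(background, with_bird))   # blob triggers a capture
```

In the real version, the frames would come from a video-capture library and any frame that trips this check would be handed to a detector model, which is exactly the part someone else already wrote.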

In some ways, that's the essence of software engineering. Building on the infinite layers of abstractions built by others.

In other ways, it doesn't feel earned. It feels hollow in some way and demoing or sharing that code feels equally hollow. "Look at this thing that I had AI copy-paste together!"

  • To me, part of what makes it feel hollow is that if we were to ask you about any of those layers, and why they were chosen or how they worked, you would probably stumble through an answer.

    And for something that is, as you said, impressive to you, that's fine! But the spirit of Show HN is that there was some friction involved, some learning process that you went through, that resulted in the GitHub link at the top.

  • Idk.

    I saw this come out because my boss linked it as a faster chart lib. It is AI slop, but people loved it. [https://news.ycombinator.com/item?id=46706528]

    I knew I could do better, so I made a version that is about 15 kB and solves a fundamental issue with WebGL context limits while being significantly faster.

    AI helped with a lot of the code, especially around the compute shaders. However, I had the idea for how to solve the context limits. I also pushed past several perf bottlenecks that stemmed from my fundamental lack of WebGPU knowledge, and in the process deepened my understanding of it. Pushing the bundle size down also stretched my understanding of JS build ecosystems and why web workers still aren't more common (the special bundler settings for workers break often).

    Btw, my version is on npm/GitHub as chartai. You tell me if that is AI slop. I don't think it is, but I could be wrong.

I have yet to see any of these that wouldn't have been far better off self-hosting an existing open source app. This habit of having an LLM either clone (or, even worse, cobble together a vague facsimile of) existing software and claiming it as your own is just sort of sad.

  • I actually came to this realization recently. I'm part of a modding community for a game, and we are seeing an influx of vibe coded mods. The one distinguishing feature of these is that they are entirely parasitic. They only take, they do not contribute.

    In the past, new modders would often contribute to existing mods to get their feet wet and quite often they'd turn into maintainers when the original authors burnt out.

    But vibe coders never do this. They basically unilaterally just take existing mods' source code, feed this into their LLM of choice and generate a derivative work. They don't contribute back anything, because they don't even try to understand what they are doing.

    Their ideas might be novel, but they don't contribute in any way to the common good in terms of capabilities or infrastructure. It's becoming nigh impossible to police this, and I fear the endgame is a sea of AI-generated slop which will inevitably implode once the truly innovative stuff dies and the people who actually do the work stop doing so.

    • Even before vibe coding blew up I think this problem existed, but was slowly increasing. Vibe coding accelerated the problem.

      I think "good for you, you built a mod, but now I'm searching through 1,000 entries that are all the same thing and mostly shit".

      I think it's like many popular forums or Reddit communities. The shitty comments float to the top and all the replies are some meme or "me too". It doesn't help anyone else and it feels masturbatory.

    • That's the essence of the corporations behind these commercial products as well. Leech off all the work of others and then sell a product that regurgitates said work without attribution or any contribution back.

Sometimes 'gatekeeping' is a good thing.

  • It often is. The concept of "gatekeeping" becoming well known and something people blindly rail against was a huge mistake. Not everything is for everyone, and "gatekeeping" is usually just maintaining standards.

    • Ideally the standard would just be someone's genuine interest in a project or a hobby. In the past, taking the effort to write code often was sufficient proof of that.

      AI agent coding has done to writing software something like what brands have done to social media.

  • I think the word you're looking for is curation. Which people who don't pass jury might call gatekeeping.

    • I'm not sure what distinction you're trying to make, but it seems like you might be distinguishing between keeping out substandard work versus keeping out the submitters.

      In which case, I kinda disagree. Substandard work is typically submitted by people who don't "get it" and thus either don't understand the standard for work or don't care about meeting it. Either way, any future submission is highly likely to fail the standard again and waste evaluation time.

      Of course, there's typically a long tail of people who submit one work to a collection and don't even bother to stick around long enough to see how the community reacts to that work. But those people, almost definitionally, aren't going to complain about being "gatekept" when the work is rejected.

To be fair, one probably needs at least one idea in order to build stuff even with AI. A prompt like "write a cool computer program and tell me what it does" seems unlikely to produce something that even the author of that prompt would deem worthy of showing to others.

Agreed, and we're going to see this everywhere that AI can touch. Our filter functions for books, video, music, etc. are all now broken. And worst of all, that breaking coincides with an avalanche of slop, making detection even harder.

There is this real disconnect between what the visible level of effort implies you've done, and what you actually have to do.

It's going to be interesting to see how our filters get rewired for this visually-impressive-but-otherwise-slop abundance.

  • My prediction is that reputation will be increasingly important, certain credentials and institutions will have tremendous value and influence. Normal people will have a hard time breaking out of their community, and success will look like acquiring the right credentials to appear in the trusted places.

    • That's been the trajectory for at least the last 100 years, an endless procession of certifications. Just like you can no longer get a decent-paying blue collar job without at least an HS diploma or equivalent, the days of working in tech without a university education are drying up and have been doing so for a while now.

  • I have a sci-fi series I've followed religiously for probably 10 years now. It's called the 'Undying Mercenaries' series. The author is prolific, like he's been putting out a book in this series every 6 months since 2011. I'm sure he has used ghost writers in the past, but the books were always generally a good time.

    Last year though I purchased the next book in the series and I am 99% sure it was AI generated. None of the characters behaved consistently, there was a ton of random lewd scenes involving characters from books past. There were paragraphs and paragraphs of purple prose describing the scene but not actually saying anything. It was just so unlike every other book in the series. It was like someone just pasted all the previous books into an LLM and pushed the go button.

    I was so shocked and disappointed that I paid good money for some AI slop that I've stopped following the author entirely. It was a real eye-opener for me. I used to enjoy just taking a chance on a new book, because the fact that it made it through publishing at least implied some minimum quality standard, but now I'm really picky about which books I pick up because the quality floor is so much lower than in the past.

    • Yes, I have not bought a few books after reading their free chapters and getting suspicious.

      Honestly: there is SO much media, certainly for entertainment. I may just pretend nothing after 2022 exists.

    • If you've got some time to burn, write the author and/or his publisher and let them know that the guy's new ghostwriter sucks shit. If this is seriously making you consider not picking up the next book in the series, be sure to mention that.

      If folks just stop purchasing the new books, they can imagine a reason for the lost sales that's convenient for them, but if folks tell them why they stopped purchasing, there's a lot less room for that kind of nonsense.

  • People will build AI 'quality detectors' to sort and filter the slop. The problem is of course it won't work very well and will drown all the human channels that are trying to curate various genres. I'm not optimistic about things not all turning into a grey sludge of similar mediocre material everywhere.

    • Exactly, and we will have those who will "game" the "detectors" like they already "game" the social media "algorithms" :\

    • I believe that a history of written work verified by stylometry will be a viable reputation system.

"One of the great benefits of AI tools is that they allow people to build stuff, even if they have no ideas or knowledge."

Wait, what? That's a great benefit?

  • Sure, there are many examples (I have a few personal ones as well) where I'm just building small tools and helpers for myself that I just wouldn't have built before because it would take me half a day. Or non-technical people at work who now build macros and scripts for Google Sheets, automating little things they never would have attempted before.

    • I'm in the same boat. I use AI to generate tons of small things for work. None of it is shareable online because it's unique to my workplace and it's not some generic reusable tool, and for the most part the scripts are boring. Their most interesting attribute is how little effort they took, not their originality or grandness of scale.

> so projects are built whether or not they're good ideas

Let’s be honest, this was always the case. The difference now is that nobody cares about the implementation, as all side projects are assumed to be vibecoded.

So when execution is becoming easier, it’s the ideas that matter more…

  • The appearance of execution is much easier; quality execution (producing something anybody wants to use) might be easier, or maybe not.

    • This is something that I was thinking about today. We're at the point where anyone can vibe code a product that "appears" to work. There's going to be a glut of garbage.

      It used to be that getting to that point required a lot of effort. So, in producing something large, there were quality indicators, and you could calibrate your expectations based on this.

      Nowadays, you can get the large thing done - meanwhile the internal codebase is a mess and held together with AI duct-tape.

      In the past, this codebase wouldn't scale, the devs would quit, the project would stall, and most of the time the things written poorly would die off. Not every time, but most of the time -- or at least until someone wrote the thing better/faster/more efficiently.

      How can you differentiate between 10 identical products, 9 of which were vibecoded and 1 of which wasn't? The one which wasn't might actually recover your backups when it fails. The other 9, whoops, never tested that codepath. Customers won't know until the edge cases happen.

      It's the app store effect, but magnified and applied to everything. Search for a product, find 200 near-identical apps, all somehow "official" -- 90% of which are scams or low-effort trash.
