Comment by simonw
8 hours ago
I expect most side-projects are being built with AI-assistance now. Side projects are typically time constrained - if AI saves you time, why wouldn't you use it?
They're also the ideal place to try out new AI tools that your professional work might not let you experiment with.
(The headline of this piece doesn't really do it justice - it misuses "vibe coded" and fails to communicate that the substance of the post is about visual design traits common with AI-generated frontends, which is a much more interesting conversation to be having. UPDATE: the headline changed, it's now much better - "Show HN submissions tripled and now mostly have the same vibe-coded look" - it was previously "Show HN submissions tripled and are now mostly vibe-coded")
My biggest issue with LLM‑assisted webpages (Claude Code is especially egregious) is the lack of respect for basic web content accessibility guidelines.
The number of dark-mode sites I've seen where the text (and subtext) are various shades of dark brown or beige is just awful. For reference, you want a contrast ratio between the text and background of at least 4.5:1 (the WCAG AA threshold for normal-size text) to be on the safe side.
This isn't even that hard to fix - hell you can add the Web Content Accessibility Guidelines to a skill.
https://webaim.org/resources/contrastchecker
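The math behind those checkers is simple enough to sanity-check yourself. A minimal sketch in Python of the WCAG relative-luminance and contrast-ratio formulas (the hex colors are just illustrative):

```python
def _linearize(channel: int) -> float:
    # Convert an 8-bit sRGB channel to linear light (WCAG 2.x definition)
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    # Weighted sum of linearized R, G, B per the WCAG formula
    hex_color = hex_color.lstrip("#")
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: str, bg: str) -> float:
    # (lighter + 0.05) / (darker + 0.05); ranges from 1:1 to 21:1
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white: the maximum possible ratio
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
# A beige-on-dark-brown combo of the kind described above fails the 4.5:1 AA bar
print(contrast_ratio("#8a7f66", "#3b322a") < 4.5)  # True
```

Checks like this are trivial to drop into a test suite, which is exactly the kind of automation you can tell an agent to run.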
Were/Are human-generated side projects better in this respect?
I assume not, but the point here is that a new tool is homogenizing these projects, and given its scale it matters more that this homogeneous output is held to a higher standard.
A hundred self-taught devs not implementing accessibility standards is a different problem from a school teaching 100 students with a curriculum that lacks those standards.
The "default" light-mode look of most popular UI frameworks wouldn't have that same issue unless you put a lot of time into customizing your own styling, which most side projects wouldn't bother with (unless that look and feel was the point of the project). There certainly would be poor UI decisions but more likely in layout/placement/navigation, which could still be problematic for accessibility but probably not in a "is this color scheme even readable" kind of way.
Plus given time constraints, they generally wouldn't try to cram huge amounts of tiny text into every visible inch of the page without some intentional reason to do so (using that somewhat hard to read console-ish font Claude seems to love as a default).
Maybe the dark mode/terminal font/high text density look presents as "cool looking" at first glance for one-shotting evals so they've all converged on it. But to OP's point, this seems like a solvable (or at least mitigable) issue if models or harnesses were concerned about it.
I've genuinely had solid results from telling Claude "... and make sure it has good accessibility".
I could see that. I’ve found that the more specificity you add to your prompt and less freedom you give Claude Code to kind of just “do its own thing”, the better your results will be.
FWIW, there’s also an official frontend-design skill for CC [1]. A while back I incorporated some of the more relevant guidance from WCAG into it [2].
[1] - https://github.com/anthropics/claude-code/blob/main/plugins/...
[2] - https://www.w3.org/WAI/standards-guidelines/wcag
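For anyone rolling their own rather than patching the official one: a skill is just a directory containing a SKILL.md with YAML frontmatter. A minimal sketch - the name, description, and rule wording here are all made up, and the frontmatter schema may have changed, so check the current Claude Code docs:

```markdown
---
name: wcag-contrast
description: Apply WCAG contrast guidance when generating or reviewing frontend styling
---

When writing CSS or choosing colors:

- Normal-size body text needs a contrast ratio of at least 4.5:1 against
  its background (WCAG 2.x AA); large text needs at least 3:1.
- Never rely on color alone to convey state (errors, links, selection).
- Verify the palette with a contrast checker before committing it.
```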
Something I've noticed when people complain about stuff like accessibility or other things that LLMs do "wrong", it really is a case of "you're holding it wrong." The LLM does indeed know how to do it right and it sometimes does so autonomously but when it doesn't, you can simply ask it to do so.
In other words, I've found that people like the above think of LLMs as fairly static, as if we couldn't change their behavior with a simple sentence instead of complaining about it. It's strange, to me at least.
If you have some good sources let me know, I'll turn it into a guide that Claude can read
I use the W3 preliminary guidelines - you could try adapting them into a bespoke skill as a good start.
https://www.w3.org/WAI/test-evaluate/preliminary
Another possibility (although I’ve never actually tried this myself) is an MCP server that someone built specifically to connect to Lighthouse, which includes accessibility testing as part of its benchmarks.
https://github.com/priyankark/lighthouse-mcp
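If you wanted to wire that up, MCP servers are typically registered in a project-level .mcp.json. A rough sketch of what that might look like - the package name and npx invocation here are assumptions, so check the repo's README for the actual command:

```json
{
  "mcpServers": {
    "lighthouse": {
      "command": "npx",
      "args": ["-y", "lighthouse-mcp"]
    }
  }
}
```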
I think this is a second order thing when you are building a side project.
[dead]
I think it's fine, so long as the intent is to refine the thing after you've validated the product idea and direction. There are a million things to optimize in web pages, and AI can't simply one-shot good decisions yet.
Honestly, my accessibility on my apps/websites is much better now with AI because you can just tell AI to do it (and run automated tests to validate it worked) vs not doing it at all for a small side project with 2 users.
Just chiming in to say I don't care at all about accessibility and I find it bewildering that every thread sharing some project has a comment like this.
People assume that accessibility is all about some small minority of less abled people who can't "read good", but it's a broad category that affects all users. If you build following the guidelines then you end up with a quality product that can be used by people who stumbled upon it while doom-scrolling instead of enjoying their beach vacation. The best analogy I heard was about drop-kerbs/curb-cuts... people wonder why we're catering for a small minority of wheelchair users everywhere and then they have a kid (or wheel luggage from the airport) and realize how great they are.
> I find it bewildering that every thread sharing some project has a comment like this.
Those of us who care that technology be accessible to as many people as possible, such as low vision users, find it relevant. You can ignore it if you wish.
Just chiming in to say that the idea someone would "not care at all about accessibility" (and openly state as much) is bewildering to me.
> Just chiming in to say I don't care at all about accessibility
See Rawls 'Original Position' on why you should care: https://en.wikipedia.org/wiki/Original_position
> chiming in to say I don't care at all about accessibility
I hope you remember that well into your adult life.
Your hearing may be lost. Even if you could still read, the website doesn't offer an accurate transcription. You have to rely on someone else (or some other tech) to transcribe. You have to hope their hearing and language skills are good enough for an accurate transcription.
Your vision may be lost. Even if you could still hear, the website doesn't offer an accurate transcription. You have to rely on someone else (or some other tech) to transcribe. You have to hope their reading comprehension and language skills are good enough for an accurate transcription.
Your limbs may be lost. Some apps let you tab around. Some apps make it impossible to find a button until you hover your mouse. Some apps simply don't load unless you press some magic keystrokes. Good luck.
You brought this problem upon yourself, 30 years ago. You brought this problem upon others. People won't care about your problems. Why should they, when you didn't care about theirs?
> I find it bewildering that every thread sharing some project has a comment like this.
Accessibility is legally required and not difficult to add.
Would you deny service to Black people? Muslims? Gay people? Refusing to provide accessibility in your service is no different. You are actively discriminating in a way that could be illegal and is certainly unethical.
I care about accessibility, but I agree with your sentiment. There is this recurring pattern people have when trying to detract from AI. They realize that saying they dislike AI for economic reasons is not going to garner any sympathy, so they try to hide behind some noble cause. At one point, it was about water use in datacenters. At another point, they became defenders of megacorporations' copyright. Now, they are trying the "AI doesn't care about accessibility" angle. They are just trying to find some reason that sticks.
That's until you want to fill a form and find out it's dark grey text on a different dark grey background so you don't see what you're typing even with 20/20 sight :)
There's a whole industry around suing website owners who have websites that aren't accessible. It's kind of messed up. The WSJ did a story on it a while back: https://www.wsj.com/business/entrepreneurship/small-business...
Consider not being bewildered that people care about things you don't care about.
Accessibility is a broad umbrella of features that enable a ton of really cool stuff for everybody, not just the disabled. Things like agentic computer use is only possible because of "accessibility".
As they say, everyone will eventually become disabled in some form or fashion. When your eyes go due to old age you'll be glad to still be able to use the internet.
This seems very weirdly exclusionary to me. Don’t you care at all about the users trying to use your site?
TIL slibhb will be young forever
I find it bewildering that every thread sharing some project has a comment like this.
Because Western society functions for the common good. We are not animals fighting for survival in the wilderness.
And because a web site not being accessible is a liability. Target was sued and had to pay millions for having your attitude.
I'm blind and accessibility is important to me. It is extremely disrespectful to see someone who just says "fuck you" and feels good about it. Please, at least consider that the world is bigger than you imagine and there is place for everyone in it and there is no need to be purposefully rude.
[flagged]
[flagged]
I think accessibility is a really admirable thing and helpful to society (like ramps or parking). But stop shoving your wants on others when you can fix it on your own. Just write a chrome plugin using ai that adjusts css to set contrast ratio of your choice. Can even use a local llm to figure out replacement colors.
Accessibility that can be had on client side should not be a concern on server side.
>stop shoving your wants
“Don’t have bad vision if you didn’t want to be technical!”
(came across that way)
That's a really terrible option for the vast majority of people who simply lack that kind of tech savviness to be able to do it. And in my opinion, it's kind of selfish.
It also doesn't solve the issue if somebody is browsing your site on a mobile phone where the extension might not even work properly.
WCAG is not difficult - but it does require some modicum of effort.
stop shoving your wants on others when you can fix it on your own. Just write a chrome plugin using ai that adjusts css to set contrast ratio of your choice. Can even use a local llm to figure out replacement colors.
Stop shoving your wants on others when you can fix it yourself.
Just get some concrete and some lumber, and build that wheelchair ramp.
You can even hire a contractor to follow you around town all day building them as needed.
> Side projects are typically time constrained - if AI saves you time, why wouldn't you use it?
It depends what your goals are. All of my side projects were started because I wanted to learn something. Using a "skip to the end" button wouldn't really make sense for me.
The difference between people who want to learn things versus people who just want a finished product is going to be a big dividing line in the post AI world
It's also a nice opportunity to learn while getting something out!
For me it wouldn't make sense to use AI. I work on personal projects because they are fun: it's fun to think about a problem, to solve it, to implement a solution, to learn new things, and to fantasise about what happens if it gets popular and useful. If I could snap my fingers and have AI make it happen, well, where's the fun? I have my day-to-day job for using AI on mundane things.
Besides, the idea of paying $200/month for the privilege of using AI in my side projects... it's just stupid to me.
Fun is not always about nailing down the exact look or design of something - you might be building it for your own particular reasons - and by the time the website has to be presented, those reasons might already have shifted. That's why these projects land the way they do, and why we might be confused about the process.
To me, it is incredibly fun to work in "product/idea space" and have the LLM do the gruntwork of coding for you.
It is also very fun to tackle hard engineering problems.
I enjoy both, and tend to oscillate between wanting to do a lot of one, or a lot of the other. I do recognize that I've been coding for so long that it's much more exciting to be solving "product problems" rather than "engineering problems", I suspect mostly because it's the area I've explored the least (of the two).
And there is a LOT to learn about a domain while you're working on the problem, even without even looking at the code.
I was surprised to realize that some of my friends don't share this sentiment. They take very little pleasure from being product developers, and instead really just enjoy being engineers who work on the code and the architecture. There's absolutely nothing wrong with that, I just found it very surprising. To be honest, I guess perhaps what I found the most surprising is that I am not one of those people?
And when you get your product into the hands of users, finally get that direct feedback line to/from them, and can start working on the problems they find and thinking of product (not necessarily engineering) solutions for them? Man, that's so satisfying. It's like falling in love with coding all over again.
You can still have fun with your side projects. AI helps, but if you want to build something nice, you still need to provide most of the intellectual input, while AI can help with the more tedious things. I have a personal project that I abandoned because it was becoming too much for me, and there were parts that I didn't enjoy doing.
I anticipate that people with a builder spirit and a strong technical background are going to be able to build awesome things in the future. What will the Fabrice Bellard or John Carmack of today be able to build?
It depends if the interesting part of the solution is the website for you. Maybe it is and that’s fine but for others it isn’t. Maybe they’ve got a cool backend thing and the ui isn’t the key part.
If it helps, compare: you might genuinely enjoy managing a tricky server and all its various parts. It would remove the fun to just put a site on GitHub Pages rather than hosting it on a PDP-11. But if you want to show off your demoscene work, you wouldn't feel you'd missed out on the fun by just putting it up on a regular site.
Personally, I'm using side projects to test what a basic agentic setup can achieve, i.e. not paying for anything but the electricity bill. Reaching that state is the real side project.
(I've not landed on a good solution yet, ollama+opencode kinda works but there are often problems with parsing output and abrupt terminations - I'm sure some of it is the models, some the config, some my pitiful rtx 5090 16gb, and some are just bugs...)
It doesn't work like that. AI is not a Jinn. You cannot simply command it and have it produce an entire project from thin air. You get to have fun: do the thinking part, and let it do the boring stuff.
I have a long list of projects that I have thought about but never implemented because of lack of time and energy. LLMs have made that happen.
I like designing programming languages and developing parsers/compilers and virtual machines. But the steps beyond type-checking are so incredibly boring (and I don't like using C or LLVM as targets) that I have done the front end 15-20 times over the last couple of decades and the back end only 3-4 times.
This time, I spent two weeks developing a spec for the VM, including concurrency, exception handling and GC. And I led the AI through each subsystem till I was satisfied with the result. I now have a VM that is within 8x of C in tight loops. Without JIT. It is incredible to be able to allocate arrays of 4B elements and touch each element at random, something that would make python cry.
Working on the compiler now.
It doesn't have to be like this. For me, one $20 account (with another for backup that I rarely use) is more than enough. I leverage this tool simply as a typist - it can't think, so it mustn't; it can't architect, since it's merely a "guess the next word" game with many extra steps - but boy can it type fast. I just make sure it types exactly what I would have typed and nothing else. This way I get to enjoy both worlds: improved throughput without producing slop.
The one caveat I have with this is that the underlying project might be fun but the website/write-up might be a chore. Hence AI for the chore bit.
I don't think this is overwhelmingly the reason though - I think many are just all AI, but if the project is technically interesting it might be sufficient to get me to grimace through it.
>if AI saves you time, why wouldn't you use it
AI might (might not, but often does!) also save you from doing the original thinking in the domain, which in a "show my side project" post is exactly what people are interested in.
I don’t know if that’s true, I made a little web app for displaying the schedule for my team based on our billable hours, and I didn’t do any of the scripting myself but I did have to think a lot about what the app would do and what it would look like and what kind of functionality I wanted, tradeoffs between functionality and specific use cases, etc. It just made the scripting part go faster, that’s all.
That's still less thinking overall than someone who thought about all of that and about the scripting would have done.
That adds up over time, though, and it works in reverse. AI will always be able to read and write faster than a person can. You may be able to write the script, but in the time it would take to /literally/ write it, you're on to the next thing. And if that script is actually a feature that spans two or three or 10 files, now you're really cooking.
Why I like using AI right now is that I get to try out far more of my own ideas quickly (and find issues with them!)
Before, it was like:
"Oh, X idea is really cool, let me try it!" ... (loses interest before idea validated)
Now: "Oh, X idea is really cool, let me try it!" ... with AI, I get to actually validate that it works (ideally), or reformulate the idea if it doesn't.
Even more than validating ideas, I think my personal AI use falls into two categories:
- Exploration: I am "vibe coding" to explore a domain, add many features, refactor the app over and over, as a real time exploration of the domain to see what works and what doesn't
- Specific Execution: I have a full design, a full idea, I've thought about architecture, we're making a plan and we're executing this extremely coherent vision
I've enjoyed using AI for both cases.
> Why I like using AI right now is that I get to try out far more of my own ideas quickly (and find issues with them!)
This.
Coding assistants handle a great deal of the drudge work involved in refactoring. I find myself doing far more deep refactoring work as quick proofs of concept than before. It's also quite convenient to have coding assistants handle troubleshooting steps for you.
Not likely. Original thinking in a side project is almost never about the code itself, but about the ideas and the end-product implementation. You might be able to invent things like Carmack's BSP implementation or Torvalds's content-addressable storage, but even things like that can be aided by LLMs at this point, at least in the prototyping/idea phases. AI doesn't prevent you from having good ideas or doing original thinking if that is your goal.
But I might want some cool original project with a boring but working web UI, so that other people can actually try out what I have created.
For sure, I'm doing something very similar, asking an AI to make a boring but working web app using an API I'm working on. The API is the interesting part and the web app is basically just to test it.
I do think though if I were to delegate the API itself to AI and say something like the code doesn't matter, the high level thinking would suffer from lack of pain working through implementation details.
Sure... and it might also help you do more original thinking in that domain, and hence help you get a lot more learning value out of the time you have for those side projects.
The trick is to deliberately use it in a way that helps you learn.
> if AI saves you time, why wouldn't you use it?
I wouldn't use it because one of the reasons that I do side projects is to enjoy myself and learn new things, and these tools tend to do much of the stuff that I enjoy and learn from.
> Side projects are typically time constrained
What is the urgency in completing side projects? Commercial projects are usually the ones where you have some urgency.
If you only have a few hours a week and you want to actually finish a project the speed with which you can build is extremely important.
Only if you think finishing your side-projects is extremely important.
> Side projects are typically time constrained - if AI saves you time, why wouldn't you use it?
There could be many reasons to not use ai in a case like this, eg: retaining more control, breaking some new ground, because it’s fun, because it’s personal, etc.
I also expect that most side projects made with AI end up abandoned within three months and contribute next to nothing to the user's personal development, and that using AI prevented the kind of deliberate practice that could have led to durable skill growth - which ultimately leads to much better work (side projects or main ones).
I don’t expect most side-projects to be built with LLMs now. I would expect LLM uptake to be higher in the workplace (where it’s mandatory and/or people operate on the “the ends justify the means” paradigm), but outside of that there’s a higher likelihood someone is doing it because they enjoy programming and problem-solving as a process, and why outsource something you like to a black box that will regurgitate you an average of volunteer contributions (often non-consensually obtained) for some corporation’s profit?
On the visual design traits...
I'm primarily a backend developer. Most of my work has been serving JSON or occasionally XML; Spring Shell in Java is closer to what I work with than a GUI. When I've done web work, the most complimentary thing said about my design was "spartan".
So, if I was to have a web facing personal project... would black text on a white background with the default font and clunky <form> elements be ok? I know we are ok with it on the HN Settings page. They work... but they don't meet what I perceive other people have as minimum standards for web facing interfaces today.
And so... if I was to have some web facing project that I wanted to show to others, I'd probably work with some AI tooling to help create a gui, and it would very likely have the visual design traits that other AI generated front ends have.
It depends on the project, I think. If your side project is a thing you hope it will make you a millionaire, sure, AI all the way. But if your side project is a just a cool thing or a learning experience, I would say the exact opposite. I would expect $JOB to be very time-constrained and vibecoding-friendly (maybe even too friendly) whereas your side-project should be all artisanal free-range code.
If AI saves you time, why not use it on your main projects too? All other things equal, should users care about whether AI was used?
> if AI saves you time, why wouldn't you use it?
Because generally speaking, stuff that is AI-generated is largely devoid of value. If it's AI-generated, anyone can prompt it into existence, so the likelihood that someone will find value in what you made and use it is approximately zero. What you made is likely low quality, since you vibe-coded it with little effort, and that always shows. Lastly, you don't even get to experience the joy of solving problems yourself or the pride of having built something with your own skill.
Using some AI to build something is fine, it’s when it’s used so much that it’s immediately obvious on the packaging - the show hn post, the readme, the code itself.
I've found that value is largely derived from polish and vision.
It's easy to prompt some stuff into existence over a weekend. It is hard to polish it, fix bugs, have tidy UX, and so on. There's this meme going around (maybe from that Silicon Valley show?) where the grey-beard says he is valued for his taste and his conviction in that taste. This is -- fortunately or not -- reality.
Vision and taste won't get you the whole way, but they are a huge part of the equation. This is why Apple, for example, was so successful under Jobs: he had vision, and he had good taste.
I agree, and for those who would counter “just use AI to polish”, those who use AI to avoid doing the work of building something are likewise going to avoid doing the work to polish it, if they even possess the taste required to do so.
I agree. The problem is the noise ratio, not how the platform was implemented.
> if AI saves you time, why wouldn't you use it
Getting a McDonald's saves time too
Appreciate the feedback, just updated the title to be more clear.
> I expect most side-projects are being built with AI-assistance now. Side projects are typically time constrained - if AI saves you time, why wouldn't you use it?
Why would you put forth anything but this line?
The only side projects I do are contributions to existing projects. You can't use AI for those because provenance matters. But why would I want to? I want to program.
For private side projects this makes sense if you want the outcome more than the process. But even then I am skeptical. There is the benign effect of learning things: the more you know, the more you desire to know, because you become more and more aware of the infinite horizon of not-knowing. I haven't experienced this myself for "building", but based on anecdotes I'm not psyched about the psychological profile of getting everything for free (in terms of programming). Some people seem to get manic about it. What's the point of realizing your desires if that just means producing more of them? And the key to satiating that insatiable desire is to put tokens into the alienation machine.
For side projects that you publicize (show hn) this makes less sense. There is a freaking glut of “I built this” with the predictable feedback around the Net, in these times: why the F would I take the time to test what you have “built” when I can “build” the same thing and get exactly what I want?
This fact, which i do believe to be true, has completely killed my interest in almost all of other peoples projects.
My interest in a project has always been rooted in the idea that it's interesting to see knowledgeable people (or people still learning) attack a problem for themselves. I have really never cared about the "thing that it does." I liked reading the code, dissecting attempts, and really learning about the person who wrote it through their line-by-line decisions.
That is now all gone. The "noise ratio" of slop projects which have none of the previously interesting thought and intentionality have drowned out the "rigorous projects."
It's actually very sad for me; it was something I previously really enjoyed. If there were a board that aggregates projects that still have that interesting "human factor", I would subscribe in a heartbeat.
I've been coding for 20 years now, almost every single afternoon.
I've never met someone who has spent more time coding than me (although for sure such people exist). I love writing code, I consider it an art form. I don't mind spending days optimizing a function until the code is beautiful (at least to me).
I also have dozens of projects in mind that I don't have time to go through; cue the meme of "I bought another domain that will sit empty for years", I have like 60 of those right now.
AI assistance/vibecoding, whatever you call it, has been a massive win for me because now I can sketch out those projects in a weekend, put them out and then, if I decide they're worth spending more time on, tradcode the parts that I really care about. As it is for many others, AI is another tool in my toolbox. It's the pencil and paper I use to draft stuff.
It's tricky because I do get that we all want to get rid of low-value AI slop, but also, it wouldn't be fair to me, and people like me, to have authentic projects discredited just because you used AI in the creative process; not just as part of it, but perhaps even to write ALL of the code. And then, why would that be a bad thing?
What difference does it make if it was me writing functionally identical code letter-by-letter instead of writing a comprehensive prompt and guiding AI to do as I wish?