It's not that people care about quality, but that people expect things to "just work".
Regarding the point about accessibility, there are a ton of little details that must be explicitly written into the HTML that aren't necessarily the default behavior. Some common features of CSS and JS can break accessibility too.
None of this code would be obvious to an LLM, or even to human devs, but it's still what's expected. Without precisely written, effectively read-only boilerplate your webpage is gonna be trash, and the specifics are a moving target and hotly debated. This back-and-forth is a human problem, not a code problem. That's why it's "hard".
I use the web every day as a blind user with a screenreader.
I would 100% of the time prefer to encounter the median website written by Opus 4.5 than the median website written by a human developer in terms of accessibility!
That's really interesting. Are you speaking from experience with websites where you know who authored them or from seeing code written by humans and Opus 4.5 respectively?
Satisfying constraints like these isn't merely about knowing the spec and having lots of examples. Accessibility requirements are even more subjective than ordinary requirements, and those are subjective enough to begin with.
Accessibility testing sounds like something an LLM might be good at. Provide it with tools to access your website only through a screen reader (simulated, text not audio), ask it to complete tasks, measure success rate. That should be way easier for an LLM than image-based driving a web browser.
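To sketch what that harness might look like: the snippet below is a toy "screen reader" that flattens markup into role/name lines, the kind of text you could hand to an LLM along with a task like "add the widget to the cart". Everything here is illustrative; a real screen reader works from the browser's accessibility tree, and the tag list and output format are my own assumptions.

```python
from html.parser import HTMLParser

class ScreenReaderText(HTMLParser):
    """Crude stand-in for a screen reader: flattens markup into
    "role: name" lines a text-only LLM agent could be fed.
    Assumes balanced tags (no void elements inside hidden regions)."""

    def __init__(self):
        super().__init__()
        self.lines = []
        self._hidden_depth = 0   # inside aria-hidden / script / style
        self._pending_role = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if self._hidden_depth or tag in ("script", "style") or a.get("aria-hidden") == "true":
            self._hidden_depth += 1
            return
        # role comes from an explicit ARIA role, else the tag name
        role = a.get("role") or tag
        if a.get("aria-label"):
            self.lines.append(f"{role}: {a['aria-label']}")
        elif tag in ("button", "a", "h1", "h2", "h3", "li", "label"):
            self._pending_role = role  # name filled in by handle_data

    def handle_endtag(self, tag):
        if self._hidden_depth:
            self._hidden_depth -= 1

    def handle_data(self, data):
        text = " ".join(data.split())
        if not text or self._hidden_depth:
            return
        role, self._pending_role = self._pending_role, None
        self.lines.append(f"{role or 'text'}: {text}")

page = (
    '<main><h1>Store</h1>'
    '<button aria-label="Add to cart"></button>'
    '<span aria-hidden="true">decorative</span>'
    '<a href="/faq">FAQ</a></main>'
)
reader = ScreenReaderText()
reader.feed(page)
```

The interesting part would be the measurement loop you wrap around it: run N tasks against this text rendering, count completions, and compare against the same tasks done with full visual browsing.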
But accessibility on the frontend is to a large extent patterns - if it looks like a checkbox it should have the appropriate ARIA role and states, and patterns are easy for an LLM.
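That pattern side is mechanically checkable, too. Here's a toy lint pass in that spirit; the rules (checkbox-styled element without role="checkbox", clickable div without tabindex, img without alt) are illustrative picks of mine, nowhere near a complete audit:

```python
from html.parser import HTMLParser

class AriaPatternLint(HTMLParser):
    """Flags a few mechanical "looks interactive but isn't accessible"
    patterns. Rules are illustrative examples, not a full ARIA audit."""

    def __init__(self):
        super().__init__()
        self.warnings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("div", "span"):
            looks_like_checkbox = "checkbox" in (a.get("class") or "")
            if looks_like_checkbox and a.get("role") != "checkbox":
                self.warnings.append(
                    f"<{tag}> styled as a checkbox but missing role='checkbox'")
            if ("onclick" in a or looks_like_checkbox) and "tabindex" not in a:
                self.warnings.append(
                    f"<{tag}> is clickable but not keyboard-focusable (no tabindex)")
        # alt="" is legitimate for decorative images, so only flag a
        # completely absent alt attribute
        if tag == "img" and "alt" not in a:
            self.warnings.append("<img> missing alt text")

lint = AriaPatternLint()
lint.feed('<div class="fancy-checkbox" onclick="toggle()"></div>'
          '<img src="logo.png">')
```

Real tools like axe-core do this kind of rule-based checking far more thoroughly; the point is just that the pattern half of accessibility is automatable in a way the subjective half isn't.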
It's just… a lot of people don't see this on their bottom line. Or any line. My awareness of accessibility issues is the Web Accessibility Initiative and the Apple Developer talks and docs, but I don't think I've ever once been asked to focus on them. If anything, I've had ideas shot down.
What AI does do is make it cheap to fill in gaps. 1500 junior developers for the price of one, if you know how to manage them. But still, even there, they'd only be filling in gaps as well as the nature of those gaps has been documented in text, not the lived experience of people with e.g. limited vision, or limited joint mobility whose fingers won't perform all the usual gestures.
Even without that issue, I'd expect any person with a disability to describe an AI-developed accessibility solution as "slop". I've had to fix up a real codebase where nobody before me had noticed the FAQ was entirely Bob Ross quotes (the app wasn't about painting, or indeed in English), so I absolutely anticipate that a vibe-coded accessibility solution will do something equally weird: perhaps some equivalent of "As a large language model…", or hard-coded example data that has nothing to do with the current real value of a widget.
I think perhaps the nuance in the middle here is that for most projects, the quality that professional components bring is less important.
Internal tools and prototypes, both things that quality components can accelerate, have been strong use-cases for these component libraries, just as much as polished commercial customer-facing products.
And I bet volume-wise there's way more of the former than the latter.
So while I think most people who care about quality know you can't (yet) blindly use LLM output in your final product, it's completely ok for internal tools and prototyping.
If you can produce something that works 80% of the time for 5% of the cost? People take that trade all the time when they buy cheap stuff off Temu or Amazon.
Those marketplaces almost always just give the money back if it fails or sucks, and they still come out ahead.
Amazon (AWS) is not cheap! :D
Knowing obscure things you need to do for accessibility is actually something I would expect an LLM to be pretty good at.
Accessibility is an interesting space for quality because under the ADA you can be sued for it and be exposed to huge liability.
That kind of pattern was easy to handle before AI, too.
Oh no I'm very cynical about that.
LLMs are not that much cheaper; a customizable, accessible component is still worth hours of work.