Comment by ctoth
17 hours ago
If agents is what it finally takes to get good a11y I'll take it. I'll bitch about it, but I'll take it.
Playwright, the end-to-end testing framework for the web, provides a strong incentive to give sites good a11y: Playwright tests are an absolute delight to read, write, and maintain on properly accessible sites, when using the accessibility locators. Somewhat less so when using a soup of CSS selectors and getByText()-style locators.
One thing I am curious about is a hybrid approach where LLMs work in conjunction with vision models (and probes which can query/manipulate the DOM) to generate Playwright code which wraps browser access to the site in a local, programmable API. Then you'd have agents use that API to access the site rather than going through the vision agents for everything.
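A minimal sketch of that wrapper idea, under stated assumptions: the class and method names below (`SiteClient`, `submitTransfer`) are invented for illustration, and the browser-driving bodies are stubbed out. In the imagined setup, each method body would be generated Playwright code using accessibility locators such as `page.getByRole(...)`.

```typescript
// Hypothetical sketch of a generated "local, programmable API" wrapping a
// site. Nothing here calls Playwright; the names are invented for
// illustration. In the imagined setup, a method body would hold generated
// Playwright code built on accessibility locators, e.g.:
//   await page.getByRole('textbox', { name: 'Amount' }).fill('42');
//   await page.getByRole('button', { name: 'Submit transfer' }).click();

interface Transfer {
  from: string;
  to: string;
  amount: number;
}

class SiteClient {
  private submitted: Transfer[] = [];

  // Stubbed: a real implementation would drive the browser via the
  // generated Playwright code instead of recording locally.
  submitTransfer(t: Transfer): boolean {
    if (t.amount <= 0) return false; // basic validation the wrapper can own
    this.submitted.push(t);
    return true;
  }

  history(): readonly Transfer[] {
    return this.submitted;
  }
}
```

An agent would then call `submitTransfer()` directly, rather than re-deriving the UI flow through a vision model for every transaction.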
This is precisely how the Playwright MCP works, which lets something like Claude directly test a website.
https://playwright.dev/docs/getting-started-mcp#accessibilit...
I've mentioned several times, and gotten snarky remarks for it, that rewriting your code so it fits in your head (and in the LLM's context) helps the LLM code better. People complain about rewriting code "just for an LLM," not realizing the suggestion is to follow better coding principles, which lets the LLM code better and has the net benefit of letting humans code better too! Well, looks like if you support accessibility in your web apps correctly, Playwright MCP will work correctly for you.
Amazing.
Was looking for this comment. I'd like to see this approach in the comparison: having the LLM build a Playwright script and use it. I suspect it would beat the API's time-to-market, and be close-ish in elapsed time per transaction.
Harder to scale if it's doing a lot of them, I suppose.
Using playwright-cli with Claude code is highly effective for debugging locally deployed web apps with essentially zero setup.
Very real risk of this going in reverse: people building inaccessible websites to prevent AI use.
Or human engineers limiting AI-consumable documentation to improve job security!
Those people probably aren't working on anything useful anyway, so it's no big deal.
I've found that by far the most useful websites as a programmer are also the ones most resistant to AI. This would be a huge loss for anyone vision impaired
5 replies →
That’s such an extremely small niche of people it’s not a real risk.
"AI" is a made up hype thing. It's just computers and computer programs. For real!
i think this goes both ways too :) agents have been a boon for everyone with disabilities, carpal tunnel, RSI, ADHD, anything
and now the fact that interfaces need to be accessible to agents, not just humans, ironically improves accessibility for humans in return
And let's not forget that not all disabilities are chronic. Many disabilities are situational or temporary. AI is a great assist on a hangover day, for example...
I mean…I guess. But this is ridiculous - how many layers does our technology need to bash through to update two records on remote systems? I get that value is being added at some point - but just charge some micropayment for transactions. This is just too much.
Ever read Vernor Vinge's A Deepness in the Sky? Digital archaeologist, coming right up.