Comment by charcircuit

10 hours ago

The best practices are changing. Many accessibility features were built because computers could not understand content correctly. For example, something that looks like a checkbox but is actually just a div would not get recognized properly. Now the AI understands what a checkbox is and can tell from the styling that a checkbox is there.

That's a huge resource cost though, and simply unnecessary. We should be building semantically valid HTML from the beginning rather than leaning on a GPU cluster to infer an element's function from the entire HTML, CSS, and JS on the page (or from a screenshot, which requires image parsing by a word predictor).

  • That's the point of solving problems with LLMs. We pay a large resource cost, but in return we get general intelligence to understand things.

Or just use <input type="checkbox"> in the first place and save humans and machines a whole bunch of time.
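A minimal sketch of the contrast being argued here. The native control carries its semantics for free, while a div-based lookalike needs ARIA attributes, focusability, and event handling bolted on by hand (the handler below is a simplified illustration, not production code):

```html
<!-- Native control: role, checked state, keyboard support,
     and label association all come built in -->
<label><input type="checkbox" checked> Subscribe</label>

<!-- A div "checkbox" must recreate each of those pieces manually,
     and this version still lacks Space-key toggling -->
<div role="checkbox" aria-checked="true" tabindex="0"
     onclick="this.setAttribute('aria-checked',
              this.getAttribute('aria-checked') !== 'true')">
  Subscribe
</div>
```

A screen reader can announce both, but only because the second one explicitly declares `role` and `aria-checked`; omit either and the element is just an anonymous div.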

  • That's already possible today, yet there are still developers who don't do it, which is why a more general solution at the screen-reader level is needed rather than requiring every site developer to do something special.