“AI” won’t solve accessibility

In our tech-focused society, there is this ever-present notion that “accessibility will be solved by some technology”. But it won’t. Making things accessible is a fundamentally human challenge that needs human solutions in human contexts. I have written about automated testing before.


And because the technology currently en vogue is “AI”, some people now float “AI” as the way to make accessibility happen. Not just in the sense of “improve some aspects” or “help some users get to better information”, but in the sense of “nobody will need to know anything about accessibility, because AI will make it irrelevant”.

This is a dangerous proposal. In her book Against Technoableism, Ashley Shew outlines how accessibility “solutions” are often forced onto disabled people by nondisabled people who do not share the experience and think that they know better.1

A few weeks ago, Jakob Nielsen formulated the following idea in his ill-advised column:

I foresee a much more radical approach to generative UI to emerge shortly — maybe in 5 years or so. In this second-generation generative UI, the user interface is generated afresh every time the user accesses the app. Most important, this means that different users will get drastically different designs. [… F]reshly generated UIs also mean that the experience will adapt to the user as he or she learns more about the system. For example, a more simplified experience can be shown to beginners, and advanced features surfaced for expert users.

And now Gregg Vanderheiden writes along the same lines on the Accessibility Guidelines Working Group2 mailing list, the group he formerly chaired:

You [Patrick H. Lauke] asked — when does it end?

wow — I hope it ends with us not having to do anything (or almost nothing) with regard to accessibility regulations for ICT because each individual gets information and interface presented to them that is optimized for them as an individual !

This is a dangerous and impractical idea for many reasons, including:

  • Consistent user interfaces are easy to use and remember. User interfaces that change shape or hierarchy while the user interacts with them are harder to use.
  • User interfaces are designed with intention (at least they should be): they guide users to a specific outcome. An “AI” cannot know that intention, so it might come up with interfaces that are harder to use.
  • How dynamic would those adaptations be? Would a website look different when I get there with a migraine? And if so, would the generated UI that I need to learn be better for me, or would I be done with my task before I had even learned the UI?
  • What would be the parameters to adapt the UI? Where would the “AI” get the information about the website? Directly from the database? From a general purpose UI?
  • How would an “AI” make sure that a website is inclusive? How would it avoid relegating screen reader users to a text-only version, which would exclude those screen reader users who have sight from the imagery?
  • How would the “AI” know what the user needs? Does this put an unnecessary burden on the user to communicate needs in detail and accurately to get a usable result? Is that not shifting the burden for accessibility from paid professionals to disabled people?
  • What about cognitive disabilities? How would an “AI” avoid infantilizing people with cognitive disabilities and withholding vital information from them? This already happens today because humans make assumptions about others. That’s why I think it’s essential that the original, unchanged content is always available to everyone.
  • Also, I do not see a world where companies are willing to put one of their main marketing avenues into the hands of “AIs”.

Hidde wrote in detail about the “generateability” of UI.

Now, Gregg might be thinking about user-side “AI” making those changes. And I think enhancing user preferences, especially in mainstream browsers, makes a lot of sense. I even helped to start a W3C Community Group about that a few years ago.3

But none of that is “Generative UI”. It starts with a base UI, which must still be accessible, and then adapts it to the users’ needs. And yeah, currently we need to write custom CSS to do this, or go through browser and operating system settings to select the options we need.

This can surely be automated: “I want my websites to display text at least 12px and fill the available viewport. Display them in an off-black background color with pinkish-white text color.” Taking an LLM and interpreting that into a custom style sheet or operating system settings makes a lot of sense to me to lower the barrier of entry for these settings.
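To make the idea concrete, a preference like that might compile into a small user stylesheet. This is a hypothetical sketch: the colors and the `max()` floor on font size are my illustrative choices, not output from any real tool.

```css
/* Hypothetical user stylesheet generated from the stated preferences */

* {
  /* never render text below 12px; keep larger inherited sizes as-is */
  font-size: max(12px, 1em) !important;
}

body {
  /* fill the available viewport instead of a narrow centered column */
  max-width: none !important;
  margin: 0 !important;

  /* off-black background with pinkish-white text */
  background-color: #1a1a1d !important;
  color: #fff0f2 !important;
}
```

The point is that the heavy lifting here is plain CSS applied on the user’s side; an LLM would only translate the plain-language request into settings like these.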

The simple fact is that we already have all the technology to make widespread accessibility a reality. Today. We have guidelines that, while not covering 100% of the disability spectrum, cover a lot of user needs. And those needs fundamentally do not change.

Will we have some “AI” technology breakthrough at some point? Maybe. But do we really want to squander “AI” technology on simple essentials, like having “AI” figure out the text on a button or the alternative text of the 1,000th edit-pencil icon? Is that a good use of our resources, especially given the incredible amount of energy these models currently consume?

I wish I were surprised that people who used to be leaders during the web revolution now cling to the next big thing.4 It’s challenging to stay relevant. But there isn’t even experimental evidence that “generative UI” works, let alone that it will be production-ready any time soon. “Generative UI” is nice speculative fiction, but that is all it is: a guess at the future.

Tech “leaders” promised self-driving cars “next year” for a long time, and disabled people are often used in their marketing: finally freeing them from the need to stay at home. But the reality is that better infrastructure, mostly through public transport and safe, pedestrianized areas (with car access for those who need it), would already solve much of the need. Would self-driving cars still be an improvement, or even essential for some? Sure. But they do not (meaningfully, for everyone) exist.

So let’s improve what we have, and make the world and the web better now, instead of waiting for lofty technologies that may never come.

  1. Disclaimer: While I have a history of being disabled (childhood asthma), I would not consider myself currently disabled, as it was only a temporary disability.
  2. The Accessibility Guidelines Working Group (AG WG) is the place that develops all W3C accessibility guidelines that are not ARIA-related. It’s the successor to (and renaming of) the original Web Content Accessibility Guidelines Working Group, or WCAG WG for short.
  3. That was before I realized how bad my burnout from W3C work was, so I had to stop.
  4. I’m especially not surprised that Vanderheiden is an “AI” fan: he planned to sit on a panel with overlay vendors until the panel was cancelled because, contrary to claims in the conference’s program, it was not IAAP-sanctioned.
