Can Digital Accessibility Trust Artificial Intelligence Technology?

Tesla interior dashboard. Photo by Bram Van Oost.


In 2016, Tesla premiered the first self-driving car on the market. There was just one catch: the vehicle wasn’t self-driving yet. Tesla equipped each vehicle with the ability to test the customer’s driving against its software. Once the car consistently makes safer decisions than its driver, Tesla will release its autonomous AI software to the already-purchased vehicles. This week, Tesla began releasing that software to a first round of users.

Elon Musk, Tesla’s CEO, uses Tesla’s current customer base as information gatherers and research subjects, capitalizing on a fundamental aspect of machine learning and AI: understanding humans.

What better way to understand this intersection than to sell a vehicle that can learn to recognize behaviors? 

 

Research as a Start

As part of my graduate research, I conducted a survey to understand the junction of digital accessibility and artificial intelligence. Ten of the 22 respondents said they strongly agree that they are interested in new applications of AI. That number drops by 40% when the question turns to new digital accessibility applications.

This discrepancy could indicate one of three things: users are content with their current offerings, uneasy about what new digital accessibility tools would look like, or distrustful of new products built on emerging technology. Based on the remaining results, I believe it to be the last.

 

 
We’re not just augmenting humans. A crutch is an augmentation. Agents that know how to fade will improve humans.
— Christopher Noessel
 

 

The most astounding statistic was the percentage of respondents who indicated interest in a new artificial intelligence application: 95%. Additionally, 77% of respondents selected either “strongly agree” or “somewhat agree” when asked if they would like their technology to understand their habits and abilities. However, 32% of respondents answered “strongly agree” or “somewhat agree” to the statement: “I do not feel comfortable with technology knowing my abilities and acting based on that knowledge.”

Why are people optimistic about new AI applications for solving digital accessibility issues, yet uncomfortable with the same technology knowing and acting on their abilities?

These results point to a distrust of artificial intelligence, including suspicion of letting AI operate on one’s behalf. Any potential solution must therefore strike a balance, letting technology take the lead while still giving ultimate control to the user.

Google home sitting on bookshelf. Photo by Kazden Cattapan.


AI as a Solution

Noessel addresses this balance in his book Designing Agentive Technology: “Many agentive services that offer to improve skill set will need to be built in similar ways - as a scaffold that helps the user grow and then falls away as needed.”

Scaffolding is erected alongside a building under repair, but it is not permanent. Noessel presents agentive technology and AI as a transient support for users, and in the case of this study, for users with disabilities.

AI can offer trust unlike any other technology. Individuals who rely on accessibility tools need to trust the products that help them navigate daily life. My research findings support the idea that artificial intelligence can supply a missing piece of many accessibility tools: that needed trust, along with adequate solutions for people with vision impairments.
