BIMA’s recent Global Accessibility Awareness Day events, hosted by the Inclusive Design Council, fostered insightful conversations about the future of digital accessibility and connected passionate people working in the accessibility space. The events brought together diverse perspectives, including organisations at different stages of their accessibility journeys. We focused on three key topics: the European Accessibility Act (EAA), AI, and tooling.
To help keep the conversation going, we’ve summarised some key points from across the events below.
While awareness of regulations like the EAA is growing, uncertainty around practical implementation remains. People are seeking clarity on aspects such as conformance testing and documentation requirements; a big unknown is how regulators plan to monitor adherence and what they will expect from organisations from day one.
A key takeaway was the need to address the misconception that accessibility is a quick fix handled by a single person, rather than a comprehensive, ongoing effort requiring dedicated resources and expertise. For example, achieving website accessibility involves not only technical adjustments like adding alt text to images, but also rethinking workflows so that new accessibility issues are not introduced over time. Many accessibility experts report companies expecting their accessibility compliance to be a one-off project completed within a matter of months, which, as anyone who has worked in accessibility knows, will not be the case.
Discussions highlighted the crucial interplay between evolving regulations, such as the EAA, and the rapid advancement of AI. Each must act as a check and balance on the other, and this symbiotic relationship matters for several reasons.
AI systems, particularly those designed to enhance accessibility, must be developed and deployed responsibly, adhering to ethical guidelines and legal frameworks. Regulations like the EAA provide a crucial foundation for this responsible development, ensuring that AI-powered accessibility solutions are inclusive and do not inadvertently create new barriers.
Conversely, AI can play a vital role in supporting regulatory compliance and helping businesses to quickly get a snapshot of how big the task ahead is, as well as building in efficiencies for the remediation journey and strategic planning.
AI’s potential to revolutionise accessibility, particularly through assistive technologies like screen readers and AI-powered captioning, was a key focus. Real world examples, such as smart glasses enabling visually impaired individuals to navigate their surroundings, demonstrated the transformative impact of these technologies.
However, concerns were raised about the transparency of AI algorithms and the potential for bias in the data they rely on. For instance, an AI-powered chatbot designed to provide support may give incorrect accessibility guidance if it has been trained on flawed or unrepresentative data.
While AI generated output is often trusted, closer scrutiny is needed to address potential gaps, especially concerning data representation for underserved communities. Careful consideration of training data is crucial when using AI to address accessibility barriers, and collecting representative data is necessary to ensure inclusivity.
Automated accessibility tools, such as WAVE and axe, were recognised as valuable for initial assessments and raising awareness. These tools can quickly identify common accessibility issues like missing alt text or insufficient colour contrast. However, they should be seen as a starting point, not a complete solution.
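To make the kinds of checks these tools automate concrete, here is a minimal sketch (illustrative only, not how WAVE or axe are implemented) of the two checks mentioned above: flagging images with no alt attribute, and computing the WCAG 2.x contrast ratio between two colours:

```python
from html.parser import HTMLParser


class MissingAltScanner(HTMLParser):
    """Collects <img> tags that have no alt attribute at all."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<no src>"))


def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an (r, g, b) tuple with 0-255 channels."""
    def linearise(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


scanner = MissingAltScanner()
scanner.feed('<img src="logo.png" alt="Company logo"><img src="chart.png">')
print(scanner.missing)  # only chart.png is flagged

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))

# Grey #777777 on white falls just under the 4.5:1 AA threshold for normal text.
print(contrast_ratio((119, 119, 119), (255, 255, 255)) >= 4.5)
```

Real tools layer many such rules, but each rule is ultimately a mechanical check of this kind, which is why they catch objective failures quickly yet say nothing about whether the result is actually usable.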
Automated tools cannot fully evaluate the user experience or account for the nuances of human perception and cognition. Manual audits and human judgement, including user testing with people with disabilities, remain essential for comprehensive accessibility evaluation. For example, while a tool might flag an image as having alt text, it cannot always determine if the alt text is useful.
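To illustrate that limitation, here is a hypothetical presence check of the sort an automated tool can perform: both images below "pass", even though the second alt text is just a filename and tells a screen reader user nothing.

```python
def has_alt(attrs):
    """The binary question a tool can answer: is an alt attribute present and non-empty?"""
    return bool(attrs.get("alt", "").strip())


images = [
    {"src": "team.jpg", "alt": "The panel on stage at the GAAD event"},
    {"src": "chart.png", "alt": "IMG_4821.png"},  # present, but useless to a screen reader user
]
print([has_alt(img) for img in images])  # both pass the automated check
```

Judging whether "IMG_4821.png" actually describes the image requires human review, which is exactly the gap manual audits and user testing fill.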
There was also discussion about how tools are evolving: there is some excitement about the possibilities, but also concern about the gap between what current tools promise and what they actually deliver.
A central theme was the crucial role of research in shaping a future where both regulation and AI demonstrably improve human lives, particularly for people with disabilities. Both accessibility regulations, like the EAA, and AI-powered solutions share the ultimate goal of fostering greater inclusion and independence. Robust research is essential not only for feeding into the development of both regulations and AI tools, but also for rigorously validating their real-world impact. This involves developing and refining testing methodologies that accurately reflect the diverse needs and lived experiences of people with disabilities, moving beyond technical compliance to measure genuine improvements in usability and quality of life.
Additionally, ongoing research is needed to explore the potential unintended consequences of both regulations and AI solutions, ensuring they contribute positively to human experience and avoid creating new barriers. This continuous cycle of research, development, and validation is crucial for building a truly inclusive and accessible digital world.
Building a truly inclusive and accessible digital world requires a multifaceted approach. While regulations like the EAA provide a crucial framework, practical implementation remains a key challenge. The rapid advancement of AI offers both exciting opportunities and potential pitfalls. By fostering collaboration between regulators, developers, researchers, and people with disabilities, and by prioritising human-centred research and design, we can harness the power of technology to create a more equitable and accessible future for all.