Unmasking The Magic: The Wizard Of Oz Method For UX Research

Victor Yocco
New technologies and innovative concepts frequently enter the product development lifecycle, promising to revolutionize user experiences. However, even the most ingenious ideas risk failure without a fundamental grasp of user interaction with these new experiences.
Consider the plight of the Nintendo Power Glove. Despite being a commercial success (it sold over one million units), the Power Glove was released in late 1989 and discontinued less than a year later, in 1990. The two games created solely for the Power Glove sold poorly, and there was little use for the Glove with Nintendo’s already popular traditional console games.
A large part of the failure was due to audience reaction once people discovered the product (which was allegedly developed in just eight weeks) was cumbersome and unintuitive. Users found syncing the glove to the moves in specific games extremely frustrating, as it required coding the moves into the glove’s preset move buttons and then remembering which buttons would generate which move. With the more modern success of Nintendo’s Wii and other movement-based controller consoles and games, we can see the Power Glove was a concept ahead of its time.

If the Power Glove’s developers had wanted to conduct effective research prior to building it out, they would have needed to look beyond traditional methods, such as surveys and interviews, to understand how a user might truly interact with the Glove. How could this have been done without a functional prototype and without slowing down the overall development process?
Enter the Wizard of Oz method: a potent tool for bridging the chasm between abstract concepts and tangible user understanding. This technique simulates a fully functional system, yet a human operator (“the Wizard”) discreetly orchestrates the experience. This allows researchers to gather authentic user reactions and insights without the prerequisite of a fully built product.
The Wizard of Oz (WOZ) method is named in tribute to the book of the same name by L. Frank Baum. In the book, the Wizard is simply a man hidden behind a curtain, manipulating the reality of those who travel the land of Oz. Dorothy, the protagonist, exposes the Wizard for what he is: essentially an illusion, a con artist deceiving those who believe him to be omnipotent. Similarly, WOZ takes technologies that may or may not currently exist and emulates them in a way that should convince a research participant they are using an existing system or tool.
WOZ enables the exploration of user needs, validation of nascent concepts, and mitigation of development risks, particularly with complex or emerging technologies.
The product team in our above example might have used this method to have users simulate the actions of wearing the glove, programming moves into the glove, and playing games without needing a fully functional system. This could have uncovered the illogical situation of asking laypeople to code their hardware to be responsive to a game, shown the frustration one encounters when needing to recode the device after changing games, and exposed the cumbersome layout of the controls on the physical device (even if they’d used a cardboard glove with simulated controls drawn in crayon in the appropriate locations).
Jeff Kelley credits himself (PDF) with coining the term WOZ method in 1980 to describe the research method he employed in his dissertation. However, Paula Roe credits Don Norman and Allan Munro with using the method as early as 1973 to conduct testing on an airport automated travel assistant. Regardless of who originated the method, both parties agree that it gained prominence when IBM later used it to conduct studies on a speech-to-text tool known as The Listening Typewriter.

In this article, I’ll cover the core principles of the WOZ method, explore advanced applications taken from practical experience, and demonstrate its unique value through real-world examples, including its application to the field of agentic AI. UX practitioners can use the WOZ method as another tool to unlock user insights and craft human-centered products and experiences.
The WOZ method operates on the premise that users believe they are interacting with an autonomous system while a human wizard manages the system’s responses behind the scenes. This individual, often positioned remotely (or off-screen), interprets user inputs and generates outputs that mimic the anticipated functionality of the experience.
A successful WOZ study involves several key roles: the wizard who simulates the system’s responses, a facilitator who guides the participant through the session, and the participant themselves.
Creating a convincing illusion is key to the success of a WOZ study. This necessitates careful planning of the research environment and the tasks users will undertake. Consider a study evaluating a new voice command system for smart home devices. The research setup might involve a physical mock-up of a smart speaker and predefined scenarios like “Play my favorite music” or “Dim the living room lights.” The wizard, listening remotely, would then trigger the appropriate responses (e.g., playing a song, verbally confirming the lights are dimmed).
Or perhaps it is a screen-based experience testing a new AI-powered chatbot: users enter commands into a text box, with another member of the product team providing responses in real time using a tool like Figma/FigJam, Miro, Mural, or other cloud-based software that allows multiple users to collaborate simultaneously (the author has no affiliation with any of the mentioned products).
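If your team prefers to wire up something slightly more bespoke than a shared whiteboard, the technical core is just a relay between two humans. Below is a minimal sketch, assuming a Node.js environment with the ws package; the port and the role query string are illustrative assumptions, not part of any standard WOZ tooling:

// Minimal WOZ relay: the participant's chat UI connects normally,
// while a teammate connects as the wizard and hand-types every
// "system" response the participant sees.
import { WebSocketServer } from 'ws';

const server = new WebSocketServer({ port: 8080 });
let wizard = null;
let participant = null;

server.on('connection', (socket, request) => {
  // "?role=wizard" is a hypothetical convention for this sketch.
  if (request.url?.includes('role=wizard')) {
    wizard = socket;
  } else {
    participant = socket;
  }

  socket.on('message', (data) => {
    // Relay prompts to the wizard and replies to the participant.
    const target = socket === wizard ? participant : wizard;
    if (target) target.send(data.toString());
  });
});

In practice, the collaboration tools above are usually faster to stand up; the point is simply that the “system” can be a human at the other end of any channel.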
Maintaining the illusion of a genuine system requires careful preparation: plausible responses, believable timing, and a wizard who never breaks character.
Transparency is crucial, even in a method that involves a degree of deception. Participants should always be debriefed after the session, with a clear explanation of the Wizard of Oz technique and the reasons for its use. Data privacy must be maintained as with any study, and participants should feel comfortable and respected throughout the process.
The WOZ method occupies a unique space within the UX research toolkit, sitting between static prototypes and fully functional builds.
This method proves particularly valuable when exploring truly novel interactions or complex systems where building a fully functional prototype is premature or resource-intensive. It allows researchers to answer fundamental questions about user needs and expectations before committing significant development efforts.
Let’s move beyond the foundational aspects of the WOZ method and explore some more advanced techniques and critical considerations that can elevate its effectiveness.
It’s a fair question to ask whether WOZ is truly a time-saver compared to even cruder prototyping methods like paper prototypes or static digital mockups.
While paper prototypes are incredibly fast to create and test for basic flow and layout, they fundamentally lack dynamic responsiveness. Static mockups offer visual fidelity but cannot simulate complex interactions or personalized outputs.
The true time-saving advantage of the WOZ method emerges when testing novel, complex, or AI-driven concepts. It allows researchers to evaluate genuine user interactions and mental models in a seemingly live environment, collecting rich behavioral data that simpler prototypes cannot. This fidelity in simulating a dynamic experience, even with a human behind the curtain, often reveals critical usability or conceptual flaws far earlier and more comprehensively than purely static representations, ultimately preventing costly rework down the development pipeline.
While the core principle of the WOZ method is straightforward, its true power lies in nuanced application and thoughtful execution. Seasoned practitioners may leverage several advanced techniques to extract richer insights and address more complex research questions.
The WOZ method isn’t necessarily a one-off endeavor. Employing it in iterative cycles can yield significant benefits. Initial rounds might focus on broad concept validation and identifying fundamental user reactions. Subsequent iterations can then refine the simulated functionality based on previous findings.
For instance, after an initial study reveals user confusion with a particular interaction flow, the simulation can be adjusted, and a follow-up study can assess the impact of those changes. This iterative approach allows for a more agile and user-centered exploration of complex experiences.
Simulating complex systems can be difficult for one wizard. Breaking complex interactions into smaller, manageable steps is crucial. Consider researching a multi-step onboarding process for a new software application. Instead of one person trying to simulate the entire flow, different aspects could be handled sequentially or even by multiple team members coordinating their responses.
Clear communication protocols and well-defined responsibilities are essential in such scenarios to maintain a seamless user experience.
While qualitative observation is a cornerstone of the WOZ method, defining clear metrics can add a layer of rigor to the findings. These metrics should match research goals. For example, if the goal is to assess the intuitiveness of a new navigation pattern, you might track the number of times users express confusion or the time it takes them to complete specific tasks.
Combining these quantitative measures with qualitative insights provides a more comprehensive understanding of the user experience.
The WOZ method isn’t an island. Its effectiveness can be amplified by integrating it with other research techniques. Preceding a WOZ study with user interviews can help establish a deeper understanding of user needs and mental models, informing the design of the simulated experience. Following a WOZ study, surveys can gather broader quantitative feedback on the concepts explored. For example, after observing users interact with a simulated AI-powered scheduling tool, a survey could gauge their overall trust and perceived usefulness of such a system.
WOZ, as with all methods, has limitations, and there are scenarios where other methods would likely yield more reliable findings.
The wizard’s skill is critical to the method’s success. Training the individual(s) who will be simulating the system is essential. This training should cover the expected system behavior, the scripted responses, and how to handle unexpected input gracefully.
All of this suggests the need for practice in advance of running the actual session. Don’t forget to hold a number of dry runs in which colleagues or other willing helpers not only participate but also deliberately test responses that could stump the wizard or throw things off during a live session.
I suggest having a believable prepared error statement ready to go for when a user throws a curveball. A simple response from the wizard of “I’m sorry, I am unable to perform that task at this time” might be enough to move the session forward while also capturing a potentially unexpected situation your team can address in the final product design.
The debriefing session following the WOZ interaction is an additional opportunity to gather rich qualitative data. Beyond asking “What did you think?”, effective debriefing involves sharing the purpose of the study and the fact that the experience was simulated.
Researchers should then conduct psychological probing to understand the reasons behind user behavior and reactions. Asking open-ended questions like “Why did you try that?” or “What were you expecting to happen when you clicked that button?” can reveal valuable insights into user mental models and expectations.
Exploring moments of confusion, frustration, or delight in detail can uncover key areas for design improvement. Think about what the Power Glove’s development team could have uncovered if they’d asked participants about the experience of programming the glove and trying to remember which moves they’d programmed into which set of keys.
The value of the WOZ method becomes apparent when examining its application in real-world research scenarios. Here is an in-depth review of one scenario and a quick summary of another study involving WOZ, where this technique proved invaluable in shaping user experiences.
A significant challenge in the realm of emerging technologies lies in user comprehension. This was particularly evident when our team began exploring the potential of Agentic AI for enterprise HR software.
Agentic AI refers to artificial intelligence systems that can autonomously pursue goals by making decisions, taking actions, and adapting to changing environments with minimal human intervention. Unlike generative AI that primarily responds to direct commands or generates content, Agentic AI is designed to understand user intent, independently plan and execute multi-step tasks, and learn from its interactions to improve performance over time. These systems often combine multiple AI models and can reason through complex problems. For designers, this signifies a shift towards creating experiences where AI acts more like a proactive collaborator or assistant, capable of anticipating needs and taking the initiative to help users achieve their objectives rather than solely relying on explicit user instructions for every step.
Preliminary research, including surveys and initial interviews, suggested that many HR professionals, while intrigued by the concept of AI assistance, struggled to grasp the potential functionality and practical implications of truly agentic systems — those capable of autonomous action and proactive decision-making. We saw they had no reference point for what agentic AI was, even after we attempted relevant analogies to current examples.
Building a fully functional agentic AI prototype at this exploratory stage was impractical. The underlying algorithms and integrations were complex and time-consuming to develop. Moreover, we risked building a solution based on potentially flawed assumptions about user needs and understanding. The WOZ method offered a solution.
We designed a scenario where HR employees interacted with what they believed was an intelligent AI assistant capable of autonomously handling certain tasks. The facilitator presented users with a web interface where they could request assistance with tasks like “draft a personalized onboarding plan for a new marketing hire” or “identify employees who might benefit from proactive well-being resources based on recent activity.”
Behind the scenes, a designer acted as the wizard. Based on the user’s request and the (simulated) available data, the designer would craft a response that mimicked the output of an agentic AI. For the onboarding plan, this involved assembling pre-written templates and personalizing them with details provided by the user. For the well-being resource identification, the wizard would select a plausible list of employees based on the general indicators discussed in the scenario.
Crucially, the facilitator encouraged users to interact naturally, asking follow-up questions and exploring the system’s perceived capabilities. For instance, a user might ask, “Can the system also schedule the initial team introductions?” The wizard, guided by pre-defined rules and the overall research goals, would respond accordingly, perhaps with a “Yes, I can automatically propose meeting times based on everyone’s calendars” (again, simulated).
As recommended, we debriefed participants following each session. We began with transparency, explaining the simulation and that we had another live human posting the responses to the queries based on what the participant was saying. Open-ended questions explored initial reactions and envisioned use. Task-specific probing, like “Why did you expect that?” revealed underlying assumptions. We specifically addressed trust and control (“How much trust…? What level of control…?”). To understand mental models, we asked how users thought the “AI” worked. We also solicited improvement suggestions (“What features…?”).
By focusing on the “why” behind user actions and expectations, these debriefings provided rich qualitative data that directly informed subsequent design decisions, particularly around transparency, human oversight, and prioritizing specific, high-value use cases. We also had a research participant who understood agentic AI and could provide additional insight based on that understanding.
This WOZ study yielded several crucial insights into user mental models of agentic AI in an HR context, and we made several key design decisions based on those findings.
In another project, we used the WOZ method to evaluate user interaction with a voice interface for controlling in-car functions. Our research question focused on the naturalness and efficiency of voice commands for tasks like adjusting climate control, navigating to points of interest, and managing media playback.
We set up a car cabin simulator with a microphone and speakers. The wizard, located in an adjacent room, listened to the user’s voice commands and triggered the corresponding actions (simulated through visual changes on a display and audio feedback). This allowed us to identify ambiguous commands, areas of user frustration with voice recognition (even though it was human-powered), and preferences for different phrasing and interaction styles before investing in complex speech recognition technology.
These examples illustrate the versatility and power of the method in addressing a wide range of UX research questions across diverse product types and technological complexities. By simulating functionality, we can gain invaluable insights into user behavior and expectations early in the design process, leading to more user-centered and ultimately more successful products.
The WOZ method, far from being a relic of simpler technological times, retains relevance as we navigate increasingly sophisticated and often opaque emerging technologies.
The WOZ method’s core strength, the ability to simulate complex functionality with human ingenuity, makes it uniquely suited for exploring user interactions with systems that are still in their nascent stages.
WOZ In The Age Of AI
Consider the burgeoning field of AI-powered experiences. Researching user interaction with generative AI, for instance, can be effectively done through WOZ. A wizard could curate and present AI-generated content (text, images, code) in response to user prompts, allowing researchers to assess user perceptions of quality, relevance, and trust without needing a fully trained and integrated AI model.
Similarly, for personalized recommendation systems, a human could simulate the recommendations based on a user’s stated preferences and observed behavior, gathering valuable feedback on the perceived accuracy and helpfulness of such suggestions before algorithmic development.
Even autonomous systems, seemingly the antithesis of human control, can benefit from WOZ studies. By simulating the autonomous behavior in specific scenarios, researchers can explore user comfort levels, identify needs for explainability, and understand how users might want to interact with or override such systems.
Virtual And Augmented Reality
Immersive environments like virtual and augmented reality present new frontiers for user experience research. WOZ can be particularly powerful here.
Imagine testing a novel gesture-based interaction in VR. A researcher tracking the user’s hand movements could trigger corresponding virtual events, allowing for rapid iteration on the intuitiveness and comfort of these interactions without the complexities of fully programmed VR controls. Similarly, in AR, a wizard could remotely trigger the appearance and behavior of virtual objects overlaid onto the real world, gathering user feedback on their placement, relevance, and integration with the physical environment.
The Human Factor Remains Central
Despite the rapid advancements in artificial intelligence and immersive technologies, the fundamental principles of human-centered design remain as relevant as ever. Technology should serve human needs and enhance human capabilities.
The WOZ method inherently focuses on understanding user reactions and behaviors and acts as a crucial anchor in ensuring that technological progress aligns with human values and expectations.
It allows us to inject the “human factor” into the design process of even the most advanced technologies. Doing this may help ensure these innovations are not only technically feasible but also truly usable, desirable, and beneficial.
The WOZ method stands as a powerful and versatile tool in the UX researcher’s toolkit. Its ability to bypass the limitations of early-stage development and directly elicit user feedback on conceptual experiences offers invaluable advantages. We’ve explored its core mechanics and covered ways of maximizing its impact. We’ve also examined its practical application through real-world case studies, including its crucial role in understanding user interaction with nascent technologies like agentic AI.
The strategic implementation of the WOZ method provides a potent means of de-risking product development. By validating assumptions, uncovering unexpected user behaviors, and identifying potential usability challenges early on, teams can avoid costly rework and build products that truly resonate with their intended audience.
I encourage all UX practitioners, digital product managers, and those who collaborate with research teams to consider incorporating the WOZ method into their research toolkit. Experiment with its application in diverse scenarios, adapt its techniques to your specific needs, and don’t be afraid to have fun with it. Scarecrow costume optional.
Meet Accessible UX Research, A Brand-New Smashing Book

Vitaly Friedman
UX research can take so much of the guesswork out of the design process! But it’s easy to forget just how different people are and how their needs and preferences can vary. We can’t predict the needs of every user, but we also shouldn’t expect different people to use the product in roughly the same way. That’s how we end up with an incomplete, inaccurate, or simply wrong picture of our customers.
There is no shortage of accessibility checklists and guidelines. But accessibility isn’t a checklist. It doesn’t happen by accident. It’s a dedicated effort to include, consider, and understand the different needs of different users to make sure everyone can use our products successfully. That’s why we’ve teamed up with Michele A. Williams on a shiny new book around just that.
Meet Accessible UX Research, your guide to making UX research more inclusive of participants with different needs — from planning and recruiting to facilitation, asking better questions, avoiding bias, and building trust. Pre-order the book.

The book isn’t a checklist for you to complete as a part of your accessibility work. It’s a practical guide to inclusive UX research, from start to finish. If you’ve ever felt unsure how to include disabled participants, or worried about “getting it wrong,” this book is for you. You’ll get clear, practical strategies to make your research more inclusive, effective, and reliable.
Inside, you’ll learn how to plan and recruit inclusively, facilitate sessions with participants with different needs, ask better questions, avoid bias, and build trust.
The book also challenges common assumptions about disability and urges readers to rethink what inclusion really means in UX research and beyond. Let’s move beyond compliance and start doing research that reflects the full diversity of your users. Whether you’re in industry or academia, this book gives you the tools — and the mindset — to make it happen.
High-quality hardcover. Written by Dr. Michele A. Williams. Cover art by Espen Brunborg. Print shipping in August 2025. eBook available for download later this summer. Pre-order the book.
Whether you’re a UX professional who conducts research in industry or academia, or more broadly part of an engineering, product, or design function, this book is for you.

Dr. Michele A. Williams is the owner of M.A.W. Consulting, LLC – Making Accessibility Work. Her 20+ years of experience include influencing top tech companies as a Senior User Experience (UX) Researcher and Accessibility Specialist and obtaining a PhD in Human-Centered Computing focused on accessibility. An international speaker, published academic author, and patented inventor, she is passionate about educating and advising on technology that does not exclude disabled users.
“Accessible UX Research stands as a vital and necessary resource. In addressing disability at the User Experience Research layer, it helps to set an equal and equitable tone for products and features that resonates through the rest of the creation process. The book provides a solid framework for all aspects of conducting research efforts, including not only process considerations, but also importantly the mindset required to approach the work.
This is the book I wish I had when I was first getting started with my accessibility journey. It is a gift, and I feel so fortunate that Michele has chosen to share it with us all.”
Eric Bailey, Accessibility Advocate
“User research in accessibility is non-negotiable for actually meeting users’ needs, and this book is a critical piece in the puzzle of actually doing and integrating that research into accessibility work day to day.”
Devon Pershing, Author of The Accessibility Operations Guidebook
“Our decisions as developers and designers are often based on recommendations, assumptions, and biases. Usually, this doesn’t work, because checking off lists or working solely from our own perspective can never truly represent the depth of human experience. Michele’s book provides you with the strategies you need to conduct UX research with diverse groups of people, challenge your assumptions, and create truly great products.”
Manuel Matuzović, Author of the Web Accessibility Cookbook
“This book is a vital resource on inclusive research. Michele Williams expertly breaks down key concepts, guiding readers through disability models, language, and etiquette. A strong focus on real-world application equips readers to conduct impactful, inclusive research sessions. By emphasizing diverse perspectives and proactive inclusion, the book makes a compelling case for accessibility as a core principle rather than an afterthought. It is a must-read for researchers, product-makers, and advocates!”
Anna E. Cook, Accessibility and Inclusive Design Specialist
Producing a book takes quite a bit of time, and we couldn’t pull it off without the support of our wonderful community. A huge shout-out to Smashing Members for the kind, ongoing support. The eBook is and always will be free for Smashing Members as soon as it’s out. Plus, Members get a friendly discount when purchasing their printed copy. Just sayin’! 😉
Promoting best practices and providing you with practical tips to master your daily coding and design challenges has always been (and will be) at the core of everything we do at Smashing.
In the past few years, we were very lucky to have worked together with some talented, caring people from the web community to publish their wealth of experience as printed books that stand the test of time. Addy, Heather, and Steven are three of these people. Have you checked out their books already?
A deep dive into how production sites of different sizes tackle performance, accessibility, capabilities, and developer experience at scale.
Everything you need to know to put your users first and make a better web.
Learn how touchscreen devices really work — and how people really use them.
What I Wish Someone Told Me When I Was Getting Into ARIA

Eric Bailey
If you haven’t encountered ARIA before, great! It’s a chance to learn something new and exciting. If you have heard of ARIA before, this might help you better understand it or maybe even teach you something new!
These are all things I wish someone had told me when I was getting started on my web accessibility journey, so this post collects them in one place.
It is my hope that in doing so, this post will help make an oft-overlooked yet vital corner of web design and development easier to approach.
This is not a recipe book for how to use ARIA to build accessible websites and web apps. It is also not a guide for how to remediate an inaccessible experience. A lot of accessibility work is highly contextual. I do not know the specific needs of your project or organization, so trying to give advice here could easily do more harm than good.
Instead, think of this post as a “know before you go” guide. I’m hoping to give you a good headspace to approach ARIA, as well as highlight things to watch out for when you undertake your journey. So, with that out of the way, let’s dive in!
ARIA is what you turn to if there is not a native HTML element or attribute that is better suited for the job of communicating interactivity, purpose, and state.
Think of it like a spice that you sprinkle into your markup to enhance things.
Adding ARIA to your HTML markup is a way of providing additional information to a website or web app for screen readers and voice control software.
Consider, as an example, a toggle button enhanced with aria-pressed:

- Declaring a button element will instruct assistive technology to report it as a button, letting someone know that it can be activated to perform a predefined action.
- Declaring aria-pressed="true" means that someone or something has previously activated the button, and it is now in a “pushed in” state that sustains its action.

This overall pattern will let people who use assistive technology know what the control is, what it does, and what state it is currently in.
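Wiring up that state is the developer’s job. Here is a minimal sketch of the toggle logic, assuming the button from the pattern above is already present in the page:

// Flip aria-pressed on each activation so assistive technology
// can report the button's current state.
const toggleButton = document.querySelector('button[aria-pressed]');

toggleButton.addEventListener('click', () => {
  const isPressed = toggleButton.getAttribute('aria-pressed') === 'true';
  toggleButton.setAttribute('aria-pressed', String(!isPressed));
  // The sustained action itself (e.g., muting audio) would go here.
});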
ARIA has been around for a long time, with the first version published on September 26th, 2006.

ARIA was created to provide a bridge between the limitations of HTML and the need for making interactive experiences understandable by assistive technology.
The latest version of ARIA is version 1.2, published on June 6th, 2023. Version 1.3 is slated to be released relatively soon, and you can read more about it in this excellent article by Craig Abbott.
You may also see it referred to as WAI-ARIA, where WAI stands for “Web Accessibility Initiative.” The WAI is part of the W3C, the organization that sets standards for the web. That said, most accessibility practitioners I know call it “ARIA” in written and verbal communication and leave out the “WAI-” part.
Why does 2006 matter? The reason is simple: The web was a lot less mature then than it is now. The most popular operating system in 2006 was Windows XP. The iPhone didn’t exist yet; it was released a year later.
From a very high level, ARIA is a snapshot of the operating system interaction paradigms of this time period. This is because ARIA recreates them.

Smartphones with features like tappable, swipeable, and draggable surfaces were far less commonplace. Single Page Application “web app” experiences were also rare, with Ajax-based approaches being the most popular. This means that we have to build the experiences of today using the technology of 2006. In a way, this is a good thing. It forces us to take new and novel experiences and interrogate them.
Interactions that cannot be broken down into smaller, more focused pieces that map to ARIA patterns are most likely inaccessible. This is because they won’t be able to be operated by assistive technology or function on older or less popular devices.
I may be biased, but I also think these sorts of novel interactions that can’t translate also serve as a warning that a general audience will find them to be confusing and, therefore, unusable. This belief is important to consider given the incredibly wide range of people, devices, and contexts the internet serves.
Contemporary expectations for keyboard-based interaction for web content — checkboxes, radios, modals, accordions, and so on — are sourced from Windows XP and its predecessor operating systems. These interaction models are carried forward as muscle memory for older people who use assistive technology. Younger people who rely on assistive technology also learn these de facto standards, thus continuing the cycle.
What does this mean for you? Someone using a keyboard to interact with your website or web app will most likely try these Windows OS-based keyboard shortcuts first: Tab to move between interactive controls, Enter or Space to activate them, Esc to dismiss dialogs, and arrow keys to move within composite components like radio groups.
This is not to say that ARIA has stagnated. It is constantly being worked on with new additions, removals, and clarifications. Remember, it is now at version 1.2, with version 1.3 arriving soon.
In parallel, HTML as a language also reflects this evolution. Elements were originally created to support a document-oriented web and have been gradually evolving to support more dynamic, app-like experiences. The great bit here is that this is all conducted in the open and is something you can contribute to if you feel motivated to do so.
There are five rules included in ARIA’s documentation to help steer how you approach it:
- First, if you can use a native HTML element or attribute with the semantics and behavior you need, do so. Use an anchor element (<a>) for a link rather than a div with a click handler and a role of link.
- Second, do not change native semantics unless you really have to; reach for a more appropriate element instead of repurposing a div.
- Third, all interactive ARIA controls must be usable with the keyboard.
- Fourth, do not declare role="presentation" or aria-hidden="true" on a focusable element.
- Fifth, all interactive elements must have an accessible name, such as the text string inside a button element.

Observing these five rules will do a lot to help you out. The following is more context to provide even more support.
There is a structured grammar to ARIA, and it is centered around roles, as well as states and properties.
A Role is what assistive technology reads and then announces. A lot of people refer to this in shorthand as semantics. HTML elements have implied roles, which is why an anchor element will be announced as a link by screen readers with no additional work.

Implied roles are almost always better to use if the use case calls for them. Recall the first rule of ARIA here. This is usually what digital accessibility practitioners refer to when they say, “Just use semantic HTML.”
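As a quick sketch of what that looks like in markup (the data-href attribute stands in for whatever custom plumbing such a div would need):

<!-- Don't do this -->
<div role="link" tabindex="0" data-href="/about/">
  About us
</div>

<!-- Do this instead: the implied role comes for free -->
<a href="/about/">
  About us
</a>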
There are many reasons for favoring implied roles. The main consideration is better guarantees of support across an unknown number of operating systems, browsers, and assistive technology combinations.
Roles have categories, each with its own purpose. The Abstract role category is notable in that it is an organizing supercategory not intended to be used by authors:
Abstract roles are used for the ontology. Authors MUST NOT use abstract roles in content.
<!-- This won't work, don't do it -->
<h2 role="sectionhead">
Anatomy and physiology
</h2>
<!-- Do this instead -->
<section aria-labelledby="anatomy-and-physiology">
<h2 id="anatomy-and-physiology">
Anatomy and physiology
</h2>
</section>
Additionally, in the same way you can only declare ARIA on certain things, you can only declare some ARIA as children of other ARIA declarations. An example of this is the listitem role, which requires a role of list to be present on its parent element.
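For example, here is a minimal sketch of that parent/child requirement (native ul and li elements would, per the first rule of ARIA, be the better choice when possible):

<!-- role="listitem" only works inside a parent role="list" -->
<div role="list">
  <div role="listitem">First item</div>
  <div role="listitem">Second item</div>
</div>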
So, what’s the best way to determine if a role requires a parent declaration? The answer is to review the official definition.
States and properties are the other two main parts of ARIA‘s overall taxonomy.
Implicit roles are provided by semantic HTML, and explicit roles are provided by ARIA. Both describe what an element is. States describe that element’s characteristics in a way that assistive technology can understand. This is done via property declarations and their companion values.

ARIA states can change quickly or slowly, both as a result of human interaction as well as application state. When the state is changed as a result of human interaction, it is considered an “unmanaged state.” Here, a developer must supply the underlying JavaScript logic to control the interaction.
When the state changes as a result of the application (e.g., operating system, web browser, and so on), this is considered “managed state.” Here, the application automatically supplies the underlying logic.
Think of ARIA as an extension of HTML attributes, a suite of name/value pairs. Some values are predefined, while others are author-supplied:

For the examples in the previous graphic, the polite value for aria-live is one of the three predefined values (off, polite, and assertive). For aria-label, “Save” is a text string manually supplied by the author.
You declare ARIA on HTML elements the same way you declare other attributes:
<!--
Applies an id value of
"carrot" to the div
-->
<div id="carrot"></div>
<!--
Hides the content of this paragraph
element from assistive technology
-->
<p aria-hidden="true">
Assistive technology can't read this
</p>
<!--
Provides an accessible name of "Stop",
and also communicates that the button
is currently pressed. A type property
with a value of "button" prevents
browser form submission.
-->
<button
aria-label="Stop"
aria-pressed="true"
type="button">
<!-- SVG icon -->
</button>
Other usage notes:

- Multiple ARIA attributes can be declared on a single element, alongside other attributes such as class or id. The order of declarations does not matter here, either.
- It might also be helpful to know that boolean attributes are treated a little differently in ARIA when compared to HTML. Hidde de Vries writes about this in his post, “Boolean attributes in HTML and ARIA: what’s the difference?”.
In this context, “hardcoding” means directly writing a static attribute or value declaration into your component, view, or page.
A lot of ARIA is designed to be applied or conditionally modified dynamically based on application state or as a response to someone’s action. An example of this is a show-and-hide disclosure pattern:
- The aria-expanded attribute is toggled from false to true to communicate if the disclosure is in an expanded or collapsed state.
- The hidden attribute is conditionally removed or added in tandem to show or hide the disclosure’s full content area.

<div class="disclosure-container">
<button
aria-expanded="false"
class="disclosure-toggle"
type="button">
How we protect your personal information
</button>
<div
hidden
class="disclosure-content">
<ul>
<li>Fast, accurate, thorough and non-stop protection from cyber attacks</li>
<li>Patching practices that address vulnerabilities that attackers try to exploit</li>
<li>Data loss prevention practices help to ensure data doesn't fall into the wrong hands</li>
<li>Supply risk management practices help ensure our suppliers adhere to our expectations</li>
</ul>
<p>
<a href="/security/">Learn more about our security best practices</a>.
</p>
</div>
</div>
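The markup alone is only half of the pattern. A minimal sketch of the companion JavaScript, assuming the class names from the example above:

// Toggle the disclosure: update aria-expanded for assistive
// technology and the hidden attribute in tandem.
const disclosureToggle = document.querySelector('.disclosure-toggle');
const disclosureContent = document.querySelector('.disclosure-content');

disclosureToggle.addEventListener('click', () => {
  const isExpanded = disclosureToggle.getAttribute('aria-expanded') === 'true';
  disclosureToggle.setAttribute('aria-expanded', String(!isExpanded));
  disclosureContent.hidden = isExpanded;
});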
A common example of a hardcoded ARIA declaration you’ll encounter on the web is making an SVG icon inside a button decorative:
<button type="button">
<svg aria-hidden="true">
<!-- SVG code -->
</svg>
Save
</button>
Here, the string “Save” is what is required for someone to understand what the button will do when they activate it. The accompanying icon helps that understanding visually but is considered redundant and therefore decorative.
An implied role is all you need if you’re using semantic HTML. Explicitly declaring its role via ARIA does not confer any additional advantages.
<!--
You don't need to declare role="button" here.
Using the <button> element will make assistive
technology announce it as a button. The
role="button" declaration is redundant.
-->
<button role="button">
Save
</button>
You might occasionally run into these redundant declarations on HTML sectioning elements, such as <main role="main">, or <footer role="contentinfo">. This isn’t needed anymore, and you can just use the <main> or <footer> elements.
The reason for this is historic. These declarations were done for support reasons, in that it was a stop-gap technique for assistive technology that needed to be updated to support these new-at-the-time HTML elements.
Contemporary assistive technology does not need these redundant declarations. Think of it the same way that we don’t have to use vendor prefixes for the CSS border-radius property anymore.
Note: There is an exception to this guidance. There are circumstances where certain complex and complicated markup patterns don’t work as expected for assistive technology. In these cases, we want to hardcode the implicit role as explicit ARIA to ensure it works. This assistive technology support concern is covered in more detail later in this post.
Both implicit and explicit roles are announced by screen readers. You don’t need to include that part for things like the interactive element’s text string or an aria-label.
<!-- Don't do this -->
<button
aria-label="Save button"
type="button">
<!-- Icon SVG -->
</button>
<!-- Do this instead -->
<button
aria-label="Save"
type="button">
<!-- Icon SVG -->
</button>
Had we used the string value of “Save button” for our Save button, a screen reader would announce it along the lines of, “Save button, button.” That’s redundant and confusing.
We sometimes refer to website and web app navigation colloquially as menus, especially if it’s an e-commerce-style mega menu.
In ARIA, menus mean something very specific. Don’t think of global or in-page navigation or the like. Think of menus in this context as what appears when you click the Edit menu button on your application’s menubar.

Using a role improperly because its name seems like an appropriate fit at first glance creates confusion for people who do not have the context of the visual UI. Their expectations will be set with the announcement of the role, then subverted when it does not act the way it is supposed to.
Imagine if you click on a link, and instead of taking you to another webpage, it sends something completely unrelated to your printer instead. It’s sort of like that.
Declaring role="menu" is a common example of a misapplied role, but there are others. The best way to know what a role is used for? Go straight to the source and read up on it.
Certain roles prohibit being given an accessible name. These roles are caption, code, deletion, emphasis, generic, insertion, paragraph, presentation, strong, subscript, and superscript.
This means you can try and provide an accessible name for one of these elements — say via aria-label — but it won’t work because it’s disallowed by the rules of ARIA’s grammar.
<!-- This won't work-->
<strong aria-label="A 35% discount!">
$39.95
</strong>
<!-- Neither will this -->
<code title="let JavaScript example">
let submitButton = document.querySelector('button[type="submit"]');
</code>
For these examples, recall that the role is implicit, sourced from the declared HTML element.
Note here that sometimes a browser will make an attempt regardless and overwrite the author-specified string value. This overriding is a confusing act for all involved, which led to the rule being established in the first place.
I’ve witnessed some developers guess-adding CSS classes, such as .background-red or .text-white, to their markup and being rewarded if the design visually updates correctly.
The reason this works is that someone previously added those classes to the project. With ARIA, the people who define the vocabulary we can use are the Accessible Rich Internet Applications Working Group. This means each new version of ARIA has a predefined set of properties and values. Assistive technology is then updated to parse those attributes and values, although this isn’t always a guarantee.
Declaring ARIA, which isn’t part of that predefined set, means assistive technology won’t know what it is and consequently won’t announce it.
<!--
There is no "selectpanel" role in ARIA.
Because of this, this code will be announced
as a button and not as a select panel.
-->
<button
role="selectpanel"
type="button">
Choose resources
</button>
This speaks to the previous section, where ARIA won’t understand words spoken to it that exist outside its limited vocabulary.
There are no console errors for malformed ARIA. There’s also no alert dialog, beeping sound, or flashing light for your operating system, browser, or assistive technology. This fact is yet another reason why it is so important to test with actual assistive technology.
You don’t have to be an expert here, either. There is a good chance your code needs updating if you set something to announce as a specific state and assistive technology in its default configuration does not announce that state.
Applying ARIA to something does not automatically “unlock” capabilities. It only sends a hint to assistive technology about how the interactive content should behave.
For assistive technology like screen readers, that hint could be for how to announce something. For assistive technology like refreshable Braille displays, it could be for how it raises and lowers its pins. For example, declaring role="button" on a div element does not automatically make it clickable. You will still need to (as sketched below):

- Listen for click events on the div element in JavaScript,
- Make the div focusable with a tabindex declaration, and
- Recreate keyboard activation with the Enter and Space keys.

This all makes me wonder why you can’t save yourself some work and use a button element in the first place, but that is a different story for a different day.
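For completeness, here is a minimal sketch of that extra work, assuming a div with role="button" is already in the DOM:

// Everything a native <button> would have given us for free:
const fakeButton = document.querySelector('div[role="button"]');

// 1. Make the div reachable with the keyboard.
fakeButton.tabIndex = 0;

// 2. Perform the action on click.
fakeButton.addEventListener('click', () => {
  // The button's actual action goes here.
});

// 3. Recreate native keyboard activation (Enter and Space).
fakeButton.addEventListener('keydown', (event) => {
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault();
    fakeButton.click();
  }
});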
Additionally, adjusting an element’s role via ARIA does not modify the element’s native functionality. For example, you can declare role="img" on a div element. However, attempting to declare the alt or src attributes on the div won’t work. This is because alt and src are not supported attributes for div.

This speaks to the previous section on ARIA only exposing something’s presence. Don’t forget that certain HTML elements have primary and secondary interactive capabilities built into them.
For example, an anchor element’s primary capability is navigating to whatever URL value is provided for its href attribute. Secondary capabilities for an anchor element include copying the URL value, opening it in a new tab or incognito window, and so on.

These secondary capabilities are still preserved. However, it may not be apparent to someone that they can use them — or use them in the way that they’d expect — depending on what is announced.
The opposite is also true. When an element has no capabilities, having its role adjusted does not grant it any new abilities. Remember, ARIA only announces. This is why that div with a role of button assigned to it won’t do anything when clicked if no companion JavaScript logic is also present.

A lot of the previous content may make it seem like ARIA is something you should avoid using altogether. This isn’t true. Know that this guidance is written to help steer you to situations where HTML does not offer the capability to describe an interaction out of the box. This space is where you want to use ARIA.
Knowing how to identify this area requires spending some time learning what HTML elements there are, as well as what they are and are not used for. I quite like HTML5 Doctor’s Element Index for upskilling on this.
This is analogous to how HTML has both global attributes and attributes that can only be used on a per-element basis. For example, aria-describedby can be used on any HTML element or role. However, aria-posinset can only be used with article, comment, listitem, menuitem, option, radio, row, and tab roles. Remember here that these roles can be provided by either HTML or ARIA.
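For instance, a minimal sketch of aria-posinset on a qualifying role (the values here are illustrative):

<!-- aria-posinset is only valid on certain roles, like listitem -->
<div role="list">
  <div role="listitem" aria-posinset="3" aria-setsize="10">
    Item three of ten
  </div>
</div>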
Learning what states require which roles can be achieved by reading the official reference. Check for the “Used in Roles” portion of each entry’s characteristics:

(Screenshot: the “Used in Roles” characteristic listed for aria-setsize.)
Automated code scanners — like axe, WAVE, ARC Toolkit, Pa11y, equal-access, and so on — can catch this sort of thing if they are written in error. I’m a big fan of implementing these sorts of checks as part of a continuous integration strategy, as it makes it a code quality concern shared across the whole team.
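As one hedged sketch of what that can look like, here is a test using axe-core via its Playwright integration; the URL and test setup are assumptions, and your project’s tooling may differ:

// Fail the CI run when axe-core detects accessibility violations.
// Assumes @playwright/test and @axe-core/playwright are installed.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com/'); // Hypothetical URL.
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});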
Speaking of technology that listens, it is helpful to know that the ARIA you declare instructs the browser to speak to the operating system the browser is installed on. Assistive technology then listens to what the operating system reports. It then communicates that to the person using the computer, tablet, smartphone, and so on.

A person can then instruct assistive technology to request the operating system to take action on the web content displayed in the browser.

This interaction model is by design. It is done to make interaction from assistive technology indistinguishable from interaction performed without assistive technology.
There are a few reasons for this approach. The most important one is it helps preserve the privacy and autonomy of the people who rely on assistive technologies.
This support issue was touched on earlier and is a difficult fact to come to terms with.
Contemporary developers enjoy the hard-fought, hard-won benefits of the web standards movement. This means you can declare HTML and know that it will work with every major browser out there. ARIA does not have this. Each assistive technology vendor has its own interpretation of the ARIA specification. Oftentimes, these interpretations are convergent. Sometimes, they’re not.
Assistive technology vendors also have support roadmaps for their products, and each vendor decides what to support, and when, on its own schedule.
There is also the operating system layer to contend with, which I’ll cover in more detail in a little bit. Here, the mechanisms used to communicate with assistive technology are dusty, oft-neglected areas of software development.
With these layers comes a scenario where the assistive technology can support the ARIA declared, but the operating system itself cannot communicate the ARIA’s presence, or vice-versa. The reasons for this are varied but ultimately boil down to a historic lack of support, prioritization, and resources. However, I am optimistic that this is changing.
Additionally, there is no equivalent to Caniuse, Baseline, or Web Platform Status for assistive technology. The closest analog we have to support checking resources is a11ysupport.io, but know that it is the painstaking work of a single individual. Its content may not be up-to-date, as the work is both Herculean in its scale and Sisyphean in its scope. Because of this, I must re-stress the importance of manually testing with assistive technology to determine if the ARIA you use works as intended.
How To Determine ARIA Support
There are three main layers to determine if something is supported: the operating system, the assistive technology itself, and the browser.
Each operating system (e.g., Windows, macOS, Linux) has its own way of communicating what content is present to assistive technology. Each piece of assistive technology has to accommodate how to parse that communication.
Some assistive technology is incompatible with certain operating systems. An example of this is not being able to use VoiceOver with Windows, or JAWS with macOS. Furthermore, each version of each operating system has slight variations in what is reported and how. Sometimes, the operating system needs to be updated to “teach” it the updated ARIA vocabulary. Also, do not forget that things like bugs and regressions can occur.
There is no “one true way” to make assistive technology. Each one is built to address different access needs and wants and is done so in an opinionated way — think how different web browsers have different features and UI.
Each piece of assistive technology that consumes web content has its own way of communicating this information, and this is by design. It works with what the operating system reports, filtered through things like heuristics and preferences.

(Screenshot: how assistive technology announces an aria-label declaration.)
Like operating systems, assistive technology also has different versions with what each version is capable of supporting. They can also be susceptible to bugs and regressions.
Another two factors worth pointing out here are upgrade hesitancy and lack of financial resources. Some people who rely on assistive technology are hesitant to upgrade it. This is based on a very understandable fear of breaking an important mechanism they use to interact with the world. This, in turn, translates to scenarios like holding off on updates until absolutely necessary, as well as disabling auto-updating functionality altogether.
Lack of financial resources is sometimes referred to as the disability or crip tax. Employment rates tend to be lower for disabled populations, and with that comes less money to spend on acquiring new technology and updating it. This concern can and does apply to operating systems, browsers, and assistive technology.
Some assistive technology works better with one browser compared to another. This is due to the underlying mechanics of how the browser reports its content to assistive technology. Using Firefox with NVDA is an example of this.
Additionally, the support for this reporting sometimes only gets added for newer versions. Unfortunately, it also means support can sometimes accidentally regress, and people don’t notice before releasing the browser update — again, this is due to a historic lack of resources and prioritization.
Common ARIA declarations you’ll come across include, but are not limited to:
- aria-label,
- aria-labelledby,
- aria-describedby,
- aria-hidden, and
- aria-live.

These are more common because they’re more supported. They are more supported because many of these declarations have been around for a while. Recall the previous section that discussed actual assistive technology support compared to what the ARIA specification supplies.
Newer, more esoteric ARIA, or historically deprioritized declarations, may not have that support yet or may never. An example of how complicated this can get is aria-controls.
aria-controls is a part of ARIA that has been around for a while. JAWS had support for aria-controls, but then removed it after user feedback. Meanwhile, every other screen reader I’m aware of never bothered to add support.
What does that mean for us? Determining support, or lack thereof, is best accomplished by manual testing with assistive technology.
This fact takes into consideration the complexities in preferences, different levels of support, bugs, regressions, and other concerns that come with ARIA’s usage.
Philosophically, it’s a lot like adding more interactive complexity to your website or web app via JavaScript. The larger the surface area your code covers, the bigger the chance something unintended happens.
Consider the amount of ARIA added to a component or discrete part of your experience. The more of it there is declared nested into the Document Object Model (DOM), the more it interacts with parent ARIA declarations. This is because assistive technology reads what the DOM exposes to help determine intent.
A lot of contemporary development efforts are isolated, feature-based work that focuses on one small portion of the overall experience. Because of this, they may not take this holistic nesting situation into account. This is another reason why — you guessed it — manual testing is so important.
Anecdotally, WebAIM’s annual Millions report — an accessibility evaluation of the top 1,000,000 websites — touches on this phenomenon:
Increased ARIA usage on pages was associated with higher detected errors. The more ARIA attributes that were present, the more detected accessibility errors could be expected. This does not necessarily mean that ARIA introduced these errors (these pages are more complex), but pages typically had significantly more errors when ARIA was present.
There is a chance that ARIA, which is authored inaccurately, will actually function as intended with assistive technology. While I do not recommend betting on this fact to do your work, I do think it is worth mentioning when it comes to things like debugging.
This is due to the wide range of familiarity there is with people who author ARIA.
Some of the more mature assistive technology vendors try to accommodate the lower end of this familiarity. This is done in order to better enable the people who use their software to actually get what they need.
There isn’t an exhaustive list of what accommodations each piece of assistive technology has. Think of it like the forgiving nature of a browser’s HTML parser, where the ultimate goal is to render content for humans.
aria-label Is Tricky

aria-label is one of the most common ARIA declarations you'll run across. It's also one of the most misused.
aria-label can’t be applied to non-interactive HTML elements, but oftentimes is. It can’t always be translated and is oftentimes overlooked for localization efforts. Additionally, it can make things frustrating to operate for people who use voice control software, where the visible label differs from what the underlying code uses.
Another problem is when it overrides an interactive element’s pre-existing accessible name. For example:
<!-- Don't do this -->
<a
aria-label="Our services"
href="/services/">
Services
</a>
This is a violation of WCAG Success Criterion 2.5.3: Label in Name, pure and simple. I have also seen it used as a way to provide a control hint. This is also a WCAG failure, in addition to being an antipattern:
<!-- Also don't do this -->
<a
aria-label="Click this link to learn more about our unique and valuable services"
href="/services/">
Services
</a>
These factors — along with other considerations — are why I consider aria-label a code smell.
aria-live Is Even Trickier

Live region announcements are powered by aria-live and are an important part of communicating updates to an experience to people who use screen readers.
Believe me when I say that getting aria-live to work properly is tricky, even under the best of scenarios. I won’t belabor the specifics here. Instead, I’ll point you to “Why are my live regions not working?”, a fantastic and comprehensive article published by TetraLogical.
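For orientation only, and bearing in mind the pitfalls that article covers, the basic shape of a live region is a container that exists in the DOM before its content changes. A minimal, hypothetical sketch:

<!-- The live region must already be present on page load -->
<div aria-live="polite" id="status"></div>

// Later, in response to a user action, updating the region's
// content prompts screen readers to announce the new text
document.querySelector('#status').textContent = 'Your changes were saved.';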
Also referred to as the APG, the ARIA Authoring Practices Guide should be treated with a decent amount of caution.

The guide was originally authored to help demonstrate ARIA’s capabilities. As a result, its code examples near-exclusively, overwhelmingly, and disproportionately favor ARIA.
Unfortunately, the APG’s latest redesign also makes it far more approachable-looking than its surrounding W3C documentation. This is coupled with demonstrating UI patterns in a way that signals it’s a self-serve resource whose code can be used out of the box.
These factors create a scenario where people assume everything can be used as presented. This is not true.
Recall that just because ARIA is listed in the spec does not necessarily guarantee it is supported. Adrian Roselli writes about this in detail in his post, “No, APG’s Support Charts Are Not ‘Can I Use’ for ARIA”.
Also, remember the first rule of ARIA and know that an ARIA-first approach is counter to the specification’s core philosophy of use.
In my experience, this has led to developers assuming they can copy-paste code examples, or reference how they're structured in their own efforts, and everything will just work. When it doesn't, the result is mass frustration.
This is to say nothing about things like timelines and resourcing, working relationships, reputation, and brand perception.
The APG’s main strength is highlighting what keyboard keypresses people will expect to work on each pattern.
Consider the listbox pattern. It details keypresses you may expect (arrow keys, Space, and Enter), as well as less-common ones (typeahead selection and making multiple selections). Here, we need to remember that ARIA is based on the Windows XP era. The keyboard-based interaction the APG suggests is built from the muscle memory established from the UI patterns used on this operating system.
While your tree view component may look visually different from the one on your operating system, people will expect it to be keyboard operable in the same way. Honoring this expectation will go a long way to ensuring your experiences are not only accessible but also intuitive and efficient to use.
Another strength of the APG is giving standardized, centralized names to UI patterns. Is it a dropdown? A listbox? A combobox? A select menu? Something else?
When it comes to digital accessibility, these terms all have specific meanings, as well as expectations that come with them. Having a common vocabulary when discussing how an experience should work goes a long way to ensuring everyone will be on the same page when it comes time to make and maintain things.
VoiceOver on macOS has been experiencing a lot of problems over the last few years. If I had to wager a guess as to why, as an outsider, it would be that Apple's priorities are focused elsewhere.
The bulk of web development efforts are conducted on macOS. This means that well-intentioned developers will reach for VoiceOver, as it comes bundled with macOS and is therefore more convenient. However, macOS VoiceOver usage has a drastic minority share for desktops and laptops. It is under 10% of usage, with Windows-based JAWS and NVDA occupying a combined 78.2% majority share.

The sad, sorry truth of the matter is that macOS VoiceOver, in its current state, has a lot of problems. It should only be used to confirm that it can operate the experience the way Windows-based screen readers can.
This means testing on Windows with NVDA or JAWS will create an experience that is far more accurate to what most people who use screen readers on a laptop or desktop will experience.
Because of this situation, I heavily encourage a workflow that involves testing on Windows with NVDA or JAWS first, then confirming with macOS VoiceOver.
Most of the time, I find myself having to declare redundant ARIA on the semantic HTML I write in order to address missed expected announcements for macOS VoiceOver.
macOS VoiceOver testing is still important to do. It is not the fault of the people who use macOS VoiceOver that the software is buggy, and we should ensure they can still have access.
You can use apps like VirtualBox and Windows evaluation Virtual Machines to use Windows in your macOS development environment. Services like AssistivLabs also make on-demand, preconfigured testing easy.
What About iOS VoiceOver?
Despite sharing the same name, VoiceOver on iOS is a completely different animal. As software, it is separate from its desktop equivalent and also enjoys a whopping 70.6% usage share.
With this in mind, it's also important to test the ARIA you write on mobile to make sure it works as intended.
ARIA attributes can be targeted via CSS the way other HTML attributes can. Consider this HTML markup for the main navigation portion of a small e-commerce site:
<nav aria-label="Main">
<ul>
<li><a href="/home/">Home</a></li>
<li><a href="/products/">Products</a></li>
<li><a aria-current="true" href="/about-us/">About Us</a></li>
<li><a href="/contact/">Contact</a></li>
</ul>
</nav>
The presence of aria-current="true" on the “About Us” link will tell assistive technology to announce that it is the current part of the site someone is on if they are navigating through the main site navigation.
We can also tie that indicator of being the current part of the site into something that is shown visually. Here’s how you can target the attribute in CSS:
nav[aria-label="Main"] [aria-current="true"] {
border-bottom: 2px solid #ffffff;
}
This is an incredibly powerful way to tie application state to user-facing state. Combine it with modern CSS like :has() and view transitions and you have the ability to create robust, sophisticated UI with less reliance on JavaScript.
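As a small, hypothetical sketch of that idea, :has() can style the navigation container itself based on the ARIA state of its children:

/* When any link inside the main nav is marked as current,
   give the nav itself a subtle visual treatment */
nav[aria-label="Main"]:has([aria-current="true"]) {
  background-color: rgba(255, 255, 255, 0.1);
}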
Tests are great. They help guarantee that the code you work on will continue to do what you intended it to do.
A lot of web UI-based testing will use the presence of classes (e.g., .is-expanded) or data attributes (e.g., data-expanded) to verify a UI's existence, position, and state. These types of selectors are also far more likely to change over time than semantic code and ARIA declarations.
This is something my coworker Cam McHenry touches on in his great post, “How I write accessible Playwright tests”. Consider this piece of Playwright code, which checks for the presence of a button that toggles open an edit menu:
// Selects an element with a role of `button`
// that has an accessible name of "Edit"
const editMenuButton = page.getByRole('button', { name: "Edit" });
// Requires the edit button to have a property
// of `aria-haspopup` with a value of `true`
await expect(editMenuButton).toHaveAttribute('aria-haspopup', 'true');
The test selects UI based on outcome rather than appearance. That’s a far more reliable way to target things in the long-term.
This all helps to create a virtuous feedback cycle. It enshrines semantic HTML and ARIA’s presence in your front-end UI code, which helps to guarantee accessible experiences don’t regress. Combining this with styling, you have a powerful, self-contained system for building robust, accessible experiences.
Web accessibility can be about enabling important things, like scheduling medical appointments. It is also about fun things, like chatting with your friends. And it covers every web experience that lives in between.
Using semantic HTML — supplemented with a judicious application of ARIA — helps you enable these experiences. To sum things up: ARIA has a small set of common, well-supported declarations; its actual support can only be determined by manual testing with assistive technology; and it comes with notable gotchas like aria-label, the ARIA Authoring Practices Guide, and macOS VoiceOver support.
Viewed one way, ARIA is arcane, full of misconceptions, and fraught with potential missteps. Viewed another, ARIA is a beautiful and elegant way to programmatically communicate the interactivity and state of a user interface.
I choose the second view. At the end of the day, using ARIA helps to ensure that disabled people can use a web experience the same way everyone else can.
Thank you to Adrian Roselli and Jan Maarten for their feedback.
Creating The “Moving Highlight” Navigation Bar With JavaScript And CSS
Blake Lundquist
2025-06-11T13:00:00+00:00
I recently came across an old jQuery tutorial demonstrating a “moving highlight” navigation bar and decided the concept was due for a modern upgrade. With this pattern, the border around the active navigation item animates directly from one element to another as the user clicks on menu items. In 2025, we have much better tools to manipulate the DOM via vanilla JavaScript. New features like the View Transition API make progressive enhancement more easily achievable and handle a lot of the animation minutiae.

In this tutorial, I will demonstrate two methods of creating the “moving highlight” navigation bar using plain JavaScript and CSS. The first example uses the getBoundingClientRect method to explicitly animate the border between navigation bar items when they are clicked. The second example achieves the same functionality using the new View Transition API.
Let’s assume that we have a single-page application where content changes without the page being reloaded. The starting HTML and CSS are your standard navigation bar with an additional div element containing an id of #highlight. We give the first navigation item a class of .active.
See the Pen [Moving Highlight Navbar Starting Markup [forked]](https://codepen.io/smashingmag/pen/EajQyBW) by Blake Lundquist.
For this version, we will position the #highlight element around the element with the .active class to create a border. We can utilize absolute positioning and animate the element across the navigation bar to create the desired effect. We’ll hide it off-screen initially by adding left: -200px and include transition styles for all properties so that any changes in the position and size of the element will happen gradually.
#highlight {
z-index: 0;
position: absolute;
height: 100%;
width: 100px;
left: -200px;
border: 2px solid green;
box-sizing: border-box;
transition: all 0.2s ease;
}
We want the highlight element to animate when a user changes the .active navigation item. Let’s add a click event handler to the nav element, then filter for events caused only by elements matching our desired selector. In this case, we only want to change the .active nav item if the user clicks on a link that does not already have the .active class.
Initially, we can call console.log to ensure the handler fires only when expected:
const navbar = document.querySelector('nav');
navbar.addEventListener('click', function (event) {
// return if the clicked element doesn't have the correct selector
if (!event.target.matches('nav a:not(.active)')) {
return;
}
console.log('click');
});
Open your browser console and try clicking different items in the navigation bar. You should only see "click" being logged when you select a new item in the navigation bar.
Now that we know our event handler is working on the correct elements, let's add code to move the .active class to the navigation item that was clicked. We can use the object passed into the event handler to find the element that initiated the event and give that element a class of .active after removing it from the previously active item.
const navbar = document.querySelector('nav');
navbar.addEventListener('click', function (event) {
// return if the clicked element doesn't have the correct selector
if (!event.target.matches('nav a:not(.active)')) {
return;
}
- console.log('click');
+ document.querySelector('nav a.active').classList.remove('active');
+ event.target.classList.add('active');
});
Our #highlight element needs to move across the navigation bar and position itself around the active item. Let’s write a function to calculate a new position and width. Since the #highlight selector has transition styles applied, it will move gradually when its position changes.
Using getBoundingClientRect, we can get information about the position and size of an element. We calculate the width of the active navigation item and its offset from the left boundary of the parent element. Then, we assign styles to the highlight element so that its size and position match.
// handler for moving the highlight
const moveHighlight = () => {
const activeNavItem = document.querySelector('a.active');
const highlighterElement = document.querySelector('#highlight');
const width = activeNavItem.offsetWidth;
const itemPos = activeNavItem.getBoundingClientRect();
const navbarPos = navbar.getBoundingClientRect();
const relativePosX = itemPos.left - navbarPos.left;
const styles = {
left: `${relativePosX}px`,
width: `${width}px`,
};
Object.assign(highlighterElement.style, styles);
}
Let’s call our new function when the click event fires:
navbar.addEventListener('click', function (event) {
// return if the clicked element doesn't have the correct selector
if (!event.target.matches('nav a:not(.active)')) {
return;
}
document.querySelector('nav a.active').classList.remove('active');
event.target.classList.add('active');
+ moveHighlight();
});
Finally, let’s also call the function immediately so that the border moves behind our initial active item when the page first loads:
// handler for moving the highlight
const moveHighlight = () => {
// ...
}
// display the highlight when the page loads
moveHighlight();
Now, the border moves across the navigation bar when a new item is selected. Try clicking the different navigation links to animate the navigation bar.
See the Pen [Moving Highlight Navbar [forked]](https://codepen.io/smashingmag/pen/WbvMxqV) by Blake Lundquist.
That only took a few lines of vanilla JavaScript and could easily be extended to account for other interactions, like mouseover events. In the next section, we will explore refactoring this feature using the View Transition API.
The View Transition API provides functionality to create animated transitions between website views. Under the hood, the API creates snapshots of “before” and “after” views and then handles transitioning between them. View transitions are useful for creating animations between documents, providing the native-app-like user experience featured in frameworks like Astro. However, the API also provides handlers meant for SPA-style applications. We will use it to reduce the JavaScript needed in our implementation and more easily create fallback functionality.
For this approach, we no longer need a separate #highlight element. Instead, we can style the .active navigation item directly using pseudo-selectors and let the View Transition API handle the animation between the before-and-after UI states when a new navigation item is clicked.
We’ll start by getting rid of the #highlight element and its associated CSS and replacing it with styles for the nav a::after pseudo-selector:
<nav>
- <div id="highlight"></div>
<a href="#" class="active">Home</a>
<a href="#services">Services</a>
<a href="#about">About</a>
<a href="#contact">Contact</a>
</nav>
- #highlight {
- z-index: 0;
- position: absolute;
- height: 100%;
- width: 0;
- left: 0;
- box-sizing: border-box;
- transition: all 0.2s ease;
- }
+ nav a::after {
+ content: " ";
+ position: absolute;
+ left: 0;
+ top: 0;
+ width: 100%;
+ height: 100%;
+ border: none;
+ box-sizing: border-box;
+ }
For the .active class, we include the view-transition-name property, thus unlocking the magic of the View Transition API. Once we trigger the view transition and change the location of the .active navigation item in the DOM, “before” and “after” snapshots will be taken, and the browser will animate the border across the bar. We’ll give our view transition the name of highlight, but we could theoretically give it any name.
nav a.active::after {
border: 2px solid green;
view-transition-name: highlight;
}
Once we have a selector that contains a view-transition-name property, the only remaining step is to trigger the transition using the startViewTransition method and pass in a callback function.
const navbar = document.querySelector('nav');
// Change the active nav item on click
navbar.addEventListener('click', async function (event) {
if (!event.target.matches('nav a:not(.active)')) {
return;
}
document.startViewTransition(() => {
document.querySelector('nav a.active').classList.remove('active');
event.target.classList.add('active');
});
});
Above is a revised version of the click handler. Instead of doing all the calculations for the size and position of the moving border ourselves, the View Transition API handles all of it for us. We only need to call document.startViewTransition and pass in a callback function to change the item that has the .active class!
At this point, when clicking on a navigation link, you’ll notice that the transition works, but some strange sizing issues are visible.

This sizing inconsistency is caused by aspect ratio changes during the course of the view transition. We won’t go into detail here, but Jake Archibald has a detailed explanation you can read for more information. In short, to ensure the height of the border stays uniform throughout the transition, we need to declare an explicit height for the ::view-transition-old and ::view-transition-new pseudo-selectors representing a static snapshot of the old and new view, respectively.
::view-transition-old(highlight) {
height: 100%;
}
::view-transition-new(highlight) {
height: 100%;
}
Let’s do some final refactoring to tidy up our code by moving the callback to a separate function and adding a fallback for when view transitions aren’t supported:
const navbar = document.querySelector('nav');
// change the item that has the .active class applied
const setActiveElement = (elem) => {
document.querySelector('nav a.active').classList.remove('active');
elem.classList.add('active');
}
// Start view transition and pass in a callback on click
navbar.addEventListener('click', async function (event) {
if (!event.target.matches('nav a:not(.active)')) {
return;
}
// Fallback for browsers that don't support View Transitions:
if (!document.startViewTransition) {
setActiveElement(event.target);
return;
}
document.startViewTransition(() => setActiveElement(event.target));
});
Here’s our view transition-powered navigation bar! Observe the smooth transition when you click on the different links.
See the Pen [Moving Highlight Navbar with View Transition [forked]](https://codepen.io/smashingmag/pen/ogXELKE) by Blake Lundquist.
Animations and transitions between website UI states used to require many kilobytes of external libraries, along with verbose, confusing, and error-prone code, but vanilla JavaScript and CSS have since incorporated features to achieve native-app-like interactions without breaking the bank. We demonstrated this by implementing the “moving highlight” navigation pattern using two approaches: CSS transitions combined with the getBoundingClientRect() method and the View Transition API.
Further reading: getBoundingClientRect() method documentation
Designing For Neurodiversity
Vitaly Friedman
2025-06-02T08:00:00+00:00
This article is sponsored by TetraLogical
Neurodivergent needs are often considered as an edge case that doesn’t fit into common user journeys or flows. Neurodiversity tends to get overlooked in the design process. Or it is tackled late in the process, and only if there is enough time.
But people aren’t edge cases. Every person is just a different person, performing tasks and navigating the web in a different way. So how can we design better, more inclusive experiences that cater to different needs and, ultimately, benefit everyone? Let’s take a closer look.

There is quite a bit of confusion about both terms on the web. Different people think and experience the world differently, and neurodiversity sees differences as natural variations, not deficits. It distinguishes between neurotypical and neurodivergent people.
According to various sources, around 15–40% of the population has neurodivergent traits. These traits can be innate (e.g., autism) or acquired (e.g., trauma). But they are always on a spectrum, and vary a lot. A person with autism is not neurodiverse — they are neurodivergent.
One of the main strengths of neurodivergent people is how imaginative and creative they are, coming up with out-of-the-box ideas quickly, along with exceptional levels of attention, strong long-term memory, a unique perspective, unbeatable accuracy, and a strong sense of justice and fairness.
Being different in a world that, to some degree, still doesn’t accept these differences is exhausting. So unsurprisingly, neurodivergent people often bring along determination, resilience, and high levels of empathy.
As a designer, I often see myself as a path-maker. I’m designing reliable paths for people to navigate to their goals comfortably. Without being blocked. Or confused. Or locked out.
That means respecting the simple fact that people's needs, tasks, and user journeys are all different, and that they evolve over time. Most importantly, it means considering them very early in the process.
Better accessibility is better for everyone. Instead of making decisions that need to be reverted or refined to be compliant, we can bring a diverse group of people — with accessibility needs, with neurodiversity, frequent and infrequent users, experts, newcomers — into the process, and design with them, rather than for them.
A wonderful resource that helps us design for cognitive accessibility is Stéphanie Walter’s Neurodiversity and UX toolkit. It includes practical guidelines, tools, and resources to better understand and design for dyslexia, dyscalculia, autism, and ADHD.

Another fantastic resource is Will Soward’s Neurodiversity Design System. It combines neurodiversity and user experience design into a set of design standards and principles that you can use to design accessible learning interfaces.
Last but not least, I've been putting together a few summaries about neurodiversity and inclusive design over the last few years, so you might find them helpful, too.
A huge thank-you to everyone who has been writing, speaking, and sharing articles, resources, and toolkits on designing for diversity. The topic is often forgotten and overlooked, but it has an incredible impact. 👏🏼👏🏽👏🏾
WCAG 3.0’s Proposed Scoring Model: A Shift In Accessibility Evaluation
Mikhail Prosmitskiy
2025-05-02T11:00:00+00:00
Since their introduction in 1999, the Web Content Accessibility Guidelines (WCAG) have shaped how we design and develop inclusive digital products. The WCAG 2.x series, released in 2008, introduced clear technical criteria judged in a binary way: either a success criterion is met or not. While this model has supported regulatory clarity and auditability, its “all-or-nothing” nature often fails to reflect the nuance of actual user experience (UX).
Over time, that disconnect between technical conformance and lived usability has become harder to ignore. People engage with digital systems in complex, often nonlinear ways: navigating multistep flows, dynamic content, and interactive states. In these scenarios, checking whether an element passes a rule doesn’t always answer the main question: can someone actually use it?
WCAG 3.0 is still in draft, but is evolving — and it represents a fundamental rethinking of how we evaluate accessibility. Rather than asking whether a requirement is technically met, it asks how well users with disabilities can complete meaningful tasks. Its new outcome-based model introduces a flexible scoring system that prioritizes usability over compliance, shifting focus toward the quality of access rather than the mere presence of features.
WCAG 3.0 was first introduced as a public working draft by the World Wide Web Consortium (W3C) Accessibility Guidelines Working Group in early 2021. The draft is still under active development and is not expected to reach W3C Recommendation status for several years, if not decades, by some accounts. This extended timeline reflects both the complexity of the task and the ambition behind it:
WCAG 3.0 isn’t just an update — it’s a paradigm shift.
Unlike WCAG 2.x, which focused primarily on web pages, WCAG 3.0 aims to cover a much broader ecosystem, including applications, tools, connected devices, and emerging interfaces like voice interaction and extended reality. It also rebrands itself as the W3C Accessibility Guidelines (while the WCAG acronym remains the same), signaling that accessibility is no longer a niche concern — it’s a baseline expectation across the digital world.
Importantly, WCAG 3.0 will not immediately replace 2.x. Both standards will coexist, and conformance to WCAG 2.2 will continue to be valid and necessary for some time, especially in legal and policy contexts.
This expansion isn’t just technical.
WCAG 3.0 reflects a deeper philosophical shift: accessibility is moving from a model of compliance toward a model of effectiveness.
Rules alone can’t capture whether a system truly works for someone. That’s why WCAG 3.0 leans into flexibility and future-proofing, aiming to support evolving technologies and real-world use over time. It formalizes a principle long understood by practitioners:
Inclusive design isn’t about passing a test; it’s about enabling people.
WCAG 2.x is structured around four foundational principles — Perceivable, Operable, Understandable, and Robust (aka POUR) — and testable success criteria organized into three conformance levels (A, AA, AAA). While technically precise, these criteria often emphasize implementation over impact.
WCAG 3.0 reorients this structure toward user needs and real outcomes. Its hierarchy is built on high-level guidelines, the user-centered outcomes beneath them, and the methods used to achieve those outcomes.
This shift is more than organizational. It reflects a deeper commitment to aligning technical implementation with UX. Outcomes speak the language of capability, which is about what users should be able to do (rather than just technical presence).
Crucially, outcomes are also where conformance scoring begins to take shape. For example, imagine a checkout flow on an e-commerce website. Under WCAG 2.x, if even one field in the checkout form lacks a label, the process may fail AA conformance entirely. However, under WCAG 3.0, that same flow might be evaluated across multiple outcomes (such as keyboard navigation, form labeling, focus management, and error handling), with each outcome receiving a separate score. If most areas score well but the error messaging is poor, the overall rating might be “Good” instead of “Excellent”, prompting targeted improvements without negating the entire flow’s accessibility.
Rather than relying on pass or fail outcomes, WCAG 3.0 introduces a scoring model that reflects how well accessibility is supported. This shift allows teams to recognize partial successes and prioritize real improvements.
Each outcome in WCAG 3.0 is evaluated through one or more atomic tests.
The result of these tests produces a score for each outcome, often normalized on a 0-4 or 0-5 scale, with labels like Poor, Fair, Good, and Excellent. These scores are then aggregated across functional categories (vision, mobility, cognition, etc.) and user flows.
This allows teams to measure progress, not just compliance. A product that improves from “Fair” to “Good” over time shows real evolution — a concept that doesn’t exist in WCAG 2.x.
To ensure that severity still matters, WCAG 3.0 introduces critical errors, which are high-impact accessibility failures that can override an otherwise positive score.
For example, consider a checkout flow. Under WCAG 2.x, a single missing label might cause the entire flow to fail conformance. WCAG 3.0, however, evaluates multiple outcomes — like form labeling, keyboard access, and error handling — each with its own score. Minor issues, such as unclear error messages or a missing label on an optional field, might lower the rating from “Excellent” to “Good”, without invalidating the entire experience.
But if a user cannot complete a core action, like submitting the form, making a purchase, or logging in, that constitutes a critical error. These failures directly block task completion and significantly reduce the overall score, regardless of how polished the rest of the experience is.
On the other hand, problems with non-essential features — like uploading a profile picture or changing a theme color — are considered lower-impact and won’t weigh as heavily in the evaluation.
In place of categorizing conformance in tiers of Level A, Level AA, and Level AAA, WCAG 3.0 proposes three different conformance tiers: Bronze, Silver, and Gold.
Unlike in WCAG 2.2, where Level AAA is often seen as aspirational and inconsistent, these levels are intended to incentivize progression. They can also be scoped in the sense that teams can claim conformance for a checkout flow, mobile app, or specific feature, allowing iterative improvement.
While WCAG 3.0 is still being developed, its direction is clear. That said, it's important to acknowledge that the guidelines are not expected to be finalized for several years. In the meantime, teams can prepare by weaving accessibility into their research, design, development, and testing processes now.
These practices won't just make your product more inclusive; they'll position your team to excel under WCAG 3.0.
Even though WCAG 3.0 presents a bold step toward more holistic accessibility, several structural risks deserve early attention, especially for organizations navigating regulation, scaling design systems, or building sustainable accessibility practices. Importantly, many of these risks are interconnected: challenges in one area may amplify issues in others.
The move from binary pass or fail criteria to scored evaluations introduces room for subjective interpretation. Without standardized calibration, the same user flow might receive different scores depending on the evaluator. This makes comparability and repeatability harder, particularly in procurement or multi-vendor environments. The same alternative text might be rated as “adequate” by one team and “unclear” by another.
That same subjectivity leads to a second concern: the erosion of clear compliance thresholds. Scored evaluations replace the binary clarity of “compliant” or “not” with a more flexible, but less definitive, outcome. This could complicate legal enforcement, contractual definitions, and audit reporting. In practice, a product might earn a “Good” rating while still presenting critical usability gaps for certain users, creating a disconnect between score and actual access.
As clarity around compliance blurs, so does alignment with existing legal frameworks. Many current laws explicitly reference WCAG 2.x and its A, AA, and AAA levels (e.g., Section 508 of the Rehabilitation Act of 1973, the European Accessibility Act, and The Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018).
Until WCAG 3.0 is formally mapped to those standards, its use in regulated contexts may introduce risk. Teams operating in healthcare, finance, or public sectors will likely need to maintain dual conformance strategies in the interim, increasing cost and complexity.
Perhaps most concerning, this ambiguity can set the stage for a “minimum viable accessibility” mindset. Scored models risk encouraging “Bronze is good enough” thinking, particularly in deadline-driven environments. A team might deprioritize improvements once they reach a passing grade, even if essential barriers remain.
For example, a mobile app with strong keyboard support but missing audio transcripts could still achieve a passing tier, leaving some users excluded.
WCAG 3.0 marks a new era in accessibility — one that better reflects the diversity and complexity of real users. By shifting from checklists to scored evaluations and from rigid technical compliance to practical usability, it encourages teams to prioritize real-world impact over theoretical perfection.
As one might say, “It’s not about the score. It’s about who can use the product.” In my own experience, I’ve seen teams pour hours into fixing minor color contrast issues while overlooking broken keyboard navigation, leaving screen reader users unable to complete essential tasks. WCAG 3.0’s focus on outcomes reminds us that accessibility is fundamentally about functionality and inclusion.
At the same time, WCAG 3.0’s proposed scoring models introduce new responsibilities. Without clear calibration, stronger enforcement patterns, and a cultural shift away from “good enough,” we risk losing the very clarity that made WCAG 2.x enforceable and actionable. The promise of flexibility only works if we use it to aim higher, not to settle earlier.
For teams across design, development, and product leadership, this shift is a chance to rethink what success means. Accessibility isn’t about ticking boxes — it’s about enabling people.
By preparing now, being mindful of the risks, and focusing on user outcomes, we don’t just get ahead of WCAG 3.0 — we build digital experiences that are truly usable, sustainable, and inclusive.
Building An Offline-Friendly Image Upload System
Amejimaobari Ollornwi
2025-04-23T10:00:00+00:00
So, you’re filling out an online form, and it asks you to upload a file. You click the input, select a file from your desktop, and are good to go. But something happens. The network drops, the file disappears, and you’re stuck having to re-upload the file. Poor network connectivity can lead you to spend an unreasonable amount of time trying to upload files successfully.
What ruins the user experience is having to constantly check network stability and retry the upload several times. While we may not be able to do much about network connectivity, as developers, we can always do something to ease the pain that comes with this problem.
One of the ways we can solve this problem is by tweaking image upload systems in a way that enables users to upload images offline — eliminating the need for a reliable network connection, and then having the system retry the upload process when the network becomes stable, without the user intervening.
This article is going to focus on explaining how to build an offline-friendly image upload system using PWA (progressive web application) technologies such as IndexedDB, service workers, and the Background Sync API. We will also briefly cover tips for improving the user experience for this system.
Here’s a flow chart for an offline-friendly image upload system.

As shown in the flow chart, the process unfolds as follows:
- The user selects an image to upload.
- The system checks for network connectivity: if the user is online, the image is uploaded right away; if not, the image is stored in IndexedDB and a background sync event is registered.
- With the image stored in IndexedDB, the system waits to detect when the network connection is restored to continue with the next step.
- Once the connection is restored, the service worker retrieves the image from IndexedDB, uploads it to the server, and then clears the IndexedDB store.
The first step in the system implementation is allowing the user to select their images. There are different ways you can achieve this: a drag-and-drop interface, or the native <input type="file"> element. I would advise that you use both. Some users prefer to use the drag-and-drop interface, while others think the only way to upload images is through the <input type="file"> element. Having both options will help improve the user experience. You can also consider allowing users to paste images directly in the browser using the Clipboard API.
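Here's a rough sketch of wiring up both options; the drop-zone markup and the handleFiles function are hypothetical placeholders:

<input type="file" id="file-input" accept="image/*">
<div id="drop-zone">Or drop images here</div>

const fileInput = document.querySelector('#file-input');
const dropZone = document.querySelector('#drop-zone');

// Handle files chosen through the native input
fileInput.addEventListener('change', () => handleFiles(fileInput.files));

// Prevent the browser from opening the file, then read what was dropped
dropZone.addEventListener('dragover', (event) => event.preventDefault());
dropZone.addEventListener('drop', (event) => {
  event.preventDefault();
  handleFiles(event.dataTransfer.files);
});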
At the heart of this solution is the service worker. Our service worker is going to be responsible for retrieving the image from the IndexedDB store, uploading it when the internet connection is restored, and clearing the IndexedDB store when the image has been uploaded.
To use a service worker, you first have to register one:
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register('/service-worker.js')
.then(reg => console.log('Service Worker registered', reg))
.catch(err => console.error('Service Worker registration failed', err));
}
Remember, the problem we are trying to solve is caused by unreliable network connectivity. If this problem does not exist, there is no point in trying to solve anything. Therefore, once the image is selected, we need to check if the user has a reliable internet connection before registering a sync event and storing the image in IndexedDB.
function uploadImage() {
if (navigator.onLine) {
// Upload Image
} else {
// register Sync Event
// Store Images in IndexedDB
}
}
Note: I’m only using the navigator.onLine property here to demonstrate how the system would work. The navigator.onLine property is unreliable, and I would suggest you come up with a custom solution to check whether the user is connected to the internet or not. One way you can do this is by sending a ping request to a server endpoint you’ve created.
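One hedged sketch of such a check, assuming you've created a lightweight /ping endpoint on your server:

async function isOnline() {
  try {
    // A HEAD request keeps the response small, and 'no-store'
    // ensures we actually hit the network instead of the cache
    const response = await fetch('/ping', { method: 'HEAD', cache: 'no-store' });
    return response.ok;
  } catch {
    // The request failing to complete suggests there's no connectivity
    return false;
  }
}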
Once the network test fails, the next step is to register a sync event. The sync event needs to be registered at the point where the system fails to upload the image due to a poor internet connection.
async function registerSyncEvent() {
if ('SyncManager' in window) {
const registration = await navigator.serviceWorker.ready;
await registration.sync.register('uploadImages');
console.log('Background Sync registered');
}
}
After registering the sync event, you need to listen for it in the service worker.
self.addEventListener('sync', (event) => {
if (event.tag === 'uploadImages') {
event.waitUntil(sendImages());
}
});
The sendImages function is going to be an asynchronous process that will retrieve the image from IndexedDB and upload it to the server. This is what it’s going to look like:
async function sendImages() {
try {
// await image retrieval and upload
} catch (error) {
// throw error
}
}
The first thing we need to do in order to store our image locally is to open an IndexedDB store. As you can see from the code below, we are creating a global variable to store the database instance. The reason for doing this is that, subsequently, when we want to retrieve our image from IndexedDB, we wouldn’t need to write the code to open the database again.
let database; // Global variable to store the database instance
function openDatabase() {
return new Promise((resolve, reject) => {
if (database) return resolve(database); // Return existing database instance
const request = indexedDB.open("myDatabase", 1);
request.onerror = (event) => {
console.error("Database error:", event.target.error);
reject(event.target.error); // Reject the promise on error
};
request.onupgradeneeded = (event) => {
const db = event.target.result;
// Create the "images" object store if it doesn't exist.
if (!db.objectStoreNames.contains("images")) {
db.createObjectStore("images", { keyPath: "id" });
}
console.log("Database setup complete.");
};
request.onsuccess = (event) => {
database = event.target.result; // Store the database instance globally
resolve(database); // Resolve the promise with the database instance
};
});
}
With the IndexedDB store open, we can now store our images.
Now, you may be wondering why an easier solution like localStorage wasn't used for this purpose. The reason is that IndexedDB operates asynchronously and doesn't block the main JavaScript thread, whereas localStorage runs synchronously and can block the main thread while it is being used.
Here’s how you can store the image in IndexedDB:
async function storeImages(file) {
// Open the IndexedDB database.
const db = await openDatabase();
// Create a transaction with read and write access.
const transaction = db.transaction("images", "readwrite");
// Access the "images" object store.
const store = transaction.objectStore("images");
// Define the image record to be stored.
const imageRecord = {
id: IMAGE_ID, // a unique ID
image: file // Store the image file (Blob)
};
// Add the image record to the store.
const addRequest = store.add(imageRecord);
// Handle successful addition.
addRequest.onsuccess = () => console.log("Image added successfully!");
// Handle errors during insertion.
addRequest.onerror = (e) => console.error("Error storing image:", e.target.error);
}
With the images stored and the background sync set, the system is ready to upload the image whenever the network connection is restored.
Once the network connection is restored, the sync event will fire, and the service worker will retrieve the image from IndexedDB and upload it.
async function retrieveAndUploadImage(IMAGE_ID) {
try {
const db = await openDatabase(); // Ensure the database is open
const transaction = db.transaction("images", "readonly");
const store = transaction.objectStore("images");
const request = store.get(IMAGE_ID);
request.onsuccess = function (event) {
const image = event.target.result;
if (image) {
// upload Image to server here
} else {
console.log("No image found with ID:", IMAGE_ID);
}
};
request.onerror = () => {
console.error("Error retrieving image.");
};
} catch (error) {
console.error("Failed to open database:", error);
}
}
Once the image has been uploaded, the IndexedDB store is no longer needed. Therefore, it should be deleted along with its content to free up storage.
function deleteDatabase() {
// Check if there's an open connection to the database.
if (database) {
database.close(); // Close the database connection
console.log("Database connection closed.");
}
// Request to delete the database named "myDatabase".
const deleteRequest = indexedDB.deleteDatabase("myDatabase");
// Handle successful deletion of the database.
deleteRequest.onsuccess = function () {
console.log("Database deleted successfully!");
};
// Handle errors that occur during the deletion process.
deleteRequest.onerror = function (event) {
console.error("Error deleting database:", event.target.error);
};
// Handle cases where the deletion is blocked (e.g., if there are still open connections).
deleteRequest.onblocked = function () {
console.warn("Database deletion blocked. Close open connections and try again.");
};
}
With that, the entire process is complete!
While we’ve done a lot to help improve the experience by supporting offline uploads, the system is not without its limitations. I figured I would specifically call those out because it’s worth knowing where this solution might fall short of your needs.
IndexedDB Storage Policies

Browsers impose storage policies on data kept in IndexedDB. For instance, in Safari, data stored in IndexedDB has a lifespan of seven days if the user doesn't interact with the website. This is something you should bear in mind if you come up with an alternative to the Background Sync API that supports Safari.

Since the entire process happens in the background, we need a way to inform users when images are stored, waiting to be uploaded, or have been successfully uploaded. Implementing certain UI elements for this purpose will enhance the experience. These UI elements may include toast notifications, upload status indicators like spinners (to show active processes), progress bars (to show state progress), network status indicators, or buttons that provide retry and cancel options.
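As one small, hypothetical example, a network status indicator can listen to the browser's online and offline events; the #network-status element is a placeholder, and the same reliability caveats as navigator.onLine apply:

const statusElement = document.querySelector('#network-status');

function updateNetworkStatus() {
  statusElement.textContent = navigator.onLine
    ? 'Online: uploads will be sent'
    : 'Offline: uploads will be queued';
}

// Update the indicator whenever connectivity appears to change
window.addEventListener('online', updateNetworkStatus);
window.addEventListener('offline', updateNetworkStatus);
updateNetworkStatus(); // Set the initial state on page load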
Poor internet connectivity can disrupt the user experience of a web application. However, by leveraging PWA technologies such as IndexedDB, service workers, and the Background Sync API, developers can help improve the reliability of web applications for their users, especially those in areas with unreliable internet connectivity.
What Does It Really Mean For A Site To Be Keyboard Navigable
Eleanor Hecks
2025-04-18T13:00:00+00:00
Efficient navigation is vital for a functional website, but not everyone uses the internet the same way. While most visitors either scroll on mobile or click through with a mouse, many people only use their keyboards. Up to 10 million American adults have carpal tunnel syndrome, which may cause pain when holding a mouse, and vision problems can make it difficult to follow a cursor. Consequently, you should keep your site keyboard navigable to achieve universal appeal and accessibility.
Keyboard navigation allows users to engage with your website solely through keyboard input. That includes using shortcuts and selecting elements with the Tab and Enter keys.
There are more than 500 keyboard shortcuts among operating systems and specific apps your audience may use. Standard ones for web navigation include Ctrl + F to find words or resources, Shift + Arrow to select text, and Ctrl + Tab to move between browser tabs. While these are largely the responsibilities of the software companies behind the specific browser or OS, you should still consider them.
Single-button navigation is another vital piece of keyboard navigability. Users may move between clickable items with the Tab and Shift keys, use the Arrow keys to scroll, press Enter or Space to “click” a link, and exit pop-ups with Esc.

The Washington Post homepage goes further. Pressing Tab highlights clickable elements as it should, but the very first press brings up a link to the site's accessibility statement. Users can navigate past this, but including it highlights how the design treats keyboard navigability as a matter of accessibility.
You should understand how people may use these controls so you can build a site that facilitates them. These navigation options are generally standard, so any deviation or lack of functionality will stand out. Ensuring keyboard navigability, especially in terms of enabling these specific shortcuts and controls, will help you meet such expectations and avoid turning users away.
Keyboard navigability is crucial for a few reasons. Most notably, it makes your site more accessible. In the U.S. alone, over one in four people have a disability, and many such conditions affect technology use. For instance, motor impairments make it challenging for someone to use a standard mouse, and users with vision problems typically require keyboard and screen reader use.
Beyond accounting for various usage needs, enabling a wider range of control methods makes a site convenient. Using a keyboard rather than a mouse is faster when it works as it should and may feel more comfortable. Considering how workers spend nearly a third of their workweek looking for information, any obstacles to efficiency can be highly disruptive.
Falling short in these areas may lead to legal complications. Regulations like the Americans with Disabilities Act necessitate tech accessibility. While the ADA has no binding rules for what constitutes an accessible website, it specifically mentions keyboard navigation in its nonbinding guidance. Failing to support such functionality does not necessarily mean you’ll face legal penalties, but courts can use these standards to inform their decision on whether your site is reasonably accessible.
In 2023, Kitchenaid faced a class-action lawsuit for failing to meet such standards. Plaintiffs alleged that the company’s site didn’t support alt text or keyboard navigation, making it inaccessible to users with visual impairments. While the case ultimately settled out of court, it’s a reminder of the potential legal and financial repercussions of overlooking inclusivity.
Outside the law, an inaccessible site presents ethical concerns, as it shows preferential treatment for those who can use a mouse, even if that’s unintentional. Even without legal action, public recognition of this bias may lead to a drop in visitors and a tainted public image.
Thankfully, ensuring keyboard navigability is a straightforward user experience design practice. Because navigation is standard across OSes and browsers, keyboard-accessible sites employ a few consistent elements.
Web Accessibility In Mind states that sites must provide a visual indicator of elements currently in focus when users press Tab. Focus indicators are typically a simple box around the highlighted icon.
These are standard in CSS, but some designers hide them, so avoid using outline: 0 or outline: none, which suppress their visibility. You can also increase the contrast or change the indicator's color in CSS.

The CNN Breaking News homepage is a good example of a strong focus indicator. Pressing Tab immediately brings up the box, which is bold enough to see easily and even uses a white border when necessary to stand out against black or dark-colored site elements.
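In CSS, a custom indicator along these lines is a common approach. This is only a sketch, with the color chosen purely for illustration:

/* Show a high-contrast indicator for keyboard focus
   instead of suppressing the browser default */
a:focus-visible,
button:focus-visible {
  outline: 3px solid #ffbf47;
  outline-offset: 2px;
}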
The order in which the focus indicator moves between elements also matters. Generally speaking, pressing the Tab key should move it from left to right and top to bottom — the same way people read in English.
A few errors can stand in the way. Disabled buttons disrupt keyboard navigation flow by skipping an element with no explanation or highlighting it without making it clickable. Similarly, an interface where icons don’t fall in a predictable left-to-right, top-to-bottom order will make logical tab movement difficult.

The Sutton Maddock Vehicle Rental site is a good example of what not to do. When you press Tab, the focus indicator jumps from “Contact” to the Facebook link before going backward to the Twitter link. It starts at the right and moves left when it goes to the next line — the opposite order of what feels natural.
Skip links are also essential. These interactive elements let keyboard users jump to specific content without repeated keystrokes. Remember, these skips must be one of the first areas highlighted when you press Tab so they work as intended.

The HSBC Group homepage has a few skip navigation links. Pressing Tab pulls up three options, letting users quickly jump to whichever part of the site interests them.
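A minimal skip link might look like the following sketch, where the link is the first focusable element on the page and stays visually hidden until it receives keyboard focus:

<body>
<a class="skip-link" href="#main-content">Skip to main content</a>
<!-- header and navigation here -->
<main id="main-content">
<!-- page content -->
</main>
</body>

/* Keep the link off-screen until a keyboard user focuses it */
.skip-link {
  position: absolute;
  left: -999px;
}
.skip-link:focus {
  left: 0;
}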
Finally, all interactive elements on a keyboard-navigable site should be accessible via keystrokes. Anything people can click on or drag with a cursor should also support navigation and interaction. Enabling this is as simple as letting users select all items with the Tab or Arrow keys and press them with Space or Enter.

Appropriately, this Arizona State University page on keyboard accessibility showcases this concept well. All drop-down menus are possible to open by navigating to them via Tab and pressing Enter, so users don’t need a mouse to interact with them.
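Native elements give you this behavior for free, which is why they're the safest choice; a custom control has to recreate it by hand. In this hypothetical comparison, openMenu stands in for whatever your control actually does:

<!-- A native button is focusable and activates
     with both Enter and Space automatically -->
<button type="button">Menu</button>

<!-- A div needs a role, a tabindex, and its own key handling -->
<div role="button" tabindex="0" id="menu-toggle">Menu</div>

const toggle = document.querySelector('#menu-toggle');
toggle.addEventListener('click', () => openMenu());
toggle.addEventListener('keydown', (event) => {
  if (event.key === 'Enter' || event.key === ' ') {
    event.preventDefault(); // Stop Space from scrolling the page
    openMenu();
  }
});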
After designing a keyboard-accessible UX, you should test it to ensure that it works properly. The easiest way to do this is to explore the site solely with your keyboard. The chart below outlines the criteria to look for when determining whether your site is legitimately keyboard navigable.
| | Keyboard Navigable | Not Keyboard Navigable |
|---|---|---|
| Clickable Elements | All elements are reachable through the keyboard and open when you press Enter. | Only some elements are possible to reach through the keyboard. Some links may be broken or not open when you press Enter. |
| Focus Indicators | Pressing Tab, Space, or Enter brings up a focus indicator that is easy to see in all browsers. | Focus indicators may not appear when pressing all buttons. The box may be hard to see or only appear in some browsers. |
| Skip Navigation Links | Pressing Tab for the first time pulls up at least one skip link to take users to much-visited content or menus. Continuing to press Tab moves the focus indicator past these links to highlight elements on the page as normal. | No skip links appear when pressing Tab for the first time. Alternatively, they appear after moving through all other elements. Skip links may not be functional. |
| Screen Reader Support | Screen readers can read each element when highlighted with the focus indicator. | Some elements may not encourage any action from screen readers when highlighted. |
The Web Content Accessibility Guidelines outline two test rules to verify keyboard navigability. Employ both standards to review your UX before making a site live.
Typical issues include the inability to highlight elements with the Tab key or things that don’t fall in a natural order. You can discover both problems by trying to access everything with your keyboard. However, you may prefer to conduct a navigability audit through a third party. Many private companies offer these services, but you can also use the Bureau of Internet Accessibility for a basic WCAG audit.
Keyboard navigability ensures you cater to all needs and preferences for an inclusive, accessible website design. While it’s straightforward to implement, it’s also easy to miss, so remember these principles when designing your UX and testing your site.
WCAG provides several techniques you can employ to meet keyboard accessibility standards and enhance your users' experience. Follow these guidelines and use WCAG's test rules to create an accessible site. Remember to re-check it every time you add elements or change your UX.
Additionally, there are plenty of recommended reads that cover keyboards and their role in accessibility in more depth.
User-friendliness is an industry best practice that demonstrates your commitment to inclusivity for all. Even users without disabilities will appreciate intuitive, efficient keyboard navigation.
Fostering An Accessibility Culture
Daniel Devesa Derksen-Staats, 2025-04-17
A year ago, I learned that my role as an accessibility engineer was at risk of redundancy. It was a tough moment, both professionally and personally. For quite some time, my mind raced with guilt, self-doubt, plain sadness… But as I sat with these emotions, I found one line of thought that felt productive: reflection. What did I do well? What could I have done better? What did I learn?
Looking back, I realized that as part of a small team in a massive organization, we focused on a long-term goal that we also believed was the most effective and sustainable path: gradually shaping the organization’s culture to embrace accessibility.
Around the same time, I started listening to “Atomic Habits” by James Clear. The connection was immediate. Habits and culture are tightly linked concepts, and fostering an accessibility culture was really about embedding accessibility habits into everyone’s processes. That’s what we focused on. It took us time (and plenty of trial and error) to figure this out, and while there’s no definitive playbook for creating an accessibility program at a large organization, I thought it might help others if I shared my experiences.
Before we dive in, here’s a quick note: This is purely my personal perspective, and you’ll find a bias towards culture and action in big organizations. I’m not speaking on behalf of any employer, past or present. The progress we made was thanks to the incredible efforts of every member of the team and beyond. I hope these reflections resonate with those looking to foster an accessibility culture at their own companies.
To effectively shape habits, it’s crucial to focus on systems and processes (who we want to become) rather than obsessing over a final goal (or what we want to achieve). This perspective is especially relevant in accessibility.
Take the goal of making your app accessible. If you focus solely on achieving compliance without changing your systems (embedding accessibility into processes and culture), progress will be temporary.
For example, you might request an accessibility audit and fix the flagged issues to achieve compliance. While this can provide “quick” results, it’s often a short-lived solution.
Software evolves constantly: features are rewritten, old code is removed, and new functionality is added. Without an underlying system in place, accessibility issues can quickly resurface. Worse, this approach may reinforce the idea that accessibility is something external, checked by someone else, and fixed only when flagged. Not to mention that it becomes increasingly expensive the later accessibility issues are addressed in the process. It can also feel demoralizing when accessibility becomes synonymous with a long list of last-minute tickets when you are busiest.

Despite this, companies constantly focus on the goal rather than the systems.
“Accessibility is both a state and a practice.”
— Sommer Panage, SwiftTO talk, “Building Accessibility into Your Company, Team, and Culture”
I’ll take the liberty of tweaking that to an aspirational state. Without recognizing the importance of the practice, any progress made is at risk of regression.
Instead, I encourage organizations to focus on building habits and embedding good accessibility practices into their workflows. A strong system not only ensures lasting progress but also fosters a culture where accessibility becomes second nature.
That doesn’t mean goals are useless — they’re very effective in setting up direction.
In my team, we often said (only half-jokingly) that our ultimate goal was to put ourselves out of a job. This mindset reflects an important principle: accessibility is a cross-organizational responsibility, not the task of a single person or team.
That’s why, in my opinion, focusing solely on compliance rather than culture transformation (or prioritizing the “state” of accessibility over the “practice”) is a flawed strategy.
The real goal should be to build a user-centric culture where accessibility is embedded in every workflow, decision, and process. By doing so, companies can create products where accessibility is not about checking boxes and closing tickets but delivering meaningful and inclusive experiences to all users.
Different companies (of various sizes, structures, and cultures) will approach accessibility differently, depending on where they are in their journey. I have yet to meet, though, an accessibility team that felt it had enough resources. This makes careful resource allocation a cornerstone of your strategy. And while there’s no one-size-fits-all solution, shifting left (addressing issues earlier in the development process) tends to be the most effective approach in most cases.
If your company has a design system, partnering with the team that owns it can be one of your biggest wins. Fixing a single component used across dozens of places improves the experience everywhere it’s used. This approach scales beautifully.
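As a concrete illustration (a hypothetical component, not any particular design system), a fix as small as giving a shared icon button an accessible name propagates to every screen that uses it:

```html
<!-- Before: screen readers have nothing useful to announce for this shared component. -->
<button class="icon-button"><svg aria-hidden="true">…</svg></button>

<!-- After: one change in the design system improves every place it's used. -->
<button class="icon-button" aria-label="Add to favorites">
  <svg aria-hidden="true">…</svg>
</button>
```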
Involvement in foundational decisions and discussions, like choosing color palettes, typography, and component interactions, can also be very valuable. Contributing to accessibility-focused documentation and guidelines can help teams across the organization make informed decisions.
For a deeper dive, I recommend Feli Bernutz’s excellent talk, “Designing APIs: How to Ensure Accessibility in Design Systems.”
It is worth repeating: you’ll need as many allies as possible. The more limited your resources, the more important this becomes. Something as simple as a Slack channel that becomes a safe space where people can ask questions and share tips can go a long way. Other ideas include lunch-and-learns, regular meetups, office hours, or building a more formal champions network. And, very importantly, find ways of recognizing and celebrating wins and everyone’s good work.
If you’re exploring this, I highly recommend joining the Champions of Accessibility Network (CAN) group. It’s a great way to learn and connect with others who are passionate about accessibility.
Education is key to scaling accessibility efforts. While not everyone needs to be an expert, we should strive for everyone to know the basics. Repeatedly raising elementary issues, like missing accessibility labels, small target sizes, or poor color contrast, isn’t productive.
Consider periodic training for different roles (PMs, designers, engineers…), embedding accessibility into onboarding sessions and documentation. You’ll need to find what works for you.
At Spotify, I found onboarding sessions for designers highly effective, as most features start with design. A Deque case study found that 67% of automatically detectable accessibility issues originate in design, reinforcing the importance of this approach. If your company has an education or training program, partner with them. At Spotify, they were our biggest allies. They’ll help you get it right.
Everything that can be automated should eventually be automated. We know there’s already a lot on your plate, and automation should help lighten the load. This is especially true in larger organizations, where it can help scale efforts more efficiently. However, automated accessibility checks are not the silver bullet some might hope for.
One key issue is viewing automation as the solution rather than a safety net. Some companies claim automated tools catch as much as 57% of all issues or even 80% of issues by volume (PDF), though it is widely accepted that the figure is about 30%. Native mobile apps present greater challenges, making it likely that the real number is significantly lower for iOS and Android. These tools, and the high expectations around them, can create a false sense of security or reduce efforts to merely appease an automated tool of choice.
Automation doesn’t (and shouldn’t) replace intentionality. We should aim to deliver great accessible experiences from the start rather than wait for a tool to flag issues after the fact.
Whether your focus is on compliance or customer satisfaction, manual testing remains an essential part of the process. Whenever possible, you should also be testing with real users.
For me, the greatest value of automation is in catching basic regressions before release and serving as a gentle nudge to developers, reminding them to consider accessibility more thoughtfully. Ideally, they don’t just fix an issue and move on but take a moment to reflect on why it slipped through and how to avoid it next time.
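As one way to wire that safety net into a JavaScript codebase, the open-source jest-axe matcher wraps the axe-core engine in an ordinary unit test. A minimal sketch, assuming a Jest + jsdom setup (the markup under test is hypothetical):

```js
const { axe, toHaveNoViolations } = require('jest-axe');

expect.extend(toHaveNoViolations);

it('has no automatically detectable accessibility violations', async () => {
  // Hypothetical markup under test; in a real suite this would come
  // from rendering an actual component.
  document.body.innerHTML = `
    <main>
      <img src="logo.png" alt="Company logo" />
      <button>Save</button>
    </main>
  `;

  const results = await axe(document.body);
  expect(results).toHaveNoViolations();
});
```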
When it comes to shaping habits, the environment matters. A strong accessibility culture isn’t built on willpower alone; it thrives on systems that encourage good practices and make bad ones harder to fall into. Nudges like automated checks, documentation, and proactive education are invaluable for keeping accessibility top of mind.
I won’t lie; the moment I was first told my new job was to work on accessibility, I immediately jumped in, doing what I knew best, trying to fix as many issues as possible myself. While rewarding at first, this approach isn’t scalable in larger organizations. It can quickly lead to burnout. It also sets an expectation within the company that it’s your team’s responsibility to get it done, an expectation that becomes increasingly difficult to reset as time goes on.
That’s not to say you shouldn’t be hands-on, though! But you need to be strategic. Try to focus on supporting teams with complex issues, pairing with colleagues, reviewing code, or implementing cross-app improvements, ideally in partnership with the design system teams. This way, your efforts can have a broader impact.
Accessibility audits are another tool in your toolbox. Audits can be valuable but are often overused. They’re most effective after teams have done their best to make the product accessible, serving as a validation step rather than the starting point. After all, how useful is an audit if a significant portion of the flagged issues are basic problems that automated tools could have detected?
Alternatively, audits can help when you need quick results and don’t have the time or resources to upskill your workforce before a necessary remediation.
While audits have their place and, as mentioned, can be valuable in certain situations, I wouldn’t rely on them to be the cornerstone of your strategy.
Try to find what works for your team and, most importantly, adapt as circumstances change. Beyond the strategies mentioned, you might explore other initiatives as well.
That doesn’t mean one area of action is more important than another. In my view, one of the biggest reasons cultural change around accessibility takes longer than in other areas is the lack of diversity in the workforce, and the ways to contribute there may be less immediately obvious than other lines of action.
The industry hasn’t done enough to hire people with disabilities, leaving them underrepresented in building the products that are meant to work for them. Worse yet, they face more barriers in the hiring process. And even when they do get hired, they may find that the tools meant to let them do their work and be productive don’t work for them.
The key is to identify and lay out your areas of action first, then prioritize strategically while staying flexible as circumstances evolve. A thoughtful, adaptive approach ensures that no matter the challenge, your efforts remain impactful, avoiding stretching your team too thin and losing focus.
Here’s the truth that everyone working in accessibility inevitably and unfortunately faces sooner rather than later: accessibility done right, as we’ve seen so far, takes time. And that goes against the “move fast and break things” culture of quick results and short-termism that many companies still follow, even if they won’t openly admit it.
The slow-cooking nature of the process can, therefore, work against us. Being patient and trusting that small changes will accumulate and compound over time is incredibly challenging and sometimes nerve-racking. On top of that, if there’s a misalignment with leadership about the ultimate goal, or pressure to deliver quick results, it’s easy to feel like throwing in the towel, or worse, to experience burnout.
Unfortunately, burnout is an all-too-common issue in the accessibility community.
If you’d like to learn more about it, I highly recommend Shell Little’s talk, “The Accessibility to Burnout Pipeline.”
In those moments of doubt, it is useful to remember the quote embraced by the San Antonio Spurs NBA team, originally from social reformer Jacob Riis:
“When nothing seems to help, I go and look at a stonecutter hammering away at his rock perhaps a hundred times without as much as a crack showing in it. Yet at the hundred and first blow it will split in two, and I know it was not that blow that did it — but all that had gone before.”
— Jacob Riis
This serves as a powerful reminder that every small effort contributes to the eventual breakthrough, even when progress feels invisible.
Top-down approaches are easier, and yet most accessibility initiatives start from the bottom. For a sustainable strategy, however, you’ll need both: sooner or later, you’ll have to get buy-in from leadership or risk feeling like you’re constantly swimming upstream. Surprisingly, this is often harder than it seems. This topic could easily be an article on its own, but Vitaly Friedman offers some useful pointers in his piece “How To Make A Strong Case For Accessibility.”
In my experience, leadership buy-in is crucial to fostering an accessibility culture. Leaders often want to see how accessibility impacts the bottom line and whether investing in it is profitable. The hardest part is getting started, so if you can make a convincing case this way, do it.
I once watched a talk by Dave Dame titled “Stakeholders Agree That Accessibility Is Important, But That Does Not Mean They Will Invest In Accessibility.” He made an excellent point: You may need to speak the business language to get their attention. As Dave put it, “I have Cerebral Palsy, but my money doesn’t.”
There is also data out there suggesting that accessibility can be a worthwhile investment.

Still, I would encourage everyone to strive to change that mindset.
Doing accessibility for economic or legal reasons is valid, but it can lead to perverse incentives, where the bare minimum and compliance become the strategy, or where teams constantly need to prove their return on investment.
It is better to do it for the “wrong” reasons than not to do it at all. But ultimately, those aren’t the reasons we should be doing it.
The “13 Letters” podcast opened with an incredibly interesting two-part episode featuring Mike Shebanek. In it, Mike explains how Apple eventually renewed its commitment to accessibility because, in the state of Maine, schools were providing Macs and needed a screen reader for students who required one. It seems like a somewhat business-driven decision. But years later, Tim Cook famously stated, “When we work on making our devices accessible by the blind, I don’t consider the bloody ROI.” He also remarked, “Accessibility rights are human rights.”
That’s the mindset I wish more CEOs and leaders had. It is a story of how a change of mindset from “we have to do it” to “it is a core part of what we do” leads to a lasting and successful accessibility culture. Going beyond the bare minimum, Apple has become a leader in accessibility. An innovative company that consistently makes products more accessible and pushes the entire industry forward.
Once good habits are established, they tend to stick around. When I was let go, some people (I’m sure trying to comfort me) said the accessibility of the app would quickly regress and that the company would soon realize their mistake. Unexpectedly for them, I responded that I actually hoped it wouldn’t regress anytime soon. That, to me, would be the sign that I had done my job well.
And honestly, I felt confident it wouldn’t. Incredible people with deep knowledge and a passion for accessibility and building high-quality products stayed at the company. I knew the app was in good hands.
But it’s important not to fall into complacency. Cultures can be taken for granted, but they need constant nurturing and protection. A company that hires too fast, undergoes a major layoff, gets acquired, experiences high turnover, or sees changes in leadership or priorities… Any of these can pretty quickly destabilize something that took years to build.
This might not be your experience, and what we did may not work for you, but I hope you find this insight useful. I have, as they say, strong opinions, but loosely held. So I’m looking forward to knowing what you think and learning about your experiences too.
There’s no easy way or silver bullet! It’s actually very hard! The odds are against you. And we tend to constantly be puzzled about why the world is against us doing something that seems so obviously the right thing to do: to invite and include as many people as possible to use your product, to remove barriers, to avoid exclusion. It is important to talk about exclusion, too, when we talk about accessibility.
“Even though we were all talking about inclusion, we each had a different understanding of that word. Exclusion, on the other hand, is unanimously understood as being left out (…) Once we learn how to recognize exclusion, we can begin to see where a product or experience that works well for some might have barriers for someone else. Recognizing exclusion sparks a new kind of creativity on how a solution can be better.”
Something that might help: always assume goodwill and try to meet people where they are. I need to remind myself of this quite often.
“It is all about understanding where people are, meeting them where they’re at (…) People want to fundamentally do the right thing (…) They might not know what they don’t know (…) It might mean stepping back and going to the fundamentals (…) I know some people get frustrated about having to re-explain accessibility over and over again, but I believe that if we are not willing to do that, then how are we gonna change the hearts and minds of people?”
I’d encourage you to do whatever is within your reach.
But honestly, anything you can do is progress. And progress is all we need, just for things to be a little better every day. Your job is incredibly important. Thanks for all you do!
Accessibility: This is the way!
Inclusive Dark Mode: Designing Accessible Dark Themes For All Users
Alex Williams, 2025-04-15
Dark mode, a beloved feature of modern digital interfaces, offers a striking visual alternative to the light themes that have dominated our screens for decades.
However, its design often misses the mark on an important element — accessibility. For users with visual impairments or sensitivities, dark mode can introduce significant challenges if not thoughtfully implemented.
Hence, designing themes with these users in mind can improve user comfort in low-light settings while creating a more equitable digital experience for everyone. Let’s take a look at exactly how this can be done.
Dark mode can offer tangible accessibility benefits when implemented with care. For many users, especially those who experience light sensitivity, a well-calibrated dark theme can reduce eye strain and provide a more comfortable reading experience. In low-light settings, the softer background tones and reduced glare may help lessen fatigue and improve visual focus.
However, these benefits are not universal. For some users, particularly those with conditions such as astigmatism or low contrast sensitivity, dark mode can actually compromise readability. Light text on a dark background may lead to blurred edges or halo effects around characters, making it harder to distinguish content.
When you’re designing, contrast isn’t just another design element; it’s a key player in dark mode’s overall readability and accessibility. A well-designed dark mode with the right contrast can also enhance user engagement, creating a more immersive experience that draws users into the content.

First, a well-executed dark mode can lower your bounce rate (by as much as 70%, according to one case study from Brazil). Longer, more engaged visits in turn send positive signals to Google that can reinforce your rankings in organic search results.
How is this possible? Darker tones can hold attention longer, especially in low-light settings, leading to higher interaction rates while making your design more accessible. The point is, without proper contrast, even the sleekest dark mode design becomes difficult to navigate and uncomfortable to use.
Instead of using pure black backgrounds, which can cause eye strain and make text harder to read, opt for dark grays. These softer tones help reduce harsh contrast and provide a modern look.
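In CSS terms, that can be as simple as basing the dark theme on a dark gray surface rather than `#000` (the specific values below are illustrative; `#121212` is a common convention, not a requirement):

```css
:root {
  color-scheme: light dark; /* tell the browser both themes are supported */
}

@media (prefers-color-scheme: dark) {
  body {
    background-color: #121212; /* dark gray instead of pure black */
    color: #e0e0e0;            /* off-white text softens the contrast */
  }
}
```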
However, it’s important to note that color adjustments alone don’t solve technical challenges like anti-aliasing. In dark mode, anti-aliasing can produce halo effects, where the edges of text appear blurred or overly luminous. To mitigate these issues, designers should test their interfaces on various devices and browsers and consider CSS properties that improve text clarity.
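Which properties help depends on the platform, but two non-standard, widely supported ones are often experimented with for halo effects on macOS (a sketch, not a guaranteed fix):

```css
@media (prefers-color-scheme: dark) {
  body {
    -webkit-font-smoothing: antialiased;  /* WebKit/Blink on macOS */
    -moz-osx-font-smoothing: grayscale;   /* Firefox on macOS */
  }
}
```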
Real-world user testing, especially with individuals who have visual impairments, is essential to fine-tune these details and ensure an accessible experience for all users.
For individuals with low vision or color blindness, the right contrast can mean the difference between a frustrating and a seamless user experience. To keep your dark mode design at its best, a few additional adjustments are worth making.
These simple adjustments make a big difference in creating a dark mode that everyone can use comfortably.
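One of those adjustments, adequate contrast, can even be checked programmatically: WCAG 2.x defines contrast ratio in terms of relative luminance, so a short JavaScript sketch can flag risky color pairs before they ship:

```js
// WCAG 2.x relative luminance of an sRGB color (channels 0-255).
function relativeLuminance([r, g, b]) {
  const linearize = (channel) => {
    const s = channel / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio: (lighter + 0.05) / (darker + 0.05), ranging from 1:1 to 21:1.
function contrastRatio(colorA, colorB) {
  const [lighter, darker] = [relativeLuminance(colorA), relativeLuminance(colorB)]
    .sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// Off-white text (#e0e0e0) on dark gray (#121212):
console.log(contrastRatio([224, 224, 224], [18, 18, 18]).toFixed(1)); // ≈ 14.2
// Comfortably above the 4.5:1 minimum WCAG AA requires for normal text.
```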
While dark themes provide a sleek and visually appealing interface, some features still require lighter colors to remain functional and readable.
Certain interactive elements, like buttons or form fields, need to be easily distinguishable, especially when transactions or personal information are involved. Simply put, no one wants to hunt for the right field when signing documents digitally, nor complete a transaction with friction in the way.
In addition to human readability, machine readability matters just as much in an age of increasing automation. Machine readability refers to how effectively computers and bots can extract and process data from an interface without human intervention. It matters for virtually any interface with automation built into its workflows: machine learning, for example, relies on accurate, high-quality data and on effective interaction between modules and systems, which makes machine readability critical.
You can help ensure your dark mode interface is machine-readable by using semantic HTML elements (`<header>`, `<nav>`, `<main>`, and `<footer>`) and ARIA roles. When your code is organized this way, machines can read and understand your page better, regardless of whether it’s in dark or light mode. Making sure that data, especially in automated systems, is clear and accessible prevents functionality breakdowns and guarantees seamless workflows.
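A skeletal example of that structure (the headings and labels are placeholders):

```html
<header>
  <nav aria-label="Primary">…</nav>
</header>

<main>
  <section aria-labelledby="pricing-heading">
    <h2 id="pricing-heading">Pricing</h2>
    …
  </section>
</main>

<footer>…</footer>
```

Because the meaning lives in the markup rather than in the colors, the page reads identically to a machine in dark or light mode.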
Although we associate visual accessibility with visual impairments, it’s actually meant for everyone. Easier access is something we all strive for, right? But more than anything, practicality is what matters, and the strategies below fit that description to a T.
Contrast is the backbone of dark mode design. Without proper implementation, elements blend together, creating a frustrating user experience. Instead of treating contrast as just a relationship between colors, view it in the context of the other UI elements around it.
The use of effective typography is vital for preserving readability in dark mode. In particular, the right font choice can make your design both visually appealing and functional, while the wrong one can cause strain and confusion for users.

Thus, when designing dark themes, it’s essential to prioritize text clarity without sacrificing aesthetics, starting with the fonts you choose and how you set them.
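What that can look like in CSS (illustrative values; the right numbers depend on your typeface):

```css
@media (prefers-color-scheme: dark) {
  body {
    font-weight: 400;       /* very thin weights tend to look wiry on dark surfaces */
    letter-spacing: 0.01em; /* a touch of tracking offsets perceived glow */
    line-height: 1.6;       /* generous leading keeps dense text readable */
  }
}
```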
Colors in dark mode require a delicate balance to ensure accessibility. It’s not as simple as picking from a list of complementary color pairs and basing your design around them. Instead, you must think about how users with visual impairments will experience the dark theme.
While avoiding color combinations like red and green for the sake of colorblind users is a widely known rule, visual impairment is about more than color blindness: conditions such as low vision, reduced contrast sensitivity, and light sensitivity all deserve attention.
As you can see, there are a lot of different considerations. Something you need to account for is that it’s nigh-on impossible to have a solution that will fix all the issues. You can’t test an interface for every single individual who uses it. The best you can do is make it as accessible as possible for as many users as possible, and you can always make adjustments in later iterations if there are major issues for a segment of users.

Even though dark mode doesn’t target only users with visual impairments, their input and ease of use are perhaps the most important.
The role of color perception in dark mode varies significantly among users, especially for those with visual impairments like color blindness or low vision. These conditions can make it challenging to distinguish certain colors on dark backgrounds, which can affect how users navigate and interact with your design.
In particular, some colors that seem vibrant in light mode may appear muted or blend into the background, making it difficult for users to see or interact with key elements. This is exactly why testing your color palette across different displays and lighting conditions is essential to ensure consistency and accessibility. However, you probably won’t be able to test for every single screen type, device, or environmental condition. Once again, make the dark mode interface as accessible as possible, and make adjustments in later iterations based on feedback.
For users with visual impairments, accessible color palettes can make a significant difference in their experience. Interactive elements, such as buttons or links, need to stand out clearly from the rest of the design, using colors that provide strong contrast and clear visual cues.

Slack, for example, does an amazing job of giving users with visual impairments premade theme options, saving them hours of valuable time. Apps that do this tend to fare far better at attracting (and retaining) customers than those that don’t.
Dark mode is often celebrated for its ability to reduce screen glare and blue light, making it more comfortable for users who experience certain visual sensitivities, like eye strain or discomfort from bright screens.
For many, this creates a more pleasant browsing experience, particularly in low-light environments. However, dark mode isn’t a perfect solution for everyone.

Users with astigmatism, for instance, may find it difficult to read light text on a dark background: the contrast can cause the text to blur or create halos, making it harder to focus. Preferences differ, too; some users favor dark mode for its reduced eye strain, while others find it harder to read or simply prefer light mode.
These differences make adaptability important for accommodating users with visual sensitivities. Allow users to toggle between dark and light modes based on their preferences, and for even greater comfort, consider providing options to customize text colors and background shades.
Switching between dark and light modes should also be smooth and unobtrusive. Whether you’re working in a bright office or relaxing in a dimly lit room, the transition should never disrupt your workflow.
On top of that, automatically remembering those preferences for future sessions creates a consistent and thoughtful user experience. These adjustments turn dark mode into a truly personalized feature, tailored to every interaction with the interface.
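Tying these points together, here is a minimal sketch of a toggle that respects the system preference and persists the user’s choice (the `data-theme` attribute, storage key, and button ID are assumptions, not a prescribed API):

```html
<button id="theme-toggle">Toggle theme</button>

<script>
  const root = document.documentElement;
  const stored = localStorage.getItem('theme'); // persisted from a previous visit
  const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;

  // An explicit choice wins; otherwise fall back to the system preference.
  root.dataset.theme = stored ?? (prefersDark ? 'dark' : 'light');

  document.getElementById('theme-toggle').addEventListener('click', () => {
    const next = root.dataset.theme === 'dark' ? 'light' : 'dark';
    root.dataset.theme = next;
    localStorage.setItem('theme', next); // remember for future sessions
  });
</script>
```

The theme styles would then key off `[data-theme="dark"]` selectors rather than the media query alone.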
While dark mode offers benefits like reduced eye strain and energy savings, it still has its limits. Focusing on key elements like contrast, readability, typography, and color perception helps guarantee that your designs are inclusive and user-friendly for all of your users.
Offering dark mode as an optional, customizable feature empowers users to interact with your interface in a way that best suits their needs. Meanwhile, prioritizing accessibility in dark mode design creates a more equitable digital experience for everyone, regardless of their abilities or preferences.