A Week In The Life Of An AI-Augmented Designer

By Lyndon Cerejo
Artificial Intelligence isn’t new, but in November 2022, something changed. The launch of ChatGPT brought AI out of the background and into everyday life. Suddenly, interacting with a machine didn’t feel technical — it felt conversational.
Just this March, ChatGPT overtook Instagram and TikTok as the most downloaded app in the world. That level of adoption shows that millions of everyday users, not just developers or early adopters, are comfortable using AI in casual, conversational ways. People are using AI not just to get answers, but to think, create, plan, and even to help with mental health and loneliness.
In the past two and a half years, people have moved through the Kübler-Ross Change Curve — only instead of grief, it’s AI-induced uncertainty. UX designers, like Kate (who you’ll meet shortly), have experienced something like this:
As designers move into experimentation, they’re not asking, “Can I use AI?” but “How might I use it well?”
Using AI isn’t about chasing the latest shiny object but about learning how to stay human in a world of machines, and use AI not as a shortcut, but as a creative collaborator.
It isn’t about finding, bookmarking, downloading, or hoarding prompts, but experimenting and writing your own prompts.
To bring this to life, we’ll follow Kate, a mid-level designer at a FinTech company, navigating her first AI-augmented design sprint. You’ll see her ups and downs as she experiments with AI, balances human-centered skills with AI tools, decides when to rely on intuition over automation, and reflects critically on the role of AI at each stage of the sprint.
The next two planned articles in this series will explore how to design prompts (Part 2) and guide you through building your own AI assistant (aka CustomGPT; Part 3). Along the way, we’ll spotlight the designerly skills AI can’t replicate (curiosity, empathy, critical thinking, and experimentation) that will set you apart in a world where automation is easy, but people and human-centered design matter even more.
Note: This article was written by a human (with feelings, snacks, and deadlines). The prompts are real, the AI replies are straight from the source, and no language models were overworked — just politely bossed around. All em dashes are the handiwork of MS Word’s autocorrect — not AI. Kate is fictional, but her week is stitched together from real tools, real prompts, real design activities, and real challenges designers everywhere are navigating right now. She will primarily be using ChatGPT, reflecting the popularity of this jack-of-all-trades AI as the place many start their AI journeys before branching out. If you stick around to the end, you’ll find other AI tools that may be better suited for different design sprint activities. Due to the pace of AI advances, your outputs may vary (YOMV), possibly by the time you finish reading this sentence.
Cautionary Note: AI is helpful, but not always private or secure. Never share sensitive, confidential, or personal information with AI tools — even the helpful-sounding ones. When in doubt, treat it like a coworker who remembers everything and may not be particularly good at keeping secrets.
Kate stared at the digital mountain of feedback on her screen: transcripts, app reviews, survey snippets, all waiting to be synthesized. Deadlines loomed. Her calendar was a nightmare. Meanwhile, LinkedIn was ablaze with AI hot takes and success stories. Everyone seemed to have found their “AI groove” — except her. She wasn’t anti-AI. She just hadn’t figured out how it actually fit into her work. She had tried some of the prompts she saw online, played with some AI plugins and extensions, but it felt like an add-on, not a core part of her design workflow.
Her team was focusing on improving financial confidence for Gen Z users of their FinTech app, and Kate planned to use one of her favorite frameworks: the Design Sprint, a five-day, high-focus process that condenses months of product thinking into a single week. Each day tackles a distinct phase (Understand, Sketch, Decide, Prototype, and Test), all designed to move fast, make ideas tangible, and learn from real users before making big bets.

This time, she planned to experiment with a very lightweight version of the design sprint, almost “solo-ish” since her PM and engineer were available for check-ins and decisions, but not present every day. That gave her both space and a constraint, and made it the perfect opportunity to explore how AI could augment each phase of the sprint.
She decided to lean on her designerly behavior of experimentation and learning and integrate AI intentionally into her sprint prep, using it as both a creative partner and a thinking aid. Not with a rigid plan, but with a working hypothesis that AI would at the very least speed her up, if nothing else.
She wouldn’t just be designing and testing a prototype, but prototyping and testing what it means to design with AI, while still staying in the driver’s seat.
Follow Kate along her journey through her first AI-powered design sprint: from curiosity to friction and from skepticism to insight.
The first day of a design sprint is spent understanding the user, their problems, business priorities, and technical constraints, and narrowing down the problem to solve that week.
This morning, Kate had transcripts from recent user interviews and customer feedback from the past year from app stores, surveys, and their customer support center. Typically, she would set aside a few days to process everything, coming out with glazed eyes and a few new insights. This time, she decided to use ChatGPT to summarize that data: “Read this customer feedback and tell me how we can improve financial literacy for Gen Z in our app.”
ChatGPT’s outputs were underwhelming to say the least. Disappointed, she was about to give up when she remembered an infographic about good prompting that she had emailed herself. She updated her prompt based on those recommendations:
By the time she AeroPressed her next cup of coffee, ChatGPT had completed its analysis, highlighting blockers like jargon, lack of control, fear of making the wrong choice, and a need for blockchain wallets. Wait, what? That last one felt off.

Kate searched her sources and confirmed her hunch: an AI hallucination! Even with the best of prompts, AI sometimes makes things up based on trendy concepts from its training data rather than your actual data. Kate updated her prompt with constraints so that ChatGPT would only use the data she had uploaded, and would cite examples from that data in its results. Eighteen seconds later, the updated results contained no mention of blockchain or any other unexpected findings.
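A constrained prompt along these lines (an illustrative sketch, not Kate’s exact wording) shows the pattern:

```
Act as a UX researcher. Use ONLY the uploaded interview transcripts,
app store reviews, and survey responses; do not draw on outside knowledge.
Identify the top blockers to financial confidence for our Gen Z users.
For every blocker, cite at least one verbatim quote from the uploaded data.
If the data does not support a finding, say so instead of guessing.
```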
By lunch, Kate had the makings of a research summary that would have taken much, much longer, and a whole lot of caffeine.
That afternoon, Kate and her product partner plotted the pain points on the Gen Z app journey. The emotional mapping highlighted the most critical moment: the first step of a financial decision, like setting a savings goal or choosing an investment option. That was when fear, confusion, and lack of confidence held people back.
AI synthesis combined with human insight helped them define the problem statement as: “How might we help Gen Z users confidently take their first financial action in our app, in a way that feels simple, safe, and puts them in control?”
As she wrapped up for the day, Kate jotted down her reflections on her first day as an AI-augmented designer:
There’s nothing like learning by doing. I’ve been reading about AI and tinkering around, but took the plunge today. Turns out AI is much more than a tool, but I wouldn’t call it a co-pilot. Yet. I think it’s like a sharp intern: it has a lot of information, is fast, eager to help, but it lacks context, needs supervision, and can surprise you. You have to give it clear instructions, double-check its work, and guide and supervise it. Oh, and maintain boundaries by not sharing anything I wouldn’t want others to know.
Today was about listening — to users, to patterns, to my own instincts. AI helped me sift through interviews fast, but I had to stay curious to catch what it missed. Some quotes felt too clean, like the edges had been smoothed over. That’s where observation and empathy kicked in. I had to ask myself: what’s underneath this summary?
Critical thinking was the designerly skill I had to exercise most today. It was tempting to take the AI’s synthesis at face value, but I had to push back by re-reading transcripts, questioning assumptions, and making sure I wasn’t outsourcing my judgment. Turns out, the thinking part still belongs to me.
Day 2 of a design sprint focuses on solutions, starting by remixing and improving existing ideas, followed by people sketching potential solutions.
Optimistic, yet cautious after her experience yesterday, Kate started thinking about ways she could use AI today, while brewing her first cup of coffee. By cup two, she was wondering if AI could be a creative teammate. Or a creative intern at least. She decided to ask AI for a list of relevant UX patterns across industries. Unlike yesterday’s complex analysis, Kate was asking for inspiration, not insight, which meant she could use a simpler prompt: “Give me 10 unique examples of how top-rated apps reduce decision anxiety for first-time users — from FinTech, health, learning, or ecommerce.”
She received her results in a few seconds, but there were only 6, not the 10 she asked for. She expanded her prompt for examples from a wider range of industries. While reviewing the AI examples, Kate realized that one had accessibility issues. To be fair, the results met Kate’s ask since she had not specified accessibility considerations. She then went pre-AI and brainstormed examples with her product partner, coming up with a few unique local examples.
Later that afternoon, Kate went full human during Crazy 8s by putting a marker to paper and sketching 8 ideas in 8 minutes to rapidly explore different directions. Wondering if AI could live up to its generative nature, she uploaded pictures of her top 3 sketches and prompted AI to act as “a product design strategist experienced in Gen Z behavior, digital UX, and behavioral science”, gave it context about the problem statement, stage in the design sprint, and explicitly asked AI the following:
The results included ideas that Kate and her product partner hadn’t considered, including a progress bar that started at 20% (to build confidence), and a sports-like “stock bracket” for first-time investors.

Not bad, thought Kate, as she cherry-picked elements, combined and built on these ideas in her next round of sketches. By the end of the day, they had a diverse set of sketched solutions — some original, some AI-augmented, but all exploring how to reduce fear, simplify choices, and build confidence for Gen Z users taking their first financial step. With five concept variations and a few rough storyboards, Kate was ready to start converging on day 3.
Today was creatively energizing yet a little overwhelming! I leaned hard on AI to act as a creative teammate. It delivered a few unexpected ideas and remixed my Crazy 8s into variations I never would’ve thought of!
It also reinforced the need to stay grounded in the human side of design. AI was fast — too fast, sometimes. It spit out polished-sounding ideas that sounded right, but I had to slow down, observe carefully, and ask: Does this feel right for our users? Would a first-time user feel safe or intimidated here?
Critical thinking helped me separate what mattered from what didn’t. Empathy pulled me back to what Gen Z users actually said, and kept their voices in mind as I sketched. Curiosity and experimentation were my fuel. I kept tweaking prompts, remixing inputs, and seeing how far I could stretch a concept before it broke. Visual communication helped translate fuzzy AI ideas into something I could react to — and more importantly, test.
Design sprint teams spend Day 3 critiquing each of their potential solutions to shortlist those that have the best chance of achieving their long-term goal. The winning scenes from the sketches are then woven into a prototype storyboard.
Design sprint Wednesday was Kate’s least favorite day. After all the generative energy of Sketching Tuesday, today she would have to decide on one clear solution to prototype and test. She was unsure if AI would be much help with judging tradeoffs or narrowing down options, and it wouldn’t be able to critique like a team. Or could it?
Kate reviewed each of the five concepts, noting strengths, open questions, and potential risks. Curious about how AI would respond, she uploaded images of three different design concepts and prompted ChatGPT for strengths and weaknesses. AI’s critique was helpful in summarizing the pros and cons of different concepts, including a few points she had not considered — like potential privacy concerns.

She asked a few follow-up questions to probe the reasoning behind its critique. Wondering if she could simulate a team critique by prompting ChatGPT differently, Kate asked it to use the Six Thinking Hats technique. The results came back dense, overwhelming, and unfocused. The AI couldn’t prioritize, and it couldn’t see the gaps Kate instinctively noticed: friction in onboarding, misaligned tone, unclear next steps.
In that moment, the promise of AI felt overhyped. Kate stood up, stretched, and seriously considered ending her experiments with the AI-driven process. But she paused. Maybe the problem wasn’t the tool. Maybe it was how she was using it. She made a note to experiment when she wasn’t on a design sprint clock.
She returned to her sketches, this time laying them out on the wall. No screens, no prompts. Just markers, sticky notes, and Sharpie scribbles. Human judgment took over. Kate worked with her product partner to finalize the solution to test on Friday and spent the next hour storyboarding the experience in Figma.
Kate re-engaged with AI as a reviewer, not a decider. She prompted it for feedback on the storyboard and was surprised to see it spit out detailed design, content, and micro-interaction suggestions for each of the steps of the storyboarded experience. A lot of food for thought, but she’d have to judge what mattered when she created her prototype. But that wasn’t until tomorrow!
AI exposed a few of my blind spots in the critique, which was good, but it basically pointed out that multiple options “could work”. I had to rely on my critical thinking and instincts to weigh options logically, emotionally, and contextually in order to choose a direction that was the most testable and aligned with the user feedback from Day 1.
I was also surprised by the suggestions it came up with while reviewing my final storyboard, but I will need a fresh pair of eyes and all the human judgment I can muster tomorrow.
Empathy helped me walk through the flow like I was a new user. Visual communication helped pull it all together by turning abstract steps into a real storyboard for the team to see instead of imagining.
TO DO: Experiment with prompting around the Six Thinking Hats for different perspectives.
On Day 4, the team usually turns the storyboard from the previous day into a prototype that can be tested with users on Day 5. The prototype doesn’t need to be fully functional; a simulated experience is sufficient to gather user feedback.
Kate’s prototype day often consisted of marathon Figma Design sessions and late-night pizza dinners. She was hoping AI would change that today. She fed yesterday’s storyboard to ChatGPT and asked it for screens. It took a while to generate, but she was excited to see a screen flow gradually appear on her screen, except that it had 3 ¾ screens, instead of the 6 frames from her storyboard, as you can see in the image below.

She tried different prompts but couldn’t get a complete flow. She was excited when it offered to create a Figma file. She waited. And waited. And reminded it that she was right there waiting. It finally gave her a link — a link that didn’t work. That made her question why she needed prototype screens in Figma at all. Could one of those no-code tools do the job? Kate remembered that she had just received beta access to Figma Make (Figma’s AI prompt-to-prototype tool) and decided to put it to the test.
She shared her storyboard, added context about the prototype for a design sprint, the problem, and a brief description of the audience. In a few seconds, she saw code being generated and a running description on the left. It seemed like it would take some time, so she headed to the cafeteria for lunch.

When Kate returned, she was surprised to see Figma Make had created an app along with a detailed explanation:
“I’ve created a complete finance app prototype with five key screens that guide users through setting financial goals in a friendly, low-pressure way. The app features smooth animations with Framer Motion, a progress indicator to show users where they are in the journey, and emotionally supportive visuals and copy throughout the experience. Each screen serves a specific purpose: welcoming users, helping them define relatable goals, building their plan with visual feedback, presenting a simple summary, and offering next steps without pressure.
You might want to:
- Add personalization options like user profiles or avatars.
- Implement a dark mode toggle for accessibility.
- Create additional goal templates and saving strategies.”
Since Figma Make had built the prototype, not her, Kate felt like she was User 0, and clicked her way through the interactive prototype, making a mental note of what worked, what didn’t. She was impressed to see that she could not only select a common goal (buy a home, save for education, and so on) but she could also create her own goal (hello De’Longhi Coffee Maker) and it carried over for the rest of the experience. This was something she had never been able to do in Figma Design!
Despite some obvious misses like a missing header and navigation, and some buttons not working, she was impressed! Kate tried the option to ‘Publish’ and it gave her a link that she immediately shared with her product and engineering partners. A few minutes later, they joined her in the conference room, exploring it together. The engineer scanned the code, didn’t seem impressed, but said it would work as a disposable prototype.
Kate prompted Figma Make to add an orange header and app navigation, and this time the trio kept their eyes peeled as they saw the progress in code and in English. The results were pretty good. They spent the next hour making changes to get it ready for testing. Even though he didn’t admit it, the engineer seemed impressed with the result, if not the code.

By late afternoon, they had a functioning interactive prototype. Kate fed ChatGPT the prototype link and asked it to create a usability testing script. It came up with a basic, but complete test script, including a checklist for observers to take notes.

Kate went through the script carefully and updated it to add probing questions about AI transparency, emotional check-ins, more specific task scenarios, and a post-test debrief that looped back to the sprint goal.
Kate did a dry run with her product partner, who teased her: “Did you really need me? Couldn’t your AI do it?” It hadn’t occurred to her, but she was now curious!
“Act as a Gen Z user seeing this interactive prototype for the first time. How would you react to the language, steps, and tone? What would make you feel more confident or in control?”
It worked! ChatGPT simulated user feedback for the first screen and asked if she wanted it to continue. “Yes, please,” she typed. A few seconds later, she was reading what could have very well been a screen-by-screen transcript from a test.

Kate was still processing what she had seen as she drove home, happy she didn’t have to stay late. The simulated test using AI appeared impressive at first glance. But the more she thought about it, the more disturbing it became. The output didn’t mention what the simulated user clicked, and if she had asked, she probably would have received an answer. But how useful would that be? After almost missing her exit, she forced herself to think about eating a relaxed meal at home instead of her usual Prototype-Thursday-Multitasking-Pizza-Dinner.
Today was the most meta I’ve felt all week: building a prototype about AI, with AI, while being coached by AI. And it didn’t all go the way I expected.
While ChatGPT didn’t deliver prototype screens, Figma Make coded a working, interactive prototype with interactions I couldn’t have built in Figma Design. I used curiosity and experimentation today, by asking: What if I reworded this? What if I flipped that flow?
AI moved fast, but I had to keep steering. But I have to admit that tweaking the prototype by changing the words, not code, felt like magic!
Critical thinking isn’t optional anymore — it is table stakes.
My impromptu ask of ChatGPT to simulate a Gen Z user testing my flow? That part both impressed and unsettled me. I’m going to need time to process this. But that can wait until next week. Tomorrow, I test with 5 Gen Zs — real people.
Day 5 in a design sprint is a culmination of the week’s work from understanding the problem, exploring solutions, choosing the best, and building a prototype. It’s when teams interview users and learn by watching them react to the prototype and seeing if it really matters to them.
As Kate prepped for the tests, she grounded herself in the sprint problem statement and the users: “How might we help Gen Z users confidently take their first financial action in our app — in a way that feels simple, safe, and puts them in control?”
She clicked through the prototype one last time — the link still worked! And just in case, she also had screenshots saved.
Kate moderated the five tests while her product and engineering partners observed. The prototype may have been AI-generated, but the reactions were human. She observed where people hesitated and what made them feel safe and in control. Depending on the participant, she would pivot, go off-script, and ask clarifying questions to get deeper insights.
After each session, she dropped the transcripts and their notes into ChatGPT, asking it to summarize that user’s feedback into pain points, positive signals, and any relevant quotes. At the end of the five rounds, Kate prompted it for recurring themes to use as input for their reflection and synthesis.
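A per-session prompt along these lines (illustrative wording, not Kate’s actual prompt) can keep the summaries consistent across all five sessions:

```
Here is the transcript and observer notes from usability test session 3 of 5.
Summarize ONLY this session into three sections: (1) pain points,
(2) positive signals, (3) verbatim quotes worth sharing.
Do not generalize across sessions, and do not invent feedback
that is not in the transcript or notes.
```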

The trio combed through the results, with an eye out for any suspicious AI-generated results. They ran into one: “Users Trust AI”. Not one user mentioned or clicked the ‘Why this?’ link, but AI possibly assumed transparency features worked because they were available in the prototype.
They agreed that the prototype resonated with users, allowing all to easily set their financial goals, and identified a couple of opportunities for improvement: better explaining AI-generated plans and celebrating “win” moments after creating a plan. Both were fairly easy to address during their product build process.
That was a nice end to the week: another design sprint wrapped, and Kate’s first AI-augmented design sprint! She started Monday anxious about falling behind, overwhelmed by options. She closed Friday confident in a validated concept, grounded in real user needs, and empowered by tools she now knew how to steer.
Test driving my prototype with AI yesterday left me impressed and unsettled. But today’s tests with people reminded me why we test with real users, not proxies or people who interact with users, but actual end users. And GenAI is not the user. Five tests put my designerly skill of observation to the test.
GenAI helped summarize the test transcripts quickly but snuck in one last hallucination this week — about AI! With AI, don’t trust — always verify! Critical thinking is not going anywhere.
AI can move fast with words, but only people can use empathy to move beyond words to truly understand human emotions.
My next goal is to learn to talk to AI better, so I can get better results.
Over the course of five days, Kate explored how AI could fit into her UX work, not by reading articles or LinkedIn posts, but by doing. Through daily experiments, iterations, and missteps, she got comfortable with AI as a collaborator to support a design sprint. It accelerated every stage: synthesizing user feedback, generating divergent ideas, giving feedback, and even spinning up a working prototype, as shown below.

What was clear by Friday was that speed isn’t insight. While AI produced outputs fast, it was Kate’s designerly skills — curiosity, empathy, observation, visual communication, experimentation, and most importantly, critical thinking and a growth mindset — that turned data and patterns into meaningful insights. She stayed in the driver’s seat, verifying claims, adjusting prompts, and applying judgment where automation fell short.
She started the week on Monday, overwhelmed, her confidence dimmed by uncertainty and the noise of AI hype. She questioned her relevance in a rapidly shifting landscape. By Friday, she not only had a validated concept but had also reshaped her entire approach to design. She had evolved: from AI-curious to AI-confident, from reactive to proactive, from unsure to empowered. Her mindset had shifted: AI was no longer a threat or trend; it was like a smart intern she could direct, critique, and collaborate with. She didn’t just adapt to AI. She redefined what it meant to be a designer in the age of AI.
The experience raised deeper questions: How do we make sure AI-augmented outputs are not made up? How should we treat AI-generated user feedback? Where do ethics and human responsibility intersect?
Besides a validated solution to their design sprint problem, Kate had prototyped a new way of working as an AI-augmented designer.
The question now isn’t just “Should designers use AI?” It’s “How do we work with AI responsibly, creatively, and consciously?” That’s what the next article will explore: designing your interactions with AI using a repeatable framework.
Poll: If you could design your own AI assistant, what would it do?
Share your idea, and in the spirit of learning by doing, we’ll build one together from scratch in the third article of this series: Building your own CustomGPT.
Tools
As mentioned earlier, ChatGPT was the general-purpose LLM Kate leaned on, but you could swap it out for Claude, Gemini, Copilot, or other competitors and likely get similar results (or at least similarly weird surprises). Here are some alternate AI tools that might suit each sprint stage even better. Note that with dozens of new AI tools popping up every week, this list is far from exhaustive.
| Stage | Tools | Capability |
|---|---|---|
| Understand | Dovetail, UserTesting’s Insights Hub, Marvin | Summarize & Synthesize data |
| Sketch | Any LLM, Musely | Brainstorm concepts and ideas |
| Decide | Any LLM | Critique/provide feedback |
| Prototype | Uizard, UXPilot, Visily, Krisspy, Figma Make, Lovable, Bolt | Create wireframes and prototypes |
| Test | UserTesting, UserInterviews, PlaybookUX, Maze, plus tools from the Understand stage | Moderated and unmoderated user tests/synthesis |
The Double-Edged Sustainability Sword Of AI In Web Design

By Alex Williams
Artificial intelligence is increasingly automating large parts of design and development workflows — tasks once reserved for skilled designers and developers. This streamlining can dramatically speed up project delivery. Even back in 2023, AI-assisted developers were found to complete tasks twice as fast as those without. And AI tools have advanced massively since then.
Yet this surge in capability raises a pressing dilemma:
Does the environmental toll of powering AI infrastructure eclipse the efficiency gains?
We can create optimized, more efficient websites faster than ever, but the global consumption of energy by AI continues to climb.
As awareness grows around the digital sector’s hidden ecological footprint, web designers and businesses must grapple with this double-edged sword, weighing the grid-level impacts of AI against the cleaner, leaner code it can produce.
There’s no disputing that AI-driven automation has introduced higher speeds and efficiencies to many of the mundane aspects of web design. Tools that automatically generate responsive layouts, optimize image sizes, and refactor bloated scripts should free designers to focus on the creative side of design and development.
By some interpretations, these accelerated project timelines represent a reduction in the energy required for development: speedier production should mean less energy used.
Beyond automation, AI excels at identifying inefficiencies in code and design, since it can assess a project holistically. Advanced algorithms can parse stylesheets and JavaScript files to detect unused selectors or redundant logic, producing leaner, faster-loading pages. For example, AI-driven caching can increase cache hit rates by 15% by improving data availability and reducing latency. This means more user requests are served directly from the cache, reducing the need for data retrieval from the main server, which reduces energy expenditure.
AI tools can also convert assets to next-generation image formats like AVIF and WebP, and selectively compress them based on content sensitivity. This slashes media payloads without perceptible quality loss, for example by using Generative Adversarial Networks (GANs) that learn compact representations of data.
AI’s impact also brings sustainability benefits via user experience (UX). AI-driven personalization engines can dynamically serve only the content a visitor needs, which eliminates superfluous scripts or images that they don’t care about. This not only enhances perceived performance but reduces the number of server requests and data transferred, cutting downstream energy use in network infrastructure.
With the right prompts, generative AI can be an accessibility tool and ensure sites meet inclusive design standards by checking against accessibility standards, reducing the need for redesigns that can be costly in terms of time, money, and energy.
So, taken in isolation, AI can and already does act as an important tool for making web design more efficient and sustainable. But do these gains outweigh the cost of the resources required to build and maintain these tools?
Yet the carbon savings engineered at the page level must be balanced against the prodigious resource demands of AI infrastructure. Large-scale AI hinges on data centers that already account for roughly 2% of global electricity consumption, a figure projected to swell as AI workloads grow.
The International Energy Agency warns that electricity consumption from data centers could more than double by 2030 due to the increasing demand for AI tools, reaching nearly the current consumption of Japan. Training state-of-the-art language models generates carbon emissions on par with hundreds of transatlantic flights, and inference workloads, serving billions of requests daily, can rival or exceed training emissions over a model’s lifetime.
Image generation tasks represent an even steeper energy hill to climb. Producing a single AI-generated image can consume energy equivalent to charging a smartphone.
As generative design and AI-based prototyping become more common in web development, the cumulative energy footprint of these operations can quickly undermine the carbon savings achieved through optimized code.
Water consumption forms another hidden cost. Data centers rely heavily on evaporative cooling systems that can draw between one and five million gallons of water per day, depending on size and location, placing stress on local supplies, especially in drought-prone regions. Studies estimate a single ChatGPT query may consume up to half a liter of water when accounting for direct cooling requirements, with broader AI use potentially demanding billions of liters annually by 2027.
Resource depletion and electronic waste are further concerns. High-performance components underpinning AI services, like GPUs, can have very short lifespans due to both wear and tear and being superseded by more powerful hardware. AI alone could add between 1.2 and 5 million metric tons of e-waste by 2030, due to the continuous demand for new hardware, amplifying one of the world’s fastest-growing waste streams.
Mining for the critical minerals in these devices often proceeds under unsustainable conditions due to a lack of regulations in many of the environments where rare metals can be sourced, and the resulting e-waste, rich in toxic metals like lead and mercury, poses another form of environmental damage if not properly recycled.
Compounding these physical impacts is a lack of transparency in corporate reporting. Energy and water consumption figures for AI workloads are often aggregated under general data center operations, which obscures the specific toll of AI training and inference among other operations.
And the energy consumption reporting of the data centers themselves has been found to be obfuscated.
Reports estimate that the emissions of data centers are up to 662% higher than initially reported due to misaligned metrics, and ‘creative’ interpretations of what constitutes an emission. This makes it hard to grasp the true scale of AI’s environmental footprint, leaving designers and decision-makers unable to make informed, environmentally conscious decisions.
Some industry advocates argue that AI’s energy consumption isn’t as catastrophic as headlines suggest, challenging ‘alarmist’ projections and claiming that AI’s current contribution of ‘just’ 0.02% of global energy consumption isn’t a cause for concern.
Proponents also highlight AI’s supposed environmental benefits. There are claims that AI could reduce economy-wide greenhouse gas emissions by 0.1% to 1.1% through efficiency improvements. Google reported that five AI-powered solutions removed 26 million metric tons of emissions in 2024. The optimistic view holds that AI’s capacity to optimize everything from energy grids to transportation systems will more than compensate for its data center demands.
However, recent scientific analysis suggests these arguments underestimate AI’s true impact. MIT found that data centers already consume 4.4% of all US electricity, with projections showing AI alone could use as much power as 22% of US households by 2028. Research indicates AI-specific electricity use could triple from current levels by 2028. Moreover, Harvard research revealed that data centers use electricity with 48% higher carbon intensity than the US average.
Despite the environmental costs, AI’s use in business, particularly web design, isn’t going away anytime soon, with 70% of large businesses looking to increase their AI investments in pursuit of efficiencies. AI’s immense impact on productivity means those not using it are likely to be left behind. Environmentally conscious businesses and designers must therefore find the right balance between AI’s environmental cost and the efficiency gains it brings.
Before you plug in any AI magic, start by making sure the bones of your site are sustainable. Lean web fundamentals, like system fonts instead of hefty custom files, minimal JavaScript, and judicious image use, can slash a page’s carbon footprint by stripping out redundancies that increase energy consumption. For instance, the global average web page emits about 0.8g of CO₂ per view, whereas sustainably crafted sites can see a roughly 70% reduction.
Once that lean baseline is in place, AI-driven optimizations (image format selection, code pruning, responsive layout generation) aren’t adding to bloat but building on efficiency, ensuring every joule spent on AI actually yields downstream energy savings in delivery and user experience.
In order to make sustainable tool choices, transparency and awareness are the first steps. Many AI vendors have pledged to work towards sustainability, but independent audits are necessary, along with clear, cohesive metrics. Standardized reporting on energy and water footprints will help us understand the true cost of AI tools, allowing for informed choices.
You can look for providers that publish detailed environmental reports and hold third-party renewable energy certifications. Many major providers now offer PUE (Power Usage Effectiveness) metrics alongside renewable energy matching to demonstrate real-world commitments to clean power.
When integrating AI into your build pipeline, choosing lightweight, specialized models for tasks like image compression or code linting can be more sustainable than full-scale generative engines. Task-specific tools often use considerably less energy than general AI models, since a general model must first work out what task you want it to complete.
There are a variety of guides and collectives out there that can help you choose the ‘green’ web hosts best for your business. When choosing AI-model vendors, look at options that prioritize ‘efficiency by design’: smaller, pruned models and edge-compute deployments can cut energy use by up to 50% compared to monolithic cloud-only models. Because they’re trained for specific tasks, they don’t have to expend energy working out what the task is and how to go about it.
Once you’ve chosen conscientious vendors, optimize how you actually use AI. You can take steps like batching non-urgent inference tasks to reduce idle GPU time, an approach shown to lower overall energy consumption compared to ad-hoc requests, since the GPU runs only when it’s needed rather than constantly.
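As a minimal sketch of this idea in JavaScript: the `runInference()` stub and the batching window below are hypothetical placeholders for whatever batch endpoint your model provider offers, not a real API.

```js
// Stand-in for your model provider's batch endpoint (hypothetical).
async function runInference(prompts) {
  // e.g., POST all prompts in one request, one result per prompt
  return prompts.map((p) => `result for: ${p}`);
}

// Collect non-urgent inference requests and flush them as one batch,
// so the GPU-backed service is hit periodically instead of per request.
const queue = [];
const BATCH_INTERVAL_MS = 60_000; // illustrative batching window

function enqueueInference(prompt) {
  return new Promise((resolve) => queue.push({ prompt, resolve }));
}

setInterval(async () => {
  if (queue.length === 0) return;
  const batch = queue.splice(0, queue.length);
  const results = await runInference(batch.map((job) => job.prompt));
  batch.forEach((job, i) => job.resolve(results[i]));
}, BATCH_INTERVAL_MS);
```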
Smarter prompts can also make AI usage slightly more sustainable. Sam Altman of OpenAI revealed early in 2025 that people’s propensity for saying ‘please’ and ‘thank you’ to LLMs is costing millions of dollars and wasting energy, as the generative AI has to process extra phrases that aren’t relevant to its task. Keep your prompts direct and to the point, and include the context required to complete the task, to reduce the need to reprompt.
On top of being responsible with your AI tool choice and usage, there are other steps you can take to offset the carbon cost of AI and still enjoy the efficiency benefits it brings. Organizations can cut their own emissions and use carbon offsetting to shrink what remains. Combined with the apparent sustainability benefits of AI use, this approach can help mitigate the harmful impacts of energy-hungry AI.
You can ensure that you’re using green server hosting (servers run on sustainable energy) for your own site and cloud needs beyond AI, and refine your content delivery network (CDN) to ensure your sites and apps are serving compressed, optimized assets from edge locations, cutting the distance data must travel, which should reduce the associated energy use.
Organizations and individuals, particularly those with thought leadership status, can be advocates pushing for transparent sustainability specifications. This involves both lobbying politicians and regulatory bodies to introduce and enforce sustainability standards and ensuring that other members of the public are kept aware of the environmental costs of AI use.
It’s only through collective action that we’re likely to see strict enforcement of both sustainable AI data centers and the standardization of emissions reporting.
Regardless, it remains a tricky path to walk, along the double-edged sword of AI’s use in web design.
Use AI too much, and you’re contributing to its massive carbon footprint. Use it too little, and you’re likely to be left behind by rivals that are able to work more efficiently and deliver projects much faster.
The best environmentally conscious designers and organizations can currently do is attempt to navigate it as best they can and stay informed on best practices.
We can’t dispute that AI use in web design delivers on its promise of agility, personalization, and resource savings at the page-level. Yet without a holistic view that accounts for the environmental demands of AI infrastructure, these gains risk being overshadowed by an expanding energy and water footprint.
Achieving the balance between enjoying AI’s efficiency gains and managing its carbon footprint requires transparency, targeted deployment, human oversight, and a steadfast commitment to core sustainable web practices.
Beyond The Hype: What AI Can Really Do For Product Design

By Nikita Samutin
These days, it’s easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. What’s much harder to find is a clear view of how AI is actually integrated into the everyday workflow of a product designer — not for experimentation, but for real, meaningful outcomes.
I’ve gone through that journey myself: testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, I’ve built a simple, repeatable workflow that significantly boosts my productivity.
In this article, I’ll share what’s already working and break down some of the most common objections I’ve encountered — many of which I’ve faced personally.
Pushback: “Whenever I ask AI to suggest ideas, I just get a list of clichés. It can’t produce the kind of creative thinking expected from a product designer.”
That’s a fair point. AI doesn’t know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to “feed it” all the documentation you have. But that’s a common mistake as it often leads to even worse results: the context gets flooded with irrelevant information, and the AI’s answers become vague and unfocused.
Current-gen models can technically process thousands of words, but the longer the input, the higher the risk of missing something important, especially content buried in the middle. This is known as the “lost in the middle” problem.
To get meaningful results, AI doesn’t just need more information — it needs the right information, delivered in the right way. That’s where the RAG approach comes in.
Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary — a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of “card catalog,” called a vector database.
When you ask a question, the assistant doesn’t reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer.
Let’s break it down:
Typical chat interaction
It’s like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is “in front of them,” but it’s easy to miss something, especially if it’s in the middle. This is exactly what the “lost in the middle” issue refers to.
RAG approach
You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. It’s faster and more accurate, but it introduces a few new risks:

These aren’t reasons to avoid RAG or AI altogether. Most of them can be avoided with better preparation of your knowledge base and more precise prompts. So, where do you start?
These three short documents will give your AI assistant just enough context to be genuinely helpful:
Each document should focus on a single topic and ideally stay within 300–500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant.
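For a concrete picture of the retrieval step, here is a minimal JavaScript sketch under stated assumptions: the toy `embed()` function below is a stand-in for a real embeddings API, and the sample chunks and top-k value are illustrative.

```js
// Toy embedding for illustration: a 64-dimension bag-of-words hash.
// In practice, you'd call your provider's embeddings API instead.
function embed(text) {
  const vector = new Array(64).fill(0);
  for (const word of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    let h = 0;
    for (const ch of word) h = (h * 31 + ch.charCodeAt(0)) % 64;
    vector[h] += 1;
  }
  return vector;
}

// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Sample knowledge base: in practice, one entry per 300–500-word chunk.
const chunks = [
  'Group gift contributions let several users fund a shared goal.',
  'Personal savings goals track an individual saving toward a target.',
  'Notifications are sent when a goal reaches 50% and 100%.',
];

// Index each chunk once, then retrieve only the top-k most similar.
const index = chunks.map((text) => ({ text, vector: embed(text) }));

function retrieve(query, k = 3) {
  const qv = embed(query);
  return index
    .map((e) => ({ ...e, score: cosine(qv, e.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((e) => e.text); // these chunks go into the LLM prompt
}

console.log(retrieve('How do shared goals differ from personal goals?', 2));
```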
In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations:
Takeaway: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English: both the data and the queries. This advice applies specifically to RAG setups; for regular chat interactions, you’re free to use other languages. This challenge is also highlighted in a 2024 study on multilingual retrieval.
Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas — the way a mid-level or senior designer would.
Here’s an example of a prompt that works well for me:
Your task is to perform a comparative analysis of two features: “Group gift contributions” (described in group_goals.txt) and “Personal savings goals” (described in personal_goals.txt).
The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use.
Please include:
- Possible overlaps in user goals, actions, or scenarios;
- Potential confusion if both features are launched at the same time;
- Any architectural or business-level conflicts (e.g. roles, notifications, access rights, financial logic);
- Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI/UX techniques;
- Onboarding screens or explanatory elements that might help users understand both features.
If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on.
If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just more information, but better, more structured information.
Building a usable knowledge base isn’t difficult. And you don’t need a full-blown RAG system to get started. Many of these principles work even in a regular chat: well-organized content and a clear question can dramatically improve how helpful and relevant the AI’s responses are. That’s your first step in turning AI from a novelty into a practical tool in your product design workflow.
Pushback: “AI only generates obvious solutions and can’t even build a proper user flow. It’s faster to do it manually.”
That’s a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective.
For example, I needed to prototype a gamified element for a limited-time promotion. The idea was to give users a lottery ticket they could “flip” to reveal a prize. I couldn’t recreate the 3D animation I had in mind in Figma, either manually or using any available plugins. So I described the idea to Claude 4 in Figma Make, and within a few minutes, without writing a single line of code, I had exactly what I needed.
At the prototyping stage, AI can be a strong creative partner in two areas:
AI can also be applied to multi-screen prototypes, but it’s not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks — individual screens, elements, or animations — where it can kick off the thinking process and save hours of trial and error.
A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed.
Here’s another valuable way to use AI in design — as a stress-testing tool. Back in 2023, Google Research introduced PromptInfuser, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasn’t to generate new UI, but to check how well AI could operate inside existing layouts — placing content into specific containers, handling edge-case inputs, and exposing logic gaps early.
The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input — a clear gain in design accuracy, not just speed.
That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them.
Pushback: “AI can’t match our visual style. It’s easier to just do it by hand.”
This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often don’t feel like they belong in your product. They tend to be either overly decorative or overly simplified.
And this is a real limitation. In my experience, today’s models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles. I tried several approaches:

So yes, AI still can’t help you finalize your UI. It doesn’t replace hand-crafted design work. But it’s very useful in other ways.
AI won’t save you five hours of high-fidelity design time, since you’ll probably spend that long fixing its output. But as a visual sparring partner, it’s already strong. If you treat it like a source of alternatives and fresh perspectives, it becomes a valuable creative collaborator.
Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we delved deeper into UX: mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts.
As Vitaly Friedman rightly pointed out in one of his columns, trying to replace real UX interviews with AI can lead to false conclusions as models tend to generate an average experience, not a real one. The strength of AI isn’t in inventing data but in processing it at scale.
Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages.
Simply counting the percentages for each of the five predefined reasons wasn’t enough. I wanted to know:
The real challenge was… figuring out what cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done “for me” by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldn’t have been able to reach that level of insight on my own at all.

AI enables near real-time work with large data sets. But most importantly, it frees up your time and energy for what’s truly valuable: asking the right questions.
A few practical notes: Working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.
AI in design is only as good as the questions you ask it. It doesn’t do the work for you. It doesn’t replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it’s still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer.
But increasingly, AI is becoming the one who suggests, sharpens, and accelerates. Don’t wait to build the perfect AI workflow. Start small. And that might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.
The Power Of The Intl API: A Definitive Guide To Browser-Native Internationalization

By Fuqiao Xue
It’s a common misconception that internationalization (i18n) is simply about translating text. While crucial, translation is merely one facet. One of the complexities lies in adapting information for diverse cultural expectations: How do you display a date in Japan versus Germany? What’s the correct way to pluralize an item in Arabic versus English? How do you sort a list of names in various languages?
Many developers have relied on weighty third-party libraries or, worse, custom-built formatting functions to tackle these challenges. These solutions, while functional, often come with significant overhead: increased bundle size, potential performance bottlenecks, and the constant struggle to keep up with evolving linguistic rules and locale data.
Enter the ECMAScript Internationalization API, more commonly known as the Intl object. This silent powerhouse, built directly into modern JavaScript environments, is an often-underestimated, yet incredibly potent, native, performant, and standards-compliant solution for handling data internationalization. It’s a testament to the web’s commitment to being worldwide, providing a unified and efficient way to format numbers, dates, lists, and more, according to specific locales.
Intl And Locales: More Than Just Language Codes

At the heart of Intl lies the concept of a locale. A locale is far more than just a two-letter language code (like en for English or es for Spanish). It encapsulates the complete context needed to present information appropriately for a specific cultural group. This includes:
- A language code (e.g., en, es, fr).
- An optional script code (e.g., Latn for Latin, Cyrl for Cyrillic). For example, zh-Hans for Simplified Chinese vs. zh-Hant for Traditional Chinese.
- An optional region code (e.g., US for United States, GB for Great Britain, DE for Germany). This is crucial for variations within the same language, such as en-US vs. en-GB, which differ in date, time, and number formatting.

Typically, you’ll want to choose the locale according to the language of the web page. This can be determined from the lang attribute:
```js
// Get the page's language from the HTML lang attribute
const pageLocale = document.documentElement.lang || 'en-US'; // Fallback to 'en-US'
```
Occasionally, you may want to override the page locale with a specific locale, such as when displaying content in multiple languages:
// Force a specific locale regardless of page language
const tutorialFormatter = new Intl.NumberFormat('zh-CN', { style: 'currency', currency: 'CNY' });
console.log(`Chinese example: ${tutorialFormatter.format(199.99)}`); // Output: ¥199.99
In some cases, you might want to use the user’s preferred language:
// Use the user's preferred language
const browserLocale = navigator.language || 'ja-JP';
const formatter = new Intl.NumberFormat(browserLocale, { style: 'currency', currency: 'JPY' });
When you instantiate an Intl formatter, you can optionally pass one or more locale strings. The API will then select the most appropriate locale based on availability and preference.
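For example, you can pass an array of locales in order of preference and check which candidates the runtime actually supports. A minimal sketch (the locale choices here are arbitrary, and exact output may vary by environment):

// Preference list: Swiss German first, then German, then English
const chfFormatter = new Intl.NumberFormat(['de-CH', 'de', 'en'], {
  style: 'currency',
  currency: 'CHF',
});
console.log(chfFormatter.format(1234.5)); // e.g. "CHF 1'234.50"

// Ask which of your candidate locales the runtime supports
console.log(Intl.NumberFormat.supportedLocalesOf(['de-CH', 'tlh']));
// e.g. ["de-CH"]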
The Intl object exposes several constructors, each for a specific formatting task. Let’s delve into the most frequently used ones, along with some powerful, often-overlooked gems.
Intl.DateTimeFormat: Dates And Times, Globally

Formatting dates and times is a classic i18n problem. Should it be MM/DD/YYYY or DD.MM.YYYY? Should the month be a number or a full word? Intl.DateTimeFormat handles all this with ease.
const date = new Date(2025, 5, 27, 14, 30, 0); // June 27, 2025, 2:30 PM (months are zero-indexed)
// Specific locale and options (e.g., long date, short time)
const options = {
weekday: 'long',
year: 'numeric',
month: 'long',
day: 'numeric',
hour: 'numeric',
minute: 'numeric',
timeZoneName: 'shortOffset' // e.g., "GMT+8"
};
console.log(new Intl.DateTimeFormat('en-US', options).format(date));
// "Friday, June 27, 2025 at 2:30 PM GMT+8"
console.log(new Intl.DateTimeFormat('de-DE', options).format(date));
// "Freitag, 27. Juni 2025 um 14:30 GMT+8"
// Using dateStyle and timeStyle for common patterns
console.log(new Intl.DateTimeFormat('en-GB', { dateStyle: 'full', timeStyle: 'short' }).format(date));
// "Friday 27 June 2025 at 14:30"
console.log(new Intl.DateTimeFormat('ja-JP', { dateStyle: 'long', timeStyle: 'short' }).format(date));
// "2025年6月27日 14:30"
The flexibility of options for DateTimeFormat is vast, allowing control over year, month, day, weekday, hour, minute, second, time zone, and more.
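When you need to style or rearrange individual pieces of the output rather than print one string, formatToParts() returns the same localized result as structured data. A quick sketch (output shown for a typical en-US environment):

const dtf = new Intl.DateTimeFormat('en-US', { dateStyle: 'medium' });
console.log(dtf.formatToParts(new Date(2025, 5, 27)));
// [
//   { type: 'month', value: 'Jun' },
//   { type: 'literal', value: ' ' },
//   { type: 'day', value: '27' },
//   { type: 'literal', value: ', ' },
//   { type: 'year', value: '2025' }
// ]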
Intl.NumberFormat: Numbers With Cultural Nuance

Beyond simple decimal places, numbers require careful handling: thousands separators, decimal markers, currency symbols, and percentage signs vary wildly across locales.
const price = 123456.789;
// Currency formatting
console.log(new Intl.NumberFormat('en-US', { style: 'currency', currency: 'USD' }).format(price));
// "$123,456.79" (auto-rounds)
console.log(new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' }).format(price));
// "123.456,79 €"
// Units
console.log(new Intl.NumberFormat('en-US', { style: 'unit', unit: 'meter', unitDisplay: 'long' }).format(100));
// "100 meters"
console.log(new Intl.NumberFormat('fr-FR', { style: 'unit', unit: 'kilogram', unitDisplay: 'short' }).format(5.5));
// "5,5 kg"
Options like minimumFractionDigits, maximumFractionDigits, and notation (e.g., scientific, compact) provide even finer control.
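For instance (outputs from a typical en-US environment):

const big = 123456;
console.log(new Intl.NumberFormat('en-US', { notation: 'compact' }).format(big));
// "123K"
console.log(new Intl.NumberFormat('en-US', { notation: 'scientific' }).format(big));
// "1.235E5"
console.log(new Intl.NumberFormat('en-US', { minimumFractionDigits: 2, maximumFractionDigits: 2 }).format(3.14159));
// "3.14"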
Intl.ListFormat: Natural Language Lists

Presenting lists of items is surprisingly tricky. English uses “and” for conjunction and “or” for disjunction. Many languages have different conjunctions, and some require specific punctuation.
This API simplifies a task that would otherwise require complex conditional logic:
const items = ['apples', 'oranges', 'bananas'];
// Conjunction ("and") list
console.log(new Intl.ListFormat('en-US', { type: 'conjunction' }).format(items));
// "apples, oranges, and bananas"
console.log(new Intl.ListFormat('de-DE', { type: 'conjunction' }).format(items));
// "Äpfel, Orangen und Bananen"
// Disjunction ("or") list
console.log(new Intl.ListFormat('en-US', { type: 'disjunction' }).format(items));
// "apples, oranges, or bananas"
console.log(new Intl.ListFormat('fr-FR', { type: 'disjunction' }).format(items));
// "apples, oranges ou bananas"
Intl.RelativeTimeFormat: Human-Friendly Timestamps

Displaying “2 days ago” or “in 3 months” is common in UI, but localizing these phrases accurately requires extensive data. Intl.RelativeTimeFormat automates this.
const rtf = new Intl.RelativeTimeFormat('en-US', { numeric: 'auto' });
console.log(rtf.format(-1, 'day')); // "yesterday"
console.log(rtf.format(1, 'day')); // "tomorrow"
console.log(rtf.format(-7, 'day')); // "7 days ago"
console.log(rtf.format(3, 'month')); // "in 3 months"
console.log(rtf.format(-2, 'year')); // "2 years ago"
// French example:
const frRtf = new Intl.RelativeTimeFormat('fr-FR', { numeric: 'auto', style: 'long' });
console.log(frRtf.format(-1, 'day')); // "hier"
console.log(frRtf.format(1, 'day')); // "demain"
console.log(frRtf.format(-7, 'day')); // "il y a 7 jours"
console.log(frRtf.format(3, 'month')); // "dans 3 mois"
The numeric: 'always' option would force “1 day ago” instead of “yesterday”.
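For example, with the same unit arguments as above:

const rtfAlways = new Intl.RelativeTimeFormat('en-US', { numeric: 'always' });
console.log(rtfAlways.format(-1, 'day')); // "1 day ago"
console.log(rtfAlways.format(1, 'day')); // "in 1 day"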
Intl.PluralRules: Mastering Pluralization

This is arguably one of the most critical aspects of i18n. Different languages have vastly different pluralization rules (e.g., English has singular/plural, Arabic has zero, one, two, many…). Intl.PluralRules allows you to determine the “plural category” for a given number in a specific locale.
const prEn = new Intl.PluralRules('en-US');
console.log(prEn.select(0)); // "other" (for "0 items")
console.log(prEn.select(1)); // "one" (for "1 item")
console.log(prEn.select(2)); // "other" (for "2 items")
const prAr = new Intl.PluralRules('ar-EG');
console.log(prAr.select(0)); // "zero"
console.log(prAr.select(1)); // "one"
console.log(prAr.select(2)); // "two"
console.log(prAr.select(10)); // "few"
console.log(prAr.select(100)); // "other"
This API doesn’t pluralize text directly, but it provides the essential classification needed to select the correct translation string from your message bundles. For example, if you have message keys like item.one, item.other, you’d use pr.select(count) to pick the right one.
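Here’s a minimal sketch of that selection step; the messages object and its key naming are illustrative, not a standard API:

const messages = {
  'en-US': { 'item.one': '{n} item', 'item.other': '{n} items' },
};

function localizeCount(locale, n) {
  // Map the number to a plural category, e.g. "one" or "other"
  const category = new Intl.PluralRules(locale).select(n);
  const template = messages[locale][`item.${category}`];
  return template.replace('{n}', new Intl.NumberFormat(locale).format(n));
}

console.log(localizeCount('en-US', 1)); // "1 item"
console.log(localizeCount('en-US', 5)); // "5 items"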
Intl.DisplayNames: Localized Names For Everything

Need to display the name of a language, a region, or a script in the user’s preferred language? Intl.DisplayNames is your comprehensive solution.
// Display language names in English
const langNamesEn = new Intl.DisplayNames(['en'], { type: 'language' });
console.log(langNamesEn.of('fr')); // "French"
console.log(langNamesEn.of('es-MX')); // "Mexican Spanish"
// Display language names in French
const langNamesFr = new Intl.DisplayNames(['fr'], { type: 'language' });
console.log(langNamesFr.of('en')); // "anglais"
console.log(langNamesFr.of('zh-Hans')); // "chinois (simplifié)"
// Display region names
const regionNamesEn = new Intl.DisplayNames(['en'], { type: 'region' });
console.log(regionNamesEn.of('US')); // "United States"
console.log(regionNamesEn.of('DE')); // "Germany"
// Display script names
const scriptNamesEn = new Intl.DisplayNames(['en'], { type: 'script' });
console.log(scriptNamesEn.of('Latn')); // "Latin"
console.log(scriptNamesEn.of('Arab')); // "Arabic"
With Intl.DisplayNames, you avoid hardcoding countless mappings for language names, regions, or scripts, keeping your application robust and lean.
You might be wondering about browser compatibility. The good news is that Intl has excellent support across modern browsers. All major browsers (Chrome, Firefox, Safari, Edge) fully support the core functionality discussed (DateTimeFormat, NumberFormat, ListFormat, RelativeTimeFormat, PluralRules, DisplayNames). You can confidently use these APIs without polyfills for the majority of your user base.
The Intl API is a cornerstone of modern web development for a global audience. It empowers front-end developers to deliver highly localized user experiences with minimal effort, leveraging the browser’s built-in, optimized capabilities.
By adopting Intl, you reduce dependencies, shrink bundle sizes, and improve performance, all while ensuring your application respects and adapts to the diverse linguistic and cultural expectations of users worldwide. Stop wrestling with custom formatting logic and embrace this standards-compliant tool!
It’s important to remember that Intl handles the formatting of data. While incredibly powerful, it doesn’t solve every aspect of internationalization. Content translation, bidirectional text (RTL/LTR), locale-specific typography, and deep cultural nuances beyond data formatting still require careful consideration. (I may write about these in the future!) However, for presenting dynamic data accurately and intuitively, Intl is the browser-native answer.
Automating Design Systems: Tips And Resources For Getting Started (Joas Pambou, 2025-08-06)
A design system is more than just a set of colors and buttons. It’s a shared language that helps designers and developers build good products together. At its core, a design system includes tokens (like colors, spacing, fonts), components (such as buttons, forms, navigation), plus the rules and documentation that tie it all together across projects.
If you’ve ever used systems like Google Material Design or Shopify Polaris, for example, then you’ve seen how design systems set clear expectations for structure and behavior, making teamwork smoother and faster. But while design systems promote consistency, keeping everything in sync is the hard part. Update a token in Figma, like a color or spacing value, and that change has to show up in the code, the documentation, and everywhere else it’s used.
The same thing goes for components: when a button’s behavior changes, it needs to update across the whole system. That’s where the right tools and a bit of automation can make the difference. They help reduce repetitive work and keep the system easier to manage as it grows.
In this article, we’ll cover a variety of tools and techniques for syncing tokens, updating components, and keeping docs up to date, showing how automation can make all of it easier.
Let’s start with the basics. Color, typography, spacing, radii, shadows, and all the tiny values that make up your visual language are known as design tokens, and they’re meant to be the single source of truth for the UI. You’ll see them in design software like Figma, in code, in style guides, and in documentation. Smashing Magazine has covered them before in great detail.
The problem is that they often go out of sync, such as when a color or component changes in design but doesn’t get updated in the code. The more your team grows or changes, the more these mismatches show up; not because people aren’t paying attention, but because manual syncing just doesn’t scale. That’s why automating tokens is usually the first thing teams should consider doing when they start building a design system. That way, instead of writing the same color value in Figma and then again in a configuration file, you pull from a shared token source and let that drive both design and development.
There are a few tools that are designed to help make this easier.
Token Studio is a Figma plugin that lets you manage design tokens directly in your file, export them to different formats, and sync them to code.

Specify lets you collect tokens from Figma and push them to different targets, including GitHub repositories, continuous integration pipelines, documentation, and more.
Design-tokens.dev is a helpful reference if you want tips for things like how to structure tokens, format them (e.g., JSON, YAML, and so on), and think about token types.

NamedDesignTokens.guide helps with naming conventions, which is honestly a common pain point, especially when you’re working with a large number of tokens.

Once your tokens are set and connected, you’ll spend way less time fixing inconsistencies. It also gives you a solid base to scale, whether that’s adding themes, switching brands, or even building systems for multiple products.
That’s also when naming really starts to count. If your tokens or components aren’t clearly named, things can get confusing quickly.
Note: Vitaly Friedman’s “How to Name Things” is worth checking out if you’re working with larger systems.
From there, it’s all about components. Tokens define the values, but components are what people actually use, e.g., buttons, inputs, cards, dropdowns — you name it. In a perfect setup, you build a component once and reuse it everywhere. But without structure, it’s easy for things to “drift”: you end up with five versions of the same button, and what’s in code doesn’t match what’s in Figma.
Automation doesn’t replace design, but rather, it connects everything to one source.
The Figma component matches the one in production, the documentation updates when the component changes, and the whole team is pulling from the same library instead of rebuilding their own version. This is where real collaboration happens.
Here are a few tools that help make that happen:
| Tool | What It Does |
|---|---|
| UXPin Merge | Lets you design using real code components. What you prototype is what gets built. |
| Supernova | Helps you publish a design system, sync design and code sources, and keep documentation up-to-date. |
| Zeroheight | Turns your Figma components into a central, browsable, and documented system for your whole team. |
A lot of the work starts right inside your design application. Once your tokens and components are in place, tools like Supernova help you take it further by extracting design data, syncing it across platforms, and generating production-ready code. You don’t need to write custom scripts or use the Figma API to get value from automation; these tools handle most of it for you.
But for teams that want full control, Figma does offer an API. It lets you read files, styles, and components programmatically. The Figma API is REST-based, so it works well with custom scripts and automations. You don’t need a huge setup, just the right pieces. On the development side, teams usually use Node.js or Python to handle the automation.
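Here’s a minimal sketch of that kind of script using Node.js 18+ (with its built-in fetch). The file key and token are placeholders you’d supply from your own Figma setup:

const FIGMA_TOKEN = process.env.FIGMA_TOKEN; // a personal access token
const FILE_KEY = 'your-file-key'; // from the Figma file's URL

async function fetchFigmaFile() {
  const res = await fetch(`https://api.figma.com/v1/files/${FILE_KEY}`, {
    headers: { 'X-Figma-Token': FIGMA_TOKEN },
  });
  if (!res.ok) throw new Error(`Figma API error: ${res.status}`);
  const file = await res.json();
  // The response includes the document tree, styles, and components
  console.log(file.name, Object.keys(file.styles || {}).length, 'styles');
}

fetchFigmaFile();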
You won’t need that level of setup for most use cases, but it’s helpful to know it’s there if your team outgrows no-code tools.
The workflow becomes easier to manage once that’s clear, and you spend less time trying to fix changes or mismatches. When tokens, components, and documentation stay in sync, your team moves faster and spends less time fixing the same issues.
Figma is a collaborative design tool used to create UIs: buttons, layouts, styles, components, everything that makes up the visual language of the product. It’s also where all your design data lives, which includes the tokens we talked about earlier. This data is what we’ll extract and eventually connect to your codebase. But first, you’ll need a setup.
To follow along, you’ll need a Figma account; the free tier works fine.
Once you’re in, you’ll see a home screen that looks something like the following:

From here, it’s time to set up your design tokens. You can either create everything from scratch or use a template from the Figma community to save time. Templates are a great option if you don’t want to build everything yourself. But if you prefer full control, creating your setup totally works too.
There are other ways to get tokens as well. For example, a site like namedesigntokens.guide lets you generate and download tokens in formats like JSON. The only catch is that Figma doesn’t let you import JSON directly, so if you go that route, you’ll need to bring in a middle tool like Specify to bridge that gap. It helps sync tokens between Figma, GitHub, and other places.
For this article, though, we’ll keep it simple and stick with Figma. Pick any design system template from the Figma community to get started; there are plenty to choose from.

Depending on the template you choose, you’ll get a pre-defined set of tokens that includes colors, typography, spacing, components, and more. These templates come in all types: website, e-commerce, portfolio, app UI kits, you name it. For this article, we’ll be using the /Design-System-Template–Community because it includes most of the tokens you’ll need right out of the box. But feel free to pick a different one if you want to try something else.
Once you’ve picked your template, it’s time to download the tokens. We’ll use Supernova, a tool that connects directly to your Figma file and pulls out design tokens, styles, and components. It makes the design-to-code process a lot smoother.
Go to supernova.io and create an account. Once you’re in, you’ll land on a dashboard that looks like this:

To pull in the tokens, head over to the Data Sources section in Supernova and choose Figma from the list of available sources. (You’ll also see other options like Storybook or Figma variables, but we’re focusing on Figma.) Next, click on Connect a new file, paste the link to your Figma template, and click Import.

Supernova will load the full design system from your template. From your dashboard, you’ll now be able to see all the tokens.

Design tokens are great inside Figma, but the real value shows when you turn them into code. That’s how the developers on your team actually get to use them.
Here’s the problem: Many teams default to copying values manually for things like color, spacing, and typography. But when you make a change to them in Figma, the code is instantly out of sync. That’s why automating this process is such a big win.
Instead of rewriting the same theme setup for every project, you generate it from your tokens, automatically translating designs into dev-ready assets, and keep everything in sync from one source of truth.
Now that we’ve got all our tokens in Supernova, let’s turn them into code. First, go to the Code Automation tab, then click New Pipeline. You’ll see different options depending on what you want to generate: React Native, CSS-in-JS, Flutter, Godot, and a few others.
Let’s go with the CSS-in-JS option for the sake of demonstration:

After that, you’ll land on a setup screen with three sections: Data, Configuration, and Delivery.
Data: Here, you can pick a theme. At first, it might only give you “Black” as the option; you can select that or leave it empty. It really doesn’t matter for the time being.

Configuration: This is where you control how the code is structured. I picked PascalCase for how token names are formatted. You can also update how things like spacing, colors, or font styles are grouped and saved.

Delivery: This is where you choose how you want the output delivered. I chose “Build Only”, which builds the code for you to download.

Once you’re done, click Save. The pipeline is created, and you’ll see it listed in your dashboard. From here, you can download your token code, which is already generated.
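The shape of the generated file depends on the configuration you chose; a CSS-in-JS build might look roughly like this (illustrative only, with made-up token names):

export const Tokens = {
  ColorPrimary: '#0d6efd',
  ColorTextDefault: '#212529',
  SpacingMd: '16px',
  FontSizeBody: '1rem',
};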
So, what’s the point of documentation in a design system?
You can think of it as the instruction manual for your team. It explains what each token or component is, why it exists, and how to use it. Designers, developers, and anyone else on your team can stay on the same page — no guessing, no back-and-forth. Just clear context.
Let’s continue from where we stopped. Supernova is capable of handling your documentation. Head over to the Documentation tab. This is where you can start editing everything about your design system docs, all from the same place.
From here, you can edit pages, structure sections, and describe how your tokens and components should be used.
You’re building the documentation inside the same tool where your tokens live. In other words, there’s no jumping between tools and no additional setup. That’s where the automation kicks in. You edit once, and your docs stay synced with your design source. It all stays in one environment.

Once you’re done, click Publish. You’ll be presented with a new window asking you to sign in, and after that, you’ll be able to access your live documentation site.
Automation is great. It saves hours of manual work and keeps your design system tight across design and code. The trick is knowing when to automate and how to make sure it keeps working over time. You don’t need to automate everything right away. But if you’re doing the same thing over and over again, that’s a kind of red flag.
A few signs that it’s time to consider automation: you’re copying the same values between design and code by hand, fixing the same mismatches repeatedly, or rewriting the same documentation after every change. Once you decide to automate, there are three steps you need to consider. Let’s look at each one.
If your pipeline depends on design tools, like Figma, or platforms, like Supernova, you’ll want to know when changes are made and evaluate how they impact your work, because even small updates can quietly affect your exports.
It’s a good idea to check Figma’s API changelog now and then, especially if something feels off with your token syncing. They often update how variables and components are structured, and that can impact your pipeline. There’s also an RSS feed for product updates.
The same goes for Supernova’s product updates. They regularly roll out improvements that might tweak how your tokens are handled or exported. If you’re using open-source tools like Style Dictionary, keeping an eye on the GitHub repo (particularly the Issues tab) can save you from debugging weird token name changes later.
All of this isn’t about staying glued to release notes, but having a system to check if something suddenly stops working. That way, you’ll catch things before they reach production.
A common trap teams fall into is trying to automate everything in one big run: colors, spacing, themes, components, and docs, all processed in a single click. It sounds convenient, but it’s hard to maintain, and even harder to debug.
It’s much more manageable to split your automation into pieces. For example, having a single workflow that handles your core design tokens (e.g., colors, spacing, and font sizes), another for theme variations (e.g., light and dark themes), and one more for component mapping (e.g., buttons, inputs, and cards). This way, if your team changes how spacing tokens are named in Figma, you only need to update one part of the workflow, not the entire system. It’s also easier to test and reuse smaller steps.
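As an illustration, here’s what one of those focused pipelines might look like with the open-source Style Dictionary tool mentioned earlier (v3-style API; the folder layout and file names are assumptions):

const StyleDictionary = require('style-dictionary');

// Pipeline 1: core tokens only (colors, spacing, font sizes)
StyleDictionary.extend({
  source: ['tokens/core/**/*.json'],
  platforms: {
    css: {
      transformGroup: 'css',
      buildPath: 'build/css/',
      files: [{ destination: 'core.css', format: 'css/variables' }],
    },
  },
}).buildAllPlatforms();
// A second, separate config would handle themes; a third, component mappings.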
Even if everything runs fine, always take a moment to check the exported output. It doesn’t need to be complicated. A few key things: make sure the values match what’s in Figma, and check that token names follow your conventions. If you see a mix like PrimaryColor next to ColorText, that’s a red flag. To catch issues early, it helps to run tools like ESLint or Stylelint right after the pipeline completes. They’ll flag odd syntax or naming problems before things get shipped.
Once your automation is stable, there’s a next layer that can boost your workflow: AI. It’s not just for writing code or generating mockups, but for helping with the small, repetitive things that eat up time in design systems. When used right, AI can assist without replacing your control over the system.
Here’s where it might fit into your workflow:
When you’re dealing with hundreds of tokens, naming them clearly and consistently is a real challenge. Some AI tools can help by suggesting clean, readable names for your tokens or components based on patterns in your design. It’s not perfect, but it’s a good way to kickstart naming, especially for large teams.
AI can also spot repeated styles or usage patterns across your design files. If multiple buttons or cards share similar spacing, shadows, or typography, tools powered by AI can group or suggest components for systemization even before a human notices.
Instead of writing everything from scratch, AI can generate first drafts of documentation based on your tokens, styles, and usage. You still need to review and refine, but it takes away the blank-page problem and saves hours.
A number of tools already bring AI into the design and development space in practical ways.
This article is not about achieving complete automation in the technical sense, but more about using smart tools to streamline the menial and manual aspects of working with design systems. Exporting tokens, generating docs, and syncing design with code can be automated, making your process quicker and more reliable with the right setup.
Instead of rebuilding everything from scratch every time, you now have a way to keep things consistent, stay organized, and save time.
UX Job Interview Helpers (Vitaly Friedman, 2025-08-05)
When talking about job interviews for a UX position, we often discuss how to leave an incredible impression and how to negotiate the right salary. But it’s only one part of the story. The other part is to be prepared, to ask questions, and to listen carefully.
Below, I’ve put together a few useful resources on UX job interviews — from job boards to Notion templates and practical guides. I hope you or your colleagues will find it helpful.
As you are preparing for that interview, get ready with the Design Interview Kit (Figma), a helpful practical guide that covers how to craft case studies, solve design challenges, write cover letters, present your portfolio, and negotiate your offer. Kindly shared by Oliver Engel.

The Product Designer’s (Job) Interview Playbook (PDF) is a practical little guide for designers through each interview phase, with helpful tips and strategies on things to keep in mind, talking points, questions to ask, red flags to watch out for, and how to tell a compelling story about yourself and your work. Kindly put together by Meghan Logan.

From my side, I can only wholeheartedly recommend not only speaking about your design process. Tell stories about the impact that your design work has produced. Frame your design work as an enabler of business goals and user needs. And include insights about the impact you’ve produced — on business goals, processes, team culture, planning, estimates, and testing.
Also, be very clear about the position that you are applying for. In many companies, titles do matter. There are vast differences in responsibilities and salaries between various levels for designers, so if you see yourself as a senior, review whether it actually reflects in the position.
Catt Small’s Guide To Successful UX Job Interviews is a wonderful practical series on how to build a referral pipeline, apply for an opening, prepare for screening and interviews, present your work, and manage salary expectations. You can also download a Notion template.

In her wonderful article, Nati Asher has suggested many useful questions to ask in a job interview when you are applying as a UX candidate. I’ve taken the liberty of revising some of them and added a few more questions that might be worth considering for your next job interview.

Before a job interview, have your questions ready. Not only will they convey a message that you care about the process and the culture, but also that you understand what is required to be successful. And this fine detail might go a long way.
Interviewers closer to business will expect you to present examples of your work using the STAR method (Situation — Task — Action — Result), and might be utterly confused if you delve into all the fine details of your ideation process or the choice of UX methods you’ve used.
As Meghan suggests, the interview is all about how your skills add value to the problem the company is currently solving. So ask about the current problems and tasks. Interview the person who interviews you, too — but also explain who you are, your focus areas, your passion points, and how you and your expertise would fit in a product and in the organization.
A final note on my end: never take a rejection personally. Very often, the reasons you are given for rejection are only a small part of a much larger picture — and have almost nothing to do with you. It might be that a job description wasn’t quite accurate, or the company is undergoing restructuring, or the finances are too tight after all.
Don’t despair and keep going. Write down your expectations. Job titles matter: be deliberate about them and your level of seniority. Prepare good references. Have your questions ready for that job interview. As Catt Small says, “once you have a foot in the door, you’ve got to kick it wide open”.
You are a bright shining star — don’t you ever forget that.
You can find more details on design patterns and UX in Smart Interface Design Patterns, our 15-hour video course with hundreds of practical examples from real-life projects — with a live UX training later this year. Everything from mega-dropdowns to complex enterprise tables — with 5 new segments added every year. Jump to a free preview. Use code BIRDIE to save 15% off.

Designing Better UX For Left-Handed People (Vitaly Friedman, 2025-07-25)
Many products — digital and physical — are focused on “average” users — a statistical representation of the user base, which often overlooks or dismisses anything that deviates from that average, or happens to be an edge case. But people are never edge cases, and “average” users don’t really exist. We must be deliberate and intentional to ensure that our products reflect that.
Today, roughly 10% of people are left-handed. Yet most products — digital and physical — aren’t designed with them in mind. And there is rarely a conversation about how a particular digital experience would work better for their needs. So how would it adapt, and what are the issues we should keep in mind? Well, let’s explore what it means for us.

This article is part of our ongoing series on UX. You can find more details on design patterns and UX strategy in Smart Interface Design Patterns 🍣 — with live UX training coming up soon. Jump to table of contents.
It’s easy to assume that left-handed people are usually left-handed users. However, that’s not necessarily the case. Because most products are designed with right-handed use in mind, many left-handed people have to use their right hand to navigate the physical world.
From very early childhood, left-handed people have to rely on their right hand to use tools and appliances like scissors, openers, fridges, and so on. That’s why left-handed people tend to be more ambidextrous than right-handed people, sometimes using different hands for different tasks, and sometimes using different hands for the same tasks interchangeably. However, only about 1% of people use both hands equally well.

In the same way, right-handed people aren’t necessarily right-handed users. It’s common to use a mobile device in either hand, or both, perhaps with a preference for one. But when it comes to writing, the preference is stronger.
Because left-handed users are in the minority, there is less demand for left-handed products, and so typically they are more expensive, and also more difficult to find. Troubles often emerge with seemingly simple tools — scissors, can openers, musical instruments, rulers, microwaves and bank pens.

For example, most scissors are designed with the top blade positioned for right-handed use, which makes cutting difficult and less precise. And in microwaves, buttons and interfaces are nearly always on the right, making left-handed use more difficult.
Now, with digital products, most left-handed people tend to adapt to right-handed tools, which they use daily. Unsurprisingly, many use their right hand to navigate the mouse. However, it’s often quite different on mobile, where the left hand is frequently preferred.
As Ruben Babu writes, we shouldn’t design a fire extinguisher that can’t be used by both hands. Think pull up and pull down, rather than swipe left or right. Minimize the distance to travel with the mouse. And when in doubt, align to the center.

A simple way to test a mobile UI is the opposite-handed UX test: try to complete key flows with your non-dominant hand to discover UX shortcomings. For physical products, you might try the oil test. It might be more effective than you think.
Our aim isn’t to degrade the UX of right-handed users by meeting the needs of left-handed users. The aim is to create an accessible experience for everyone. Providing a better experience for left-handed people also benefits right-handed people who have a temporary arm disability.
And that’s an often-repeated but also often-overlooked universal principle of usability: better accessibility is better for everyone, even if it might feel that it doesn’t benefit you directly at the moment.
Handling JavaScript Event Listeners With Parameters (Amejimaobari Ollornwi, 2025-07-21)
JavaScript event listeners are very important, as they exist in almost every web application that requires interactivity. As common as they are, it is also essential for them to be managed properly. Improperly managed event listeners can lead to memory leaks and can sometimes cause performance issues in extreme cases.
Here’s the real problem: JavaScript event listeners are often not removed after they are added. And most of the time, they don’t require parameters — except in rare cases, which make them a little trickier to handle.
A common scenario where you may need to use parameters with event handlers is when you have a dynamic list of tasks, where each task in the list has a “Delete” button attached to an event handler that uses the task’s ID as a parameter to remove the task. In a situation like this, it is a good idea to remove the event listener once the task has been completed to ensure that the deleted element can be successfully cleaned up, a process known as garbage collection.
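Here’s a minimal sketch of that scenario; the markup, IDs, and helper name are assumptions for illustration:

function attachDeleteHandler(button, taskId) {
  function onDelete() {
    // Remove the task element, then detach the listener so the
    // button and handler can be garbage-collected
    const task = document.getElementById(taskId);
    if (task) task.remove();
    button.removeEventListener('click', onDelete);
  }
  button.addEventListener('click', onDelete);
}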
A very common mistake when adding parameters to event handlers is calling the function with its parameters inside the addEventListener() method. This is what I mean:
button.addEventListener('click', myFunction(param1, param2));
The browser responds to this line by immediately calling the function, irrespective of whether or not the click event has happened. In other words, the function is invoked right away instead of being deferred, so it never fires when the click event actually occurs.
You may also receive the following console error in some cases:

Console error: “addEventListener on EventTarget: parameter is not of type Object.”
This error makes sense because the second parameter of the addEventListener method can only accept a JavaScript function, an object with a handleEvent() method, or simply null. A quick and easy way to avoid this error is by changing the second parameter of the addEventListener method to an arrow or anonymous function.
button.addEventListener('click', (event) => {
myFunction(event, param1, param2); // Runs on click
});
The only hiccup with using arrow and anonymous functions is that they cannot be removed with the traditional removeEventListener() method; you will have to make use of AbortController, which may be overkill for simple cases. AbortController shines when you have multiple event listeners to remove at once.
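Here’s a quick sketch of that multi-listener case; the handlers are hypothetical placeholders:

const controller = new AbortController();
const { signal } = controller;
const button = document.querySelector('#myButton');

const onClick = () => console.log('clicked');
const onResize = () => console.log('resized');

button.addEventListener('click', onClick, { signal });
window.addEventListener('resize', onResize, { signal });

// One call detaches every listener registered with this signal
controller.abort();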
For simple cases where you have just one or two event listeners to remove, the removeEventListener() method still proves useful. However, in order to make use of it, you’ll need to store your function as a reference to the listener.
There are several ways to include parameters with event handlers. However, for the purpose of this demonstration, we are going to constrain our focus to the following two:
Using arrow and anonymous functions is the fastest and easiest way to get the job done.
To add an event handler with parameters using arrow and anonymous functions, we’ll first need to call the function we’re going to create inside the arrow function attached to the event listener:
const button = document.querySelector("#myButton");
button.addEventListener("click", (event) => {
handleClick(event, "hello", "world");
});
After that, we can create the function with parameters:
function handleClick(event, param1, param2) {
console.log(param1, param2, event.type, event.target);
}
Note that with this method, removing the event listener requires the AbortController. To remove the event listener, we create a new AbortController object and then retrieve the AbortSignal object from it:
const controller = new AbortController();
const { signal } = controller;
Next, we can pass the signal from the controller as an option in the addEventListener() method:
button.addEventListener("click", (event) => {
handleClick(event, "hello", "world");
}, { signal });
Now we can remove the event listener by calling AbortController.abort():
controller.abort();
Closures in JavaScript are another feature that can help us with event handlers. Remember the mistake that produced a type error? That mistake can also be corrected with closures. Specifically, with closures, a function can access variables from its outer scope.
In other words, we can access the parameters we need in the event handler from the outer function:
function createHandler(message, number) {
// Event handler
return function (event) {
console.log(`${message} ${number} - Clicked element:`, event.target);
};
}
const button = document.querySelector("#myButton");
button.addEventListener("click", createHandler("Hello, world!", 1));
This establishes a function that returns another function. The outer function is called in the second argument position of the addEventListener() method, so the inner function it returns becomes the event handler. And with the power of closures, the parameters from the outer function remain available to the inner function.
Notice how the event object is made available to the inner function. This is because the inner function is what is being attached as the event handler. The event object is passed to the function automatically because it’s the event handler.
To remove the event listener, we can use the AbortController like we did before. However, this time, let’s see how we can do that using the removeEventListener() method instead.
In order for the removeEventListener method to work, a reference to the createHandler function needs to be stored and used in the addEventListener method:
function createHandler(message, number) {
return function (event) {
console.log(`${message} ${number} - Clicked element:`, event.target);
};
}
const handler = createHandler("Hello, world!", 1);
button.addEventListener("click", handler);
Now, the event listener can be removed like this:
button.removeEventListener("click", handler);
It is good practice to always remove event listeners whenever they are no longer needed to prevent memory leaks. Most times, event handlers do not require parameters; however, in rare cases, they do. Using JavaScript features like closures, AbortController, and removeEventListener, handling parameters with event handlers is both possible and well-supported.
Why Non-Native Content Designers Improve Global UX (Oleksii Tkachenko, 2025-07-18)
A few years ago, I was in a design review at a fintech company, polishing the expense management flows. It was a routine session where we reviewed the logic behind content and design decisions.
While looking over the statuses for submitted expenses, I noticed a label saying ‘In approval’. I paused, re-read it, and asked myself:
“Where is it? Are the results in? Where can I find them? Are they sending me to the app section called “Approval”?”
This tiny label made me question what was happening with my money, and this feeling of uncertainty was quite anxiety-inducing.
My team, all native English speakers, did not flinch, even for a second, and moved forward to discuss other parts of the flow. I was the only non-native speaker in the room, and while the label made perfect sense to them, it still felt off to me.
After a quick discussion, we landed on ‘Pending approval’ — the simplest and most widely recognised option internationally. More importantly, this wording makes it clear that there’s an approval process, and it hasn’t taken place yet. There’s no need to go anywhere to do it.
Some might call it nitpicking, but that was exactly the moment I realised how invisible — yet powerful — the non-native speaker’s perspective can be.
In a reality where user testing budgets aren’t unlimited, designing with familiar language patterns from the start helps you prevent costly confusions in the user journey.
“
Those same confusions often lead to:
Global products are often designed with English as their primary language. This seems logical, but here’s the catch:
Roughly 75% of English-speaking users are not native speakers, which means 3 out of every 4 users.
Native speakers often write on instinct, which works much like autopilot. This can often lead to overconfidence in content that, in reality, is too culturally specific, vague, or complex. And that content may not be understood by 3 in 4 people who read it.
If your team shares the same native language, content clarity remains assumed by default rather than proven through pressure testing.
The price for that is the accessibility of your product. A study indexed by the National Library of Medicine found that US adults who were proficient in English but did not use it as their primary language were significantly less likely to be insured, even when provided with the same level of service as everyone else.
In other words, they did not finish the process of securing a healthcare provider — a process that’s vital to their well-being, in part, due to unclear or inaccessible communication.
If people abandon the process of getting something as vital as healthcare insurance, it’s easy to imagine them dropping out during checkout, account setup, or app onboarding.

Non-native content designers, by contrast, do not write on autopilot. Because of their experience learning English, they’re much more likely to tune into nuances, complexity, and cultural exclusions that natives often overlook. That’s the key to designing for everyone rather than for just 1 in 4 users.
When a non-native speaker has to pause, re-read something, or question the meaning of what’s written, they quickly identify it as a friction point in the user experience.
Why it’s important: Every extra second users have to spend understanding your content makes them more likely to abandon the task. This is a high price that companies pay for not prioritising clarity.
Cognitive load is not just about complex sentences but also about the speed. There’s plenty of research confirming that non-native speakers read more slowly than native speakers. This is especially important when you work on the visibility of system status — time-sensitive content that the user needs to scan and understand quickly.
One example you can experience firsthand is an ATM displaying a series of updates and instructions. Even when the messages are quite similar, it’s overwhelming to realise you missed one before you could finish reading it. This kind of rapid-fire update flow increases frustration and the chance of errors.

They tend to review and rewrite things more often to find the easiest way to communicate the message. What a native speaker may consider clear enough might be dense or difficult for a non-native to understand.
Why it’s important: Simple content better scales across countries, languages, and cultures.
When things do not make sense, non-native speakers challenge them. Besides the idioms and other obvious traps, native speakers tend to fall into considering their life experience to be shared with most English-speaking users.
Cultural differences might even exist within one globally shared language. Have you tried saying ‘soccer’ instead of ‘football’ in a conversation with someone from the UK? These details may not only cause confusion but also upset people.
Why it’s important: Making sure your product is free from culture-specific references makes your product more inclusive and safeguards you from alienating your users.
Being a non-native speaker themselves, they have experience with products that do not speak clearly to them. They’ve been in the global user’s shoes and know how it impacts the experience.
Why it’s important: Empathy is a key driver towards design decisions that take into account the diverse cultural and linguistic background of the users.
Your product won’t become better overnight simply because you read an inspiring article telling you that you need to have a more diverse team. I get it. So here are concrete changes that you can make in your design workflows and hiring routines to make sure your content is accessible globally.
When you launch a new feature or product, it’s a standard practice to run QA sessions to review visuals and interactions. When your team does not include the non-native perspective, the content is usually overlooked and considered fine as long as it’s grammatically correct.
I know, having a dedicated localisation team to pressure-test your content for clarity is a privilege, but you can always start small.
At one of my previous companies, we established a ‘clarity heroes council’ — a small team of non-native English speakers with diverse cultural and linguistic backgrounds. During our reviews, they often asked questions that surprised us and highlighted where clarity was missing:
These questions flag potential problems and help you save both money and reputation by avoiding thousands of customer service tickets.
Even if your product does not have major releases regularly, it accumulates small changes over time. They’re often plugged in as fixes or small improvements, and can be easily overlooked from a QA perspective.
A good start will be a regular look at the flows that are critical to your business metrics: onboarding, checkout, and so on. Fence off some time for your team quarterly or even annually, depending on your product size, to come together and check whether your key content pieces serve the global audience well.
Usually, a proper review is conducted by a team: a product designer, a content designer, an engineer, a product manager, and a researcher. The idea is to go over the flows, research insights, and customer feedback together. For that, having a non-native speaker on the audit task force will be essential.
If you’ve never done an audit before, try this template as it covers everything you need to start.
If you haven’t done it already, make sure your voice & tone documentation includes details about the level of English your company is catering to.
This might mean working with the brand team to find ways to make sure your brand voice comes through to all users without sacrificing clarity and comprehension. Use examples and showcase the difference between sounding smart or playful vs sounding clear.
Leaning too much towards brand personality is where cultural differences usually shine through. As a user, you might’ve seen it many times. Here’s a banking app that wanted to seem relaxed and relatable by introducing ‘Dang it’ as the only call-to-action on the screen.

However, users with different linguistic backgrounds might not be familiar with this expression. Worse, they might see it as an action, leaving them unsure of what will actually happen after tapping it.
Considering how much content is generated with AI today, your guidelines have to account for both tone and clarity. This way, when you feed these requirements to the AI, you’ll see the output that will not just be grammatically correct but also easy to understand.
Basic heuristic principles are often documented as a part of overarching guidelines to help UX teams do a better job. The Nielsen Norman Group usability heuristics cover the essential ones, but it doesn’t mean you shouldn’t introduce your own. To complement this list, add this principle:
Aim for global understanding: Content and design should communicate clearly to any user regardless of cultural or language background.
You can suggest criteria to ensure it’s clear how to evaluate this:
This one is often overlooked, but collaboration between the research team and non-native speaking writers is super helpful. If your research involves a survey or interview, they can help you double-check whether there is complex or ambiguous language used in the questions unintentionally.
In a study published in the Journal of Usability Studies, 37% of non-native speakers could not answer a question that included a word they did not recognise or could not recall the meaning of. The question asked whether they found the system “cumbersome to use”, and building on such unreliable data and measurements would have a negative impact on the UX of your product.
Another study, published in the Journal of User Experience, highlights how important clarity is in surveys. While most people in the study interpreted the question “How do you feel about … ?” as “What’s your opinion on …?”, some took it literally and proceeded to describe their emotions instead.
This means that even familiar terms can be misinterpreted. To get precise research results, it’s worth defining key terms and concepts to ensure common understanding with participants.
At Klarna, we often ran into a challenge of inconsistent translation for key terms. A well-defined English term could end up having from three to five different versions in Italian or German. Sometimes, even the same features or app sections could be referred to differently depending on the market — this led to user confusion.
To address this, we introduced a shared term base — a controlled vocabulary that included:
Importantly, the term selection was dictated by user research, not by assumption or personal preferences of the team.

If you’re unsure where to begin, use this product content vocabulary template for Notion. Duplicate it for free and start adding your terms.
We used a similar setup. Our new glossary was shared internally across teams, from product to customer service. Results? Support tickets related to unclear language in the UI (or directions in the user journey) dropped by 18%. This included tasks like finding instructions on how to make a payment (especially with the least popular payment methods, like bank transfer), locating the late fee details, or checking whether it’s possible to postpone a payment. And yes, all of these features were available, and the team believed they were quite easy to find.
A glossary like this can live as an add-on to your guidelines. This way, you will be able to quickly get new joiners up to speed, keep product copy ready for localisation, and defend your decisions with stakeholders.
‘Looking for a native speaker’ still appears in job listings for UX writers and content designers. There’s no point in assuming it’s intentional discrimination. It’s a misunderstanding that stems from not fully accepting that our job is more about building the user experience than writing text that is grammatically correct.
Here are a few tips to make sure you hire the best talent and treat your applicants fairly. First, drop the ‘native speaker’ requirement from the listing. Instead, focus on the core part of our job: ask for a ‘clear communicator’, an ‘ability to simplify’, or ‘experience writing for a global audience’.
Over the years, there have been plenty of studies confirming that the accent bias is real — people having an unusual or foreign accent are considered less hirable. While some may argue that it can have an impact on the efficiency of internal communications, it’s not enough to justify the reason to overlook the good work of the applicant.
My personal experience with accent is that it mostly depends on the situation you’re in. When I’m in a friendly environment and don’t feel anxious, my English flows much better because I don’t overthink how I sound. Ironically, when I’m in a room with my team full of British native speakers, I sometimes default to my Slavic accent. The question is: does it make my content design expertise or writing any worse? Not in the slightest.
Therefore, make sure you judge the portfolios, the ideas behind the interview answers, and whiteboard challenge presentations, instead of focusing on whether the candidate’s accent implies that they might not be good writers.
Non-native content designers do not have a negative impact on your team’s writing. They sharpen it by helping you look at your content through the lens of your real user base. In the globalised world, linguistic purity no longer benefits your product’s user experience.
Try these practical steps and leverage the non-native speaking lens of your content designers to design better international products.
Unmasking The Magic: The Wizard Of Oz Method For UX Research (Victor Yocco, 2025-07-10)
New technologies and innovative concepts frequently enter the product development lifecycle, promising to revolutionize user experiences. However, even the most ingenious ideas risk failure without a fundamental grasp of user interaction with these new experiences.
Consider the plight of the Nintendo Power Glove. Despite being a commercial success (selling over 1 million units), its release in late 1989 was followed by its discontinuation less than a full year later in 1990. The two games created solely for the Power Glove sold poorly, and there was little use for the Glove with Nintendo’s already popular traditional console games.
A large part of the failure was due to audience reaction once the product (which allegedly was developed in 8 weeks) shipped: it was cumbersome and unintuitive. Users found syncing the glove to the moves in specific games extremely frustrating, as it required coding the moves into the glove’s preset move buttons and then remembering which buttons would generate which move. With the more modern success of Nintendo’s Wii and other movement-based controller consoles and games, we can see the Power Glove was a concept ahead of its time.

If Power Glove’s developers wanted to conduct effective research prior to building it out, they would have needed to look beyond traditional methods, such as surveys and interviews, to understand how a user might truly interact with the Glove. How could this have been done without a functional prototype and slowing down the overall development process?
Enter the Wizard of Oz method, a potent tool for bridging the chasm between abstract concepts and tangible user understanding, as one potential option. This technique simulates a fully functional system, yet a human operator (“the Wizard”) discreetly orchestrates the experience. This allows researchers to gather authentic user reactions and insights without the prerequisite of a fully built product.
The Wizard of Oz (WOZ) method is named in tribute to the similarly named book by L. Frank Baum. In the book, the Wizard is simply a man hidden behind a curtain, manipulating the reality of those who travel the land of Oz. Dorothy, the protagonist, exposes the Wizard for what he is: essentially an illusionist, a con man deceiving those who believe him to be omnipotent. Similarly, WOZ takes technologies that may or may not currently exist and emulates them in a way that should convince a research participant they are using an existing system or tool.
WOZ enables the exploration of user needs, validation of nascent concepts, and mitigation of development risks, particularly with complex or emerging technologies.
The product team in our example above might have used this method to have users simulate wearing the glove, programming moves into it, and playing games without needing a fully functional system. This could have uncovered the illogic of asking laypeople to code their own hardware to respond to a game, shown the frustration of recoding the device when swapping games, and exposed the cumbersome layout of the controls on the physical device (even if they’d used a cardboard glove with simulated controls drawn in crayon in the appropriate locations).
Jeff Kelley credits himself (PDF) with coining the term WOZ method in 1980 to describe the research method he employed in his dissertation. However, Paula Roe credits Don Norman and Allan Munro with using the method as early as 1973 to test an automated airport travel assistant. Regardless of who originated the method, both parties agree that it gained prominence when IBM later used it to conduct studies on a speech-to-text tool known as The Listening Typewriter.

In this article, I’ll cover the core principles of the WOZ method, explore advanced applications taken from practical experience, and demonstrate its unique value through real-world examples, including its application to the field of agentic AI. UX practitioners can use the WOZ method as another tool to unlock user insights and craft human-centered products and experiences.
The WOZ method operates on the premise that users believe they are interacting with an autonomous system while a human wizard manages the system’s responses behind the scenes. This individual, often positioned remotely (or off-screen), interprets user inputs and generates outputs that mimic the anticipated functionality of the experience.
A successful WOZ study involves several key roles: the wizard, who simulates the system’s responses behind the scenes; a facilitator, who guides the participant through the session; the participant themselves; and, ideally, one or more observers taking notes.
Creating a convincing illusion is key to the success of a WOZ study. This necessitates careful planning of the research environment and the tasks users will undertake. Consider a study evaluating a new voice command system for smart home devices. The research setup might involve a physical mock-up of a smart speaker and predefined scenarios like “Play my favorite music” or “Dim the living room lights.” The wizard, listening remotely, would then trigger the appropriate responses (e.g., playing a song, verbally confirming the lights are dimmed).
Or perhaps it is a screen-based experience testing a new AI-powered chatbot: users enter commands into a text box while another member of the product team provides responses in real time using a tool like Figma/FigJam, Miro, Mural, or other cloud-based software that allows multiple users to collaborate simultaneously (the author has no affiliation with any of the mentioned products).
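To make that setup concrete, here is a minimal sketch of a screen-based relay in TypeScript. Everything in it, from the `WozSession` name to the randomized delay, is illustrative rather than any real library; the point is simply that the participant’s input is surfaced to a human wizard, and the wizard’s hand-typed reply is delivered back with machine-like timing.

```ts
// Minimal sketch of a screen-based Wizard of Oz chat relay (hypothetical names).
type Turn = { role: "participant" | "wizard"; text: string; at: number };

class WozSession {
  private log: Turn[] = [];

  // Record what the participant typed; in a real study this would be
  // pushed to the wizard's screen (e.g., a shared board or chat backend).
  participantSays(text: string): void {
    this.log.push({ role: "participant", text, at: Date.now() });
    console.log(`[to wizard] ${text}`);
  }

  // Deliver the wizard's hand-typed reply after a randomized delay so the
  // response cadence feels machine-generated rather than human.
  async wizardReplies(text: string): Promise<void> {
    const delayMs = 800 + Math.random() * 1200; // simulated "processing" time
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    this.log.push({ role: "wizard", text, at: Date.now() });
    console.log(`[to participant] ${text}`);
  }

  // The raw transcript doubles as study data for analysis and debriefing.
  transcript(): Turn[] {
    return this.log;
  }
}

// Example run
(async () => {
  const session = new WozSession();
  session.participantSays("Draft an onboarding plan for a new marketing hire.");
  await session.wizardReplies("Here is a draft onboarding plan: week 1 ...");
  console.log(session.transcript());
})();
```

The randomized delay matters more than it looks: replies that land instantly, or with obviously human hesitation, are one of the most common ways the illusion breaks.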
Maintaining the illusion of a genuine system requires consistent, believable response times, outputs that stay within the capabilities the “system” claims to have, and a wizard who never breaks character.
Transparency is crucial, even in a method that involves a degree of deception. Participants should always be debriefed after the session, with a clear explanation of the Wizard of Oz technique and the reasons for its use. Data privacy must be maintained as with any study, and participants should feel comfortable and respected throughout the process.
The WOZ method occupies a unique space within the UX research toolkit.
This method proves particularly valuable when exploring truly novel interactions or complex systems where building a fully functional prototype is premature or resource-intensive. It allows researchers to answer fundamental questions about user needs and expectations before committing significant development efforts.
Let’s move beyond the foundational aspects of the WOZ method and explore some more advanced techniques and critical considerations that can elevate its effectiveness.
It’s a fair question to ask whether WOZ is truly a time-saver compared to even cruder prototyping methods like paper prototypes or static digital mockups.
While paper prototypes are incredibly fast to create and test for basic flow and layout, they fundamentally lack dynamic responsiveness. Static mockups offer visual fidelity but cannot simulate complex interactions or personalized outputs.
The true time-saving advantage of WOZ emerges when testing novel, complex, or AI-driven concepts. It allows researchers to evaluate genuine user interactions and mental models in a seemingly live environment, collecting rich behavioral data that simpler prototypes cannot. This fidelity in simulating a dynamic experience, even with a human behind the curtain, often reveals critical usability or conceptual flaws far earlier and more comprehensively than purely static representations, ultimately preventing costly rework down the development pipeline.
While the core principle of the WOZ method is straightforward, its true power lies in nuanced application and thoughtful execution. Seasoned practitioners may leverage several advanced techniques to extract richer insights and address more complex research questions.
The WOZ method isn’t necessarily a one-off endeavor. Employing it in iterative cycles can yield significant benefits. Initial rounds might focus on broad concept validation and identifying fundamental user reactions. Subsequent iterations can then refine the simulated functionality based on previous findings.
For instance, after an initial study reveals user confusion with a particular interaction flow, the simulation can be adjusted, and a follow-up study can assess the impact of those changes. This iterative approach allows for a more agile and user-centered exploration of complex experiences.
Simulating complex systems can be difficult for one wizard. Breaking complex interactions into smaller, manageable steps is crucial. Consider researching a multi-step onboarding process for a new software application. Instead of one person trying to simulate the entire flow, different aspects could be handled sequentially or even by multiple team members coordinating their responses.
Clear communication protocols and well-defined responsibilities are essential in such scenarios to maintain a seamless user experience.
While qualitative observation is a cornerstone of the WOZ method, defining clear metrics can add a layer of rigor to the findings. These metrics should match research goals. For example, if the goal is to assess the intuitiveness of a new navigation pattern, you might track the number of times users express confusion or the time it takes them to complete specific tasks.
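As a rough illustration of what that instrumentation might look like, the sketch below logs task durations alongside observer-noted confusion events. The names (`MetricsLog`, `noteConfusion`) are hypothetical, not from any particular toolkit.

```ts
// Hypothetical sketch: simple per-task metrics for a WOZ session.
type TaskMetric = {
  task: string;
  startedAt: number;
  finishedAt?: number;
  confusionEvents: number; // hesitations, missteps, "what does this do?" remarks
};

class MetricsLog {
  private tasks = new Map<string, TaskMetric>();

  startTask(task: string): void {
    this.tasks.set(task, { task, startedAt: Date.now(), confusionEvents: 0 });
  }

  // Called by an observer whenever the participant visibly struggles.
  noteConfusion(task: string): void {
    const t = this.tasks.get(task);
    if (t) t.confusionEvents += 1;
  }

  finishTask(task: string): void {
    const t = this.tasks.get(task);
    if (t) t.finishedAt = Date.now();
  }

  // Time-on-task plus confusion counts, ready to pair with qualitative notes.
  summary() {
    return Array.from(this.tasks.values()).map((t) => ({
      task: t.task,
      seconds: t.finishedAt ? (t.finishedAt - t.startedAt) / 1000 : null,
      confusionEvents: t.confusionEvents,
    }));
  }
}
```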
Combining these quantitative measures with qualitative insights provides a more comprehensive understanding of the user experience.
The WOZ method isn’t an island. Its effectiveness can be amplified by integrating it with other research techniques. Preceding a WOZ study with user interviews can help establish a deeper understanding of user needs and mental models, informing the design of the simulated experience. Following a WOZ study, surveys can gather broader quantitative feedback on the concepts explored. For example, after observing users interact with a simulated AI-powered scheduling tool, a survey could gauge their overall trust and perceived usefulness of such a system.
WOZ, as with all methods, has limitations. A few examples of scenarios where other methods would likely yield more reliable findings would be:
The wizard’s skill is critical to the method’s success. Training the individual(s) who will be simulating the system is essential. This training should cover the scope of the simulated system’s capabilities, how quickly and consistently to respond, and how to handle unexpected requests.
All of this suggests the need for practice before running the actual session. Schedule a number of dry runs in which colleagues or other willing helpers not only participate but also deliberately come up with responses that could stump the wizard or throw things off during a live session.
I suggest having a believable, prepared error statement ready for when a user throws a curveball. A simple response from the wizard of “I’m sorry, I am unable to perform that task at this time” might be enough to move the session forward while also capturing a potentially unexpected situation your team can address in the final product design.
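One lightweight way to prepare is a wizard “cheat sheet”: scripted replies keyed by expected intents, plus that believable fallback, which also records the curveball for the team to review later. A minimal sketch, with hypothetical names throughout:

```ts
// Hypothetical wizard cheat sheet: canned replies plus a curveball log.
const cannedReplies: Record<string, string> = {
  greet: "Hi! How can I help you today?",
  unsupported: "I'm sorry, I am unable to perform that task at this time.",
};

const curveballs: string[] = []; // unexpected requests to address in the final design

function wizardReply(intent: string, rawRequest: string): string {
  const reply = cannedReplies[intent];
  if (reply) return reply;
  curveballs.push(rawRequest); // capture the surprise before falling back
  return cannedReplies["unsupported"];
}
```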
The debriefing session following the WOZ interaction is an additional opportunity to gather rich qualitative data. Beyond asking “What did you think?”, effective debriefing involves sharing the purpose of the study and the fact that the experience was simulated.
Researchers should then conduct psychological probing to understand the reasons behind user behavior and reactions. Asking open-ended questions like “Why did you try that?” or “What were you expecting to happen when you clicked that button?” can reveal valuable insights into user mental models and expectations.
Exploring moments of confusion, frustration, or delight in detail can uncover key areas for design improvement. Think about what the Power Glove’s development team could have uncovered if they’d asked participants what it was like to program the glove and then try to remember what they’d programmed into which set of keys.
The value of the WOZ method becomes apparent when examining its application in real-world research scenarios. Here is an in-depth review of one scenario and a quick summary of another study involving WOZ, where this technique proved invaluable in shaping user experiences.
A significant challenge in the realm of emerging technologies lies in user comprehension. This was particularly evident when our team began exploring the potential of Agentic AI for enterprise HR software.
Agentic AI refers to artificial intelligence systems that can autonomously pursue goals by making decisions, taking actions, and adapting to changing environments with minimal human intervention. Unlike generative AI that primarily responds to direct commands or generates content, Agentic AI is designed to understand user intent, independently plan and execute multi-step tasks, and learn from its interactions to improve performance over time. These systems often combine multiple AI models and can reason through complex problems. For designers, this signifies a shift towards creating experiences where AI acts more like a proactive collaborator or assistant, capable of anticipating needs and taking the initiative to help users achieve their objectives rather than solely relying on explicit user instructions for every step.
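For a rough intuition of that plan-act-adapt loop, here is a deliberately stubbed sketch. `planNextStep` stands in for whatever model or planner a real agent would use; nothing here is a real agent framework.

```ts
// Conceptual sketch of an agentic loop: plan, act, observe, repeat.
type Step = { action: string; done: boolean };

// Stub planner: a real agent would consult an LLM or planner here.
function planNextStep(goal: string, history: Step[]): Step | null {
  if (history.length >= 3) return null; // pretend the goal is reached
  return { action: `step ${history.length + 1} toward "${goal}"`, done: false };
}

function runAgent(goal: string): Step[] {
  const history: Step[] = [];
  let step = planNextStep(goal, history);
  while (step) {
    step.done = true; // execution and observation would happen here
    history.push(step);
    step = planNextStep(goal, history); // adapt based on what happened
  }
  return history;
}

console.log(runAgent("draft a personalized onboarding plan"));
```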
Preliminary research, including surveys and initial interviews, suggested that many HR professionals, while intrigued by the concept of AI assistance, struggled to grasp the potential functionality and practical implications of truly agentic systems — those capable of autonomous action and proactive decision-making. We saw they had no reference point for what agentic AI was, even after we attempted relevant analogies to current examples.
Building a fully functional agentic AI prototype at this exploratory stage was impractical. The underlying algorithms and integrations were complex and time-consuming to develop. Moreover, we risked building a solution based on potentially flawed assumptions about user needs and understanding. The WOZ method offered a solution.
We designed a scenario where HR employees interacted with what they believed was an intelligent AI assistant capable of autonomously handling certain tasks. The facilitator presented users with a web interface where they could request assistance with tasks like “draft a personalized onboarding plan for a new marketing hire” or “identify employees who might benefit from proactive well-being resources based on recent activity.”
Behind the scenes, a designer acted as the wizard. Based on the user’s request and the (simulated) available data, the designer would craft a response that mimicked the output of an agentic AI. For the onboarding plan, this involved assembling pre-written templates and personalizing them with details provided by the user. For the well-being resource identification, the wizard would select a plausible list of employees based on the general indicators discussed in the scenario.
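As a hypothetical reconstruction of that template-assembly trick (not our actual study code), the personalization can be as simple as substituting participant-supplied details into a pre-written template:

```ts
// Hypothetical sketch: wizard personalizes a pre-written template.
const onboardingTemplate =
  "Welcome, {{name}}! Week 1: meet the {{team}} team. Week 2: shadow {{buddy}}.";

// Replace each {{placeholder}} with a supplied detail, flagging any gaps.
function personalize(template: string, details: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => details[key] ?? `[${key}?]`);
}

console.log(
  personalize(onboardingTemplate, { name: "Priya", team: "Marketing", buddy: "Sam" })
);
```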
Crucially, the facilitator encouraged users to interact naturally, asking follow-up questions and exploring the system’s perceived capabilities. For instance, a user might ask, “Can the system also schedule the initial team introductions?” The wizard, guided by pre-defined rules and the overall research goals, would respond accordingly, perhaps with a “Yes, I can automatically propose meeting times based on everyone’s calendars” (again, simulated).
As recommended, we debriefed participants following each session. We began with transparency, explaining the simulation and that we had another live human posting the responses to the queries based on what the participant was saying. Open-ended questions explored initial reactions and envisioned use. Task-specific probing, like “Why did you expect that?” revealed underlying assumptions. We specifically addressed trust and control (“How much trust…? What level of control…?”). To understand mental models, we asked how users thought the “AI” worked. We also solicited improvement suggestions (“What features…?”).
By focusing on the “why” behind user actions and expectations, these debriefings provided rich qualitative data that directly informed subsequent design decisions, particularly around transparency, human oversight, and prioritizing specific, high-value use cases. We also had a research participant who understood agentic AI and could provide additional insight based on that understanding.
This WOZ study yielded several crucial insights into user mental models of agentic AI in an HR context.
Based on these findings, we made several key design decisions, centered on transparency about what the AI was doing, clear human oversight of its actions, and prioritizing specific, high-value use cases.
In another project, we used the WOZ method to evaluate user interaction with a voice interface for controlling in-car functions. Our research question focused on the naturalness and efficiency of voice commands for tasks like adjusting climate control, navigating to points of interest, and managing media playback.
We set up a car cabin simulator with a microphone and speakers. The wizard, located in an adjacent room, listened to the user’s voice commands and triggered the corresponding actions (simulated through visual changes on a display and audio feedback). This allowed us to identify ambiguous commands, areas of user frustration with voice recognition (even though it was human-powered), and preferences for different phrasing and interaction styles before investing in complex speech recognition technology.
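A sketch of what the wizard’s trigger console might have looked like, with hypothetical command keys mapped to simulated display and audio feedback:

```ts
// Hypothetical wizard console for the in-car study: heard command -> simulated feedback.
type SimulatedAction = { display: string; audio: string };

const commandMap: Record<string, SimulatedAction> = {
  "set temperature to 70": {
    display: "Climate: 70°F",
    audio: "Temperature set to seventy degrees.",
  },
  "play jazz": { display: "Now playing: Jazz", audio: "Playing jazz." },
};

// The wizard picks the entry matching what they heard; unknown commands get
// a believable fallback and are noted as recognition pain points.
function trigger(heardCommand: string): SimulatedAction {
  return (
    commandMap[heardCommand.toLowerCase()] ?? {
      display: "No change",
      audio: "I'm sorry, I didn't catch that.",
    }
  );
}

console.log(trigger("Play jazz"));
```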
These examples illustrate the versatility and power of the method in addressing a wide range of UX research questions across diverse product types and technological complexities. By simulating functionality, we can gain invaluable insights into user behavior and expectations early in the design process, leading to more user-centered and ultimately more successful products.
The WOZ method, far from being a relic of simpler technological times, retains relevance as we navigate increasingly sophisticated and often opaque emerging technologies.
The WOZ method’s core strength, the ability to simulate complex functionality with human ingenuity, makes it uniquely suited for exploring user interactions with systems that are still in their nascent stages.
WOZ In The Age Of AI
Consider the burgeoning field of AI-powered experiences. Researching user interaction with generative AI, for instance, can be effectively done through WOZ. A wizard could curate and present AI-generated content (text, images, code) in response to user prompts, allowing researchers to assess user perceptions of quality, relevance, and trust without needing a fully trained and integrated AI model.
Similarly, for personalized recommendation systems, a human could simulate the recommendations based on a user’s stated preferences and observed behavior, gathering valuable feedback on the perceived accuracy and helpfulness of such suggestions before algorithmic development.
Even autonomous systems, seemingly the antithesis of human control, can benefit from WOZ studies. By simulating the autonomous behavior in specific scenarios, researchers can explore user comfort levels, identify needs for explainability, and understand how users might want to interact with or override such systems.
Virtual And Augmented Reality
Immersive environments like virtual and augmented reality present new frontiers for user experience research. WOZ can be particularly powerful here.
Imagine testing a novel gesture-based interaction in VR. A researcher tracking the user’s hand movements could trigger corresponding virtual events, allowing for rapid iteration on the intuitiveness and comfort of these interactions without the complexities of fully programmed VR controls. Similarly, in AR, a wizard could remotely trigger the appearance and behavior of virtual objects overlaid onto the real world, gathering user feedback on their placement, relevance, and integration with the physical environment.
The Human Factor Remains Central
Despite the rapid advancements in artificial intelligence and immersive technologies, the fundamental principles of human-centered design remain as relevant as ever. Technology should serve human needs and enhance human capabilities.
The WOZ method inherently focuses on understanding user reactions and behaviors and acts as a crucial anchor in ensuring that technological progress aligns with human values and expectations.
It allows us to inject the “human factor” into the design process of even the most advanced technologies. Doing this may help ensure these innovations are not only technically feasible but also truly usable, desirable, and beneficial.
The WOZ method stands as a powerful and versatile tool in the UX researcher’s toolkit. The WOZ method’s ability to bypass limitations of early-stage development and directly elicit user feedback on conceptual experiences offers invaluable advantages. We’ve explored its core mechanics and covered ways of maximizing its impact. We’ve also examined its practical application through real-world case studies, including its crucial role in understanding user interaction with nascent technologies like agentic AI.
The strategic implementation of the WOZ method provides a potent means of de-risking product development. By validating assumptions, uncovering unexpected user behaviors, and identifying potential usability challenges early on, teams can avoid costly rework and build products that truly resonate with their intended audience.
I encourage all UX practitioners, digital product managers, and those who collaborate with research teams to consider incorporating the WOZ method into their research toolkit. Experiment with its application in diverse scenarios, adapt its techniques to your specific needs, and don’t be afraid to have fun with it. Scarecrow costume optional.