JavaScript For Everyone: Iterators
Mat Marquis
Hey, I’m Mat, but “Wilto” works too — I’m here to teach you JavaScript. Well, not here-here; technically, I’m over at Piccalil.li’s JavaScript for Everyone course to teach you JavaScript. The following is an excerpt from the Iterables and Iterators module: the lesson on Iterators.
Iterators are one of JavaScript’s more linguistically confusing topics, sailing easily over what is already a pretty high bar. There are iterables — array, Set, Map, and string — all of which follow the iterable protocol. To follow said protocol, an object must implement the iterable interface. In practice, that means that the object needs to include a [Symbol.iterator]() method somewhere in its prototype chain. Iterable protocol is one of two iteration protocols. The other iteration protocol is the iterator protocol.
See what I mean about this being linguistically fraught? Iterables implement the iterable iteration interface, and iterators implement the iterator iteration interface! If you can say that five times fast, then you’ve pretty much got the gist of it; easy-peasy, right?
No, listen, by the time you reach the end of this lesson, I promise it won’t be half as confusing as it might sound, especially with the context you’ll have from the lessons that precede it.
An iterable object follows the iterable protocol, which just means that the object has a conventional method for making iterators. The elements that it contains can be looped over with for…of.
An iterator object follows the iterator protocol, and the elements it contains can be accessed sequentially, one at a time.
To reiterate — a play on words for which I do not forgive myself, nor expect you to forgive me — an iterator object follows iterator protocol, and the elements it contains can be accessed sequentially, one at a time. Iterator protocol defines a standard way to produce a sequence of values, and optionally return a value once all possible values have been generated.
In order to follow the iterator protocol, an object has to — you guessed it — implement the iterator interface. In practice, that once again means that a certain method has to be available somewhere on the object’s prototype chain. In this case, it’s the next() method that advances through the elements it contains, one at a time, and returns an object each time that method is called.
In order to meet the iterator interface criteria, the returned object must contain two properties with specific keys: one with the key value, representing the value of the current element, and one with the key done, a Boolean that tells us whether the iterator has advanced beyond the final element in the data structure. That’s not an awkward phrasing the editorial team let slip through: the value of that done property is true only when a call to next() results in an attempt to access an element beyond the final element in the iterator, not upon accessing the final element itself. Again, that’s a lot to take in as prose, but it’ll make more sense when you see it in action.
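Before we get to the built-ins, here is a minimal hand-rolled sketch of both protocols at once (a hypothetical countdown object, purely for illustration). Its [ Symbol.iterator ]() method makes it an iterable, and the object that method returns implements the iterator protocol’s next() method:
const theCountdown = {
  [ Symbol.iterator ]() {
    let current = 3;
    // The returned object implements the iterator protocol:
    return {
      next() {
        if ( current > 0 ) {
          return { value: current--, done: false };
        }
        // `done` only becomes `true` once we advance past the final element:
        return { value: undefined, done: true };
      }
    };
  }
};
for ( const value of theCountdown ) {
  console.log( value );
}
/* Result:
3
2
1
*/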
You’ve seen an example of a built-in iterator before, albeit briefly:
const theMap = new Map([ [ "aKey", "A value." ] ]);
console.log( theMap.keys() );
// Result: Map Iterator { constructor: Iterator() }
That’s right: while a Map object itself is an iterable, Map’s built-in methods keys(), values(), and entries() all return Iterator objects. You’ll also remember that I looped through those using forEach (a relatively recent addition to the language). Used that way, an iterator is indistinguishable from an iterable:
const theMap = new Map([ [ "key", "value" ] ]);
theMap.keys().forEach( thing => {
console.log( thing );
});
// Result: key
All iterators are iterable; they all implement the iterable interface:
const theMap = new Map([ [ "key", "value" ] ]);
theMap.keys()[ Symbol.iterator ];
// Result: function Symbol.iterator()
And if you’re angry about the increasing blurriness of the line between iterators and iterables, wait until you get a load of this “top ten anime betrayals” video candidate: I’m going to demonstrate how to interact with an iterator by using an array.
“BOO,” you surely cry, having been so betrayed by one of your oldest and most indexed friends. “Array is an iterable, not an iterator!” You are both right to yell at me in general, and right about array in specific — an array is an iterable, not an iterator. In fact, while all iterators are iterable, none of the built-in iterables are iterators.
However, when you call that [ Symbol.iterator ]() method — the one that defines an object as an iterable — it returns an iterator object created from an iterable data structure:
const theIterable = [ true, false ];
const theIterator = theIterable[ Symbol.iterator ]();
theIterable;
// Result: Array [ true, false ]
theIterator;
// Result: Array Iterator { constructor: Iterator() }
The same goes for Set, Map, and — yes — even strings:
const theIterable = "A string.";
const theIterator = theIterable[ Symbol.iterator ]();
theIterator;
// Result: String Iterator { constructor: Iterator() }
What we’re doing here manually — creating an iterator from an iterable using %Symbol.iterator% — is precisely how iterable objects work internally, and why they have to implement %Symbol.iterator% in order to be iterables. Any time you loop through an array, you’re actually looping through an iterator created from that Array. All built-in iterators are iterable. All built-in iterables can be used to create iterators.
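To see that in action, here is roughly what a for…of loop does internally. This is a simplified sketch, since the real mechanics also handle early exits and cleanup:
const theIterable = [ "a", "b" ];
// Roughly what `for ( const element of theIterable )` does under the hood:
const theIterator = theIterable[ Symbol.iterator ]();
let result = theIterator.next();
while ( !result.done ) {
  console.log( result.value );
  result = theIterator.next();
}
/* Result:
"a"
"b"
*/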
Alternately — preferably, even, since it doesn’t require you to graze up against %Symbol.iterator% directly — you can use the built-in Iterator.from() method to create an iterator object from any iterable:
const theIterator = Iterator.from([ true, false ]);
theIterator;
// Result: Array Iterator { constructor: Iterator() }
You remember how I mentioned that an iterator has to provide a next() method (that returns a very specific Object)? Calling that next() method steps through the elements that the iterator contains one at a time, with each call returning an instance of that Object:
const theIterator = Iterator.from([ 1, 2, 3 ]);
theIterator.next();
// Result: Object { value: 1, done: false }
theIterator.next();
// Result: Object { value: 2, done: false }
theIterator.next();
// Result: Object { value: 3, done: false }
theIterator.next();
// Result: Object { value: undefined, done: true }
You can think of this as a more controlled form of traversal than the traditional “wind it up and watch it go” for loops you’re probably used to — a method of accessing elements one step at a time, as needed. Granted, you don’t have to step through an iterator this way, since iterators have their very own Iterator.prototype.forEach method, which works exactly like you would expect — to a point:
const theIterator = Iterator.from([ true, false ]);
theIterator.forEach( element => console.log( element ) );
/* Result:
true
false
*/
But there’s another big difference between iterables and iterators that we haven’t touched on yet, and for my money, it actually goes a long way toward making linguistic sense of the two. You might need to humor me for a little bit here, though.
See, an iterable object is an object that is iterable. No, listen, stay with me: you can iterate over an Array, and when you’re done doing so, you can still iterate over that Array. It is, by definition, an object that can be iterated over; it is the essential nature of an iterable to be iterable:
const theIterable = [ 1, 2 ];
theIterable.forEach( el => {
console.log( el );
});
/* Result:
1
2
*/
theIterable.forEach( el => {
console.log( el );
});
/* Result:
1
2
*/
In a way, an iterator object represents the singular act of iteration. Internal to an iterable, it is the mechanism by which the iterable is iterated over, each time that iteration is performed. As a stand-alone iterator object — whether you step through it using the next method or loop over its elements using forEach — once iterated over, that iterator is past tense; it is iterated. Because they maintain an internal state, the essential nature of an iterator is to be iterated over, singular:
const theIterator = Iterator.from([ 1, 2 ]);
theIterator.next();
// Result: Object { value: 1, done: false }
theIterator.next();
// Result: Object { value: 2, done: false }
theIterator.next();
// Result: Object { value: undefined, done: true }
theIterator.forEach( el => console.log( el ) );
// Result: undefined
That makes for neat work when you’re using the Iterator constructor’s built-in methods to, say, filter or extract part of an Iterator object:
const theIterator = Iterator.from([ "First", "Second", "Third" ]);
// Take the first two values from `theIterator`:
theIterator.take( 2 ).forEach( el => {
console.log( el );
});
/* Result:
"First"
"Second"
*/
// theIterator now only contains anything left over after the above operation is complete:
theIterator.next();
// Result: Object { value: "Third", done: false }
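take() has company, too: built-in iterator helpers like drop(), map(), filter(), and toArray() work the same way, each consuming the iterator as it goes. A quick sketch:
const theIterator = Iterator.from([ 1, 2, 3, 4 ]);
// Skip the first element, double the rest, and collect the results:
theIterator.drop( 1 ).map( n => n * 2 ).toArray();
// Result: Array [ 4, 6, 8 ]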
Once you reach the end of an iterator, the act of iterating over it is complete. Iterated. Past-tense.
And so too is your time in this lesson, you might be relieved to hear. I know this was kind of a rough one, but the good news is: this course is iterable, not an iterator. This step in your iteration through it — this lesson — may be over, but the essential nature of this course is that you can iterate through it again. Don’t worry about committing all of this to memory right now — you can come back and revisit this lesson anytime.
I stand by what I wrote there, unsurprising as that probably is: this lesson is a tricky one, but listen, you got this. JavaScript for Everyone is designed to take you inside JavaScript’s head. Once you’ve started seeing how the gears mesh — seen the fingerprints left behind by the people who built the language, and the good, bad, and sometimes baffling decisions that went into that — no itera-, whether -ble or -tor will be able to stand in your way.

My goal is to teach you the deep magic — the how and the why of JavaScript, using the syntaxes you’re most likely to encounter in your day-to-day work, at your pace and on your terms. If you’re new to the language, you’ll walk away from this course with a foundational understanding of JavaScript worth hundreds of hours of trial-and-error. If you’re a junior developer, you’ll finish this course with a depth of knowledge to rival any senior.
I hope to see you there.
AI In UX: Achieve More With Less
Paul Boag
I have made a lot of mistakes with AI over the past couple of years. I have wasted hours trying to get it to do things it simply cannot do. I have fed it terrible prompts and received terrible output. And I have definitely spent more time fighting with it than I care to admit.
But I have also discovered that when you stop treating AI like magic and start treating it like what it actually is (a very enthusiastic intern with zero life experience), things start to make more sense.
Let me share what I have learned from working with AI on real client projects across user research, design, development, and content creation.
Here is the mental model that has been most helpful for me. Treat AI like an intern with zero experience.
An intern fresh out of university has lots of enthusiasm and qualifications, but no real-world experience. You would not trust them to do anything unsupervised. You would explain tasks in detail. You would expect to review their work multiple times. You would give feedback and ask them to try again.
This is exactly how you should work with AI.
I am not going to pretend to be an expert. I have just spent way too much time playing with this stuff because I like anything shiny and new. But here is what works for me.
Here is a real prompt I use for online research:
Act as a user researcher. I would like you to carry out deep research online into [brand name]. In particular, I would like you to focus on what people are saying about the brand, what the overall sentiment is, what questions people have, and what objections people mention. The goal is to create a detailed report that helps me better understand the brand perception.
Think deeply about your approach before carrying out the research. Create a rubric for the report to ensure it is as useful as possible. Keep iterating until the report scores extremely high on the rubric. Only then, output the report.
That second paragraph (the bit about thinking deeply and creating a rubric), I basically copy and paste into everything now. It is a universal way to get better output.
You should never fully trust AI. Just like you would never fully trust an intern you have only just met.
To begin with, double-check absolutely everything. Over time, you will get a sense of when it is losing its way. You will spot the patterns. You will know when to start a fresh conversation because the current one has gone off the rails.
But even after months of working with it daily, I still check its work. I still challenge it. I still make it cite sources and explain its reasoning.
The key is that even with all that checking, it is still faster than doing it yourself. Much faster.
This is where AI has genuinely transformed my work. I use it constantly for five main things.
I love AI for this. I can ask it to go and research a brand online. What people are saying about it, what questions they have, what they like, and what frustrates them. Then do the same for competitors and compare.
This would have taken me days of trawling through social media and review sites. Now it takes minutes.
I recently did this for an e-commerce client. I wanted to understand what annoyed people about the brand and what they loved. I got detailed insights that shaped the entire conversion optimization strategy. All from one prompt.
I used to avoid open-ended questions in surveys. They were such a pain to review. Now I use them all the time because AI can analyze hundreds of text responses in seconds.
For interviews, I upload the transcripts and ask it to identify recurring themes, questions, and requests. I always get it to quote directly from the transcripts so I can verify it is not making things up.
The quality is good. Really good. As long as you give it clear instructions about what you want.
I am terrible with spreadsheets. Put me in front of a person and I can understand them. Put me in front of data, and my eyes glaze over.
AI has changed that. I upload spreadsheets to ChatGPT and just ask questions. “What patterns do you see?” “Can you reformat this?” “Show me this data in a different way.”
Microsoft Clarity now has Copilot built in, so you can ask it questions about your analytics data. Triple Whale does the same for e-commerce sites. These tools are game changers if you struggle with data like I do.

This is probably my favorite technique. In ChatGPT and Claude, you can create projects. In other tools, they are called spaces. Think of them as self-contained folders where everything you put in is available to every conversation in that project.
When I start working with a new client, I create a project and throw everything in. Old user research. Personas. Survey results. Interview transcripts. Documentation. Background information. Site copy. Anything I can find.
Then I give it custom instructions. Here is one I use for my own business:
Act as a business consultant and marketing strategy expert with good copywriting skills. Your role is to help me define the future of my UX consultant business and better articulate it, especially via my website. When I ask for your help, ask questions to improve your answers and challenge my assumptions where appropriate.
I have even uploaded a virtual board of advisors (people I wish I had on my board) and asked AI to research how they think and respond as they would.
Now I have this project that knows everything about my business. I can ask it questions. Get it to review my work. Challenge my thinking. It is like having a co-worker who never gets tired and has a perfect memory.
I do this for every client project now. It is invaluable.
AI has reinvigorated my interest in personas. I had lost heart in them a bit. They took too long to create, and clients always said they already had marketing personas and did not want to pay to do them again.
Now I can create what I call functional personas. Personas that are actually useful to people who work in UX. Not marketing fluff about what brands people like, but real information about what questions they have and what tasks they are trying to complete.
I upload all my research to a project and say:
Act as a user researcher. Create a persona for [audience type]. For this persona, research the following information: questions they have, tasks they want to complete, goals, states of mind, influences, and success metrics. It is vital that all six criteria are addressed in depth and with equal vigor.
The output is really good. Detailed. Useful. Based on actual data rather than pulled out of thin air.

Here is my challenge to anyone who thinks AI-generated personas are somehow fake. What makes you think your personas are so much better? Every persona is a story of a hypothetical user. You make judgment calls when you create personas, too. At least AI can process far more information than you can and is brilliant at pattern recognition.
My only concern is that relying too heavily on AI could disconnect us from real users. We still need to talk to people. We still need that empathy. But as a tool to synthesize research and create reference points? It is excellent.
Let me start with a warning. AI is not production-ready. Not yet. Not for the kind of client work I do, anyway.
Three reasons why:
But that does not mean it is not useful. It absolutely is. Just not for final production work.
If you are not too concerned with matching a specific design, AI can quickly prototype functionality in ways that are hard to match in Figma, because Figma is terrible at prototyping functionality. You cannot even create an active form field in a Figma prototype. Filling in forms is the biggest thing people do online other than clicking links, and you cannot test it.
Tools like Relume and Bolt can create quick functional mockups that show roughly how things work. They are great for non-designers who just need to throw together a prototype quickly. For designers, they can be useful for showing developers how you want something to work.
But you can spend ages getting them to put a hamburger menu on the right side of the screen. So use them for quick iteration, not pixel-perfect design.
I use AI constantly for small, low-risk coding work. I am not a developer anymore. I used to be, back when dinosaurs roamed the earth, but not for years.
AI lets me create the little tools I need. A calculator that calculates the ROI of my UX work. An app for running top task analysis. Bits of JavaScript for hiding elements on a page. WordPress plugins for updating dates automatically.
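To give a sense of the scale I mean, the element-hiding example is a few lines at most. This is a hypothetical snippet; the selector would be whatever your page needs:
// Hide every element matching a given selector:
document.querySelectorAll( ".promo-banner" ).forEach( ( element ) => {
  element.style.display = "none";
});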

Just before running my workshop on this topic, I needed a tool to create calendar invites for multiple events. All the online services wanted £16 a month. I asked ChatGPT to build me one. One prompt. It worked. It looked rubbish, but I did not care. It did what I needed.
If you are a developer, you should absolutely be using tools like Cursor by now. They are invaluable for pair programming with AI. But if you are not a developer, just stick with Claude or Bolt for quick throwaway tools.
There are some great tools for getting quick feedback on existing websites when budget and time are tight.
If you need to conduct a UX audit, Wevo Pulse is an excellent starting point. It automatically reviews a website based on personas and provides visual attention heatmaps, friction scores, and specific improvement recommendations. It generates insights in minutes rather than days.
Now, let me be clear. This does not replace having an experienced person conduct a proper UX audit. You still need that human expertise to understand context, make judgment calls, and spot issues that AI might miss. But as a starting point to identify obvious problems quickly? It is a great tool. Particularly when budget or time constraints mean a full audit is not on the table.
For e-commerce sites, Baymard has UX Ray, which analyzes flaws based on their massive database of user research.

Attention Insight has taken thousands of hours of eye-tracking studies and trained AI on it to predict where people will look on a page. It has about 90 to 96 percent accuracy.
You upload a screenshot of your design, and it shows you where attention is going. Then you can play around with your imagery and layout to guide attention to the right place.
It is great for dealing with stakeholders who say, “People won’t see that.” You can prove they will. Or equally, when stakeholders try to crowd the interface with too much stuff, you can show them attention shooting everywhere.
I use this constantly. Here is a real example from a pet insurance company. They had photos of a dog, cat, and rabbit for different types of advice. The dog was far from the camera. The cat was looking directly at the camera, pulling all the attention. The rabbit was half off-frame. Most attention went to the cat’s face.

I redesigned it using AI-generated images, where I could control exactly where each animal looked. Dog looking at the camera. Cat looking right. Rabbit looking left. All the attention drawn into the center. Made a massive difference.

I use AI all the time for creating images that do a specific job. My preferred tools are Midjourney and Gemini.
I like Midjourney because, visually, it creates stunning imagery. You can dial in the tone and style you want. The downside is that it is not great at following specific instructions.
So I produce an image in Midjourney that is close, then upload it to Gemini. Gemini is not as good at visual style, but it is much better at following instructions. “Make the guy reach here” or “Add glasses to this person.” I can get pretty much exactly what I want.
The other thing I love about Midjourney is that you can upload a photograph and say, “Replicate this style.” This keeps consistency across a website. I have a master image I use as a reference for all my site imagery to keep the style consistent.
Most clients give you terrible copy. Our job is to improve the user experience or conversion rate, and anything we do gets utterly undermined by bad copy.
I have completely stopped asking clients for copy since AI came along. Here is my process.
Once I have my information architecture, I get AI to generate a massive list of questions users will ask. Then I run a top task analysis where people vote on which questions matter most.
I assign those questions to pages on the site. Every page gets a list of the questions it needs to answer.
I spin up the content management system with a really basic theme. Just HTML with very basic formatting. I go through every page and assign the questions.
Then I go to my clients and say: “I do not want you to write copy. Just go through every page and bullet point answers to the questions. If the answer exists on the old site, copy and paste some text or link to it. But just bullet points.”
That is their job done. Pretty much.
Now I take control. I feed ChatGPT the questions and bullet points and say:
Act as an online copywriter. Write copy for a webpage that answers the question [question]. Use the following bullet points to answer that question: [bullet points]. Use the following guidelines: Aim for a ninth-grade reading level or below. Sentences should be short. Use plain language. Avoid jargon. Refer to the reader as you. Refer to the writer as us. Ensure the tone is friendly, approachable, and reassuring. The goal is to [goal]. Think deeply about your approach. Create a rubric and iterate until the copy is excellent. Only then, output it.
I often upload a full style guide as well, with details about how I want it to be written.
The output is genuinely good. As a first draft, it is excellent. Far better than what most stakeholders would give me.
That goes into the website, and stakeholders can comment on it. Once I get their feedback, I take the original copy and all their comments back into ChatGPT and say, “Rewrite using these comments.”
Job done.
The great thing about this approach is that even if stakeholders make loads of changes, they are making changes to a good foundation. The overall quality still comes out better than if they started with a blank sheet.
It also makes things go more smoothly, because you are not criticizing their content, which is where people get defensive. They are criticizing AI content.
If your stakeholders are still giving you content, Hemingway Editor is brilliant. Copy and paste text in, and it tells you how readable and scannable it is. It highlights long sentences and jargon. You can use this to prove to clients that their content is not good web copy.

If you pay for the pro version, you get AI tools that will rewrite the copy to be more readable. It is excellent.
Let me be clear about something. None of this is perfect. AI makes mistakes. It hallucinates. It produces bland output if you do not push it hard enough. It requires constant checking and challenging.
But here is what I know from two years of using this stuff daily. It has made me faster. It has made me better. It has freed me up to do more strategic thinking and less grunt work.
A report that would have taken me five days now takes three hours. That is not an exaggeration. That is real.
Overall, AI probably gives me a 25 to 33 percent increase in what I can do. That is significant.
Your value as a UX professional lies in your ideas, your questions, and your thinking. Not your ability to use Figma. Not your ability to manually review transcripts. Not your ability to write reports from scratch.
AI cannot innovate. It cannot make creative leaps. It cannot know whether its output is good. It cannot understand what it is like to be human.
That is where you come in. That is where you will always come in.
Start small. Do not try to learn everything at once. Just ask yourself throughout your day: Could I do this with AI? Try it. See what happens. Double-check everything. Learn what works and what does not.
Treat it like an enthusiastic intern with zero life experience. Give it clear instructions. Check its work. Make it try again. Challenge it. Push it further.
And remember, it is not going to take your job. It is going to change it. For the better, I think. As long as we learn to work with it rather than against it.
How To Make Your UX Research Hard To Ignore
Vitaly Friedman
In the early days of my career, I believed that nothing wins an argument more effectively than strong and unbiased research. Surely facts speak for themselves, I thought.
If I just get enough data, just enough evidence, just enough clarity on where users struggle — well, once I have it all and I present it all, it alone will surely change people’s minds, hearts, and beliefs. And, most importantly, it will help everyone see, understand, and perhaps even appreciate and commit to what needs to be done.
Well, it’s not quite like that. In fact, the stronger and louder the data, the more likely it is to be questioned. And there is a good reason for that, which is often left between the lines.
Throughout the years, I’ve often seen data speaking volumes about where the business is failing, where customers are struggling, where the team is faltering — and where an urgent turnaround is necessary. It was right there, in plain sight: clear, loud, and obvious.

But because it’s so clear, it reflects back, often amplifying all the sharp edges and all the cut corners in all the wrong places. It reflects internal flaws, wrong assumptions, and failing projects — some of them signed off years ago, with secured budgets, big promotions, and approved headcounts. Questioning them means questioning authority, and often it’s a tough path to take.
As it turns out, strong data is very, very good at raising uncomfortable truths that most companies don’t really want to acknowledge. That’s why, at times, research is deemed “unnecessary,” or why we don’t get access to users, or why loud voices always win big arguments.

So even if data is presented with a lot of eagerness, gravity, and passion in that big meeting, it will get questioned, doubted, and explained away. Not because of its flaws, but because of hope, reluctance to change, and layers of internal politics.
This shows up most vividly in situations when someone raises concerns about the validity and accuracy of research. Frankly, it’s not that somebody is wrong and somebody is right. Both parties just happen to be right in a different way.
We’ve all heard that data always tells a story. However, it’s never just a single story. People are complex, and pointing out a specific truth about them just by looking at numbers is rarely enough.
When data disagrees, it doesn’t mean that either is wrong. It’s just that different perspectives reveal different parts of a whole story that isn’t completed yet.

In digital products, most stories have two sides:
Risk-averse teams overestimate the weight of big numbers in quantitative research. Users exaggerate the frequency and severity of issues that are critical for them. As Archana Shah noted, designers get carried away by users’ confident responses and often overestimate what people say and do.
And so, eventually, data coming from different teams paints a different picture. And when it happens, we need to reconcile and triangulate. With the former, we track what’s missing, omitted, or overlooked. With the latter, we cross-validate data — e.g., finding pairings of qual/quant streams of data, then clustering them together to see what’s there and what’s missing, and exploring from there.
And even with all of it in place and data conflicts resolved, we still need to do one more thing to make a strong argument: we need to tell a damn good story.
Research isn’t everything. Facts don’t win arguments — powerful stories do. But a story that starts with a spreadsheet isn’t always inspiring or effective. Perhaps it brings a problem into the spotlight, but it doesn’t lead to a resolution.

The very first thing I try to do in that big boardroom meeting is to emphasize what unites us — shared goals, principles, and commitments that are relevant to the topic at hand. Then, I show how new data confirms or confronts our commitments, with specific problems we believe we need to address.
When a question about the quality of data comes in, I need to show that it has been reconciled and triangulated already and discussed with other teams as well.
A good story has a poignant ending. People need to see an alternative future to trust and accept the data — and a clear and safe path forward to commit to it. So I always try to present options and solutions that we believe will drive change and explain our decision-making behind that.

They also need to believe that this distant future is within reach, and that they can pull it off, albeit under a tough timeline or with limited resources.
And: a good story also presents a viable, compelling, shared goal that people can rally around and commit to. Ideally, it’s something that has a direct benefit for them and their teams.
These are the ingredients of the story that I always try to keep in my mind when working on that big presentation. And in fact, data is a starting point, but it does need a story wrapped around it to be effective.
There is nothing more disappointing than finding a real problem that real people struggle with and facing the harsh reality of research not being trusted or valued.
We’ve all been there before. The best thing you can do is to be prepared: have strong data to back you up, include both quantitative and qualitative research — preferably with video clips from real customers — but also paint a viable future which seems within reach.
And sometimes nothing changes until something breaks. And at times, there isn’t much you can do about it unless you are prepared when it happens.
“Data doesn’t change minds, and facts don’t settle fights. Having answers isn’t the same as learning, and it for sure isn’t the same as making evidence-based decisions.”
— Erika Hall
You can find more details on UX Research in Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today.

Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)
Yegor Gilyov
In Part 1 of this series, we explored the “lopsided horse” problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains:
How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap?
In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.
The answer is a more disciplined process I call Intent Prototyping (kudos to Marco Kotrotsos, who coined Intent-Oriented Programming). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer’s explicit intent at the very center of the process. It receives a holistic expression of intent (sketches for screen layouts, conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype.

This method solves the concerns we’ve discussed in Part 1 in the best way possible:
This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system’s most critical point of failure, its underlying structure, at a speed and flexibility that was previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.
To illustrate this process in action, let’s walk through a case study. It’s the very same example I’ve used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this GitHub repository.
Imagine we’ve already done proper research, and having mused on the defined problem, I begin to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out:

In this example, I used Excalidraw, but the tool doesn’t really matter. Note that we deliberately keep it rough, as visual details are not something we need to focus on at this stage. And we are not going to be stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to achieving our goal.
What we need to move forward is to add to those sketches just enough details so that they may serve as a sufficient input for a junior frontend developer (or, in our case, an AI assistant). This requires explaining the following:
Having added all those details, we end up with such an annotated sketch:

As you see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our intent will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch.
For tasks of this sort, the LLM of my choice is Gemini 2.5 Pro. What is important is that this is a multimodal model that can accept not only text but also images as input (GPT-5 and Claude-4 also meet that criterion). I use Google AI Studio, as it gives me enough control and visibility into what’s happening:

Note: All the prompts that I use here and below can be found in the Appendices. The prompts are not custom-tailored to any particular project; they are supposed to be reused as they are.
As a result, Gemini gives us a description and the following diagram:

The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and relationships between them is key to good design. That’s why I consider the Conceptual Model to be an essential part of expressing intent, along with the Flow and Visualization.
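To give a rough idea, a conceptual model for a test-tracking tool like this one might come back as a Mermaid class diagram along these lines. This is a hypothetical sketch; the actual entities and attributes depend entirely on your sketch:
classDiagram
  class Idea {
    +string id
    +string name
    +string createdAt
    +string updatedAt
  }
  class Test {
    +string id
    +string hypothesis
    +string status
    +string createdAt
    +string updatedAt
  }
  Idea "1" -- "0..*" Test : validatedBy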
As a result of this step, our intent is fully expressed in two files: Sketch.png and Model.md. This will be our durable source of truth.
The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it.
I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see Appendices 2 and 3). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we give the guidelines tailored for prototyping needs (see Appendices 8, 9, and 10). They are not specific to this project. The technical approach encoded in those guidelines is out of the scope of this article.
As a result, Gemini provides us with content for DAL.md and UI.md. Although the result is usually reliable, you should still scrutinize the output. You don’t need to be a real programmer to make sense of it, but some level of programming literacy would be really helpful. However, even if you don’t have such skills, don’t get discouraged. The good news is that if you don’t understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you’ve spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better.
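For a flavor of what DAL.md describes, the store for a hypothetical Test entity might boil down to something like this. This is a simplified JavaScript sketch following the spec template, not the code the agent will actually generate:
import { create } from "zustand";
import { persist } from "zustand/middleware";

// State is a dictionary keyed by id for O(1) access, persisted to localStorage.
export const useTestStore = create(
  persist(
    ( set ) => ({
      tests: {},
      addTest: ( test ) =>
        set( ( state ) => ({ tests: { ...state.tests, [ test.id ]: test } }) ),
      removeTest: ( id ) =>
        set( ( state ) => {
          const { [ id ]: removed, ...tests } = state.tests;
          return { tests };
        }),
    }),
    { name: "test-storage" }
  )
);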
It’s important to remember that by their nature, LLMs are not deterministic and, to put it simply, can be forgetful about small details, especially when it comes to details in sketches. Fortunately, you don’t have to be an expert to notice that the “Delete” button, which is in the upper right corner of the sketch, is not mentioned in the spec.
Don’t get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you’ve spotted, and everything will be fixed.
Once we have Sketch.png, Model.md, DAL.md, and UI.md, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing without deviating from our original intent, ensuring that all components fit together and all layers are stacked correctly.
One last thing we can do before moving on to the next steps is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find prompts I use to create such a plan in Appendices 4 and 5.
To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on Generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I’m using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek.
However, at this step, this wouldn’t be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can’t do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.
My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don’t think we need any middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I’m sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favourite tool.
Before tasking the agent, we need to create a basic template for our React application. I won’t go into this here. You can find plenty of tutorials on how to scaffold an empty React project using Vite.
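If you have not scaffolded one before, the short version looks something like this (assuming the react-ts template; the project name is up to you):
npm create vite@latest my-prototype -- --template react-ts
cd my-prototype
npm install
npm run dev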
Then we put all our files into that project:

Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type “gemini”:

And we send the prompt to build the Data Access Layer (see Appendix 6). That prompt implies step-by-step execution, so upon completion of each step, I send the following:
Thank you! Now, please move to the next task.
Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec.
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
As the last task in the plan, the agent builds a special page where we can test all the capabilities of our Data Access Layer, so that we can manually test it. It may look like this:

It doesn’t look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.
And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see Appendix 7). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the “Manual Testing Plan” from UI-plan.md. I have to say that despite the fact that the sketch has been uploaded to the model context and, in general, Gemini tries to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel:

Once I’m happy with the result of a step, I ask Gemini to move on:
Thank you! Now, please move to the next task.
Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
Before long, the result looks like this, and in every detail it works exactly as we intended:

The prototype is up and running and looking nice. Does that mean our work is done? Surely not; the most fascinating part is just beginning.
It’s time to put the prototype in front of potential users and learn more about whether this solution relieves their pain or not.
And as soon as we learn something new, we iterate. We adjust or extend the sketches and the conceptual model, based on that new input, we update the specifications, create plans to make changes according to the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I’ve just walked you through.
This four-step workflow may create an impression of a somewhat heavy process that requires too much thinking upfront and doesn’t really facilitate creativity. But before jumping to that conclusion, consider the following:
There is no method that fits all situations, and Intent Prototyping is not an exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach to use to mitigate the most significant risk at each stage. The table below gives you a way to make this choice clearer. It puts Intent Prototyping next to other common methods and tools and explains each one in terms of the primary goal it helps achieve and the specific risks it is best suited to mitigate.
| Method/Tool | Goal | Risks it is best suited to mitigate | Why |
|---|---|---|---|
| Intent Prototyping | To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows. | Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring. | It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff. |
| Vibe Coding (Conversational) | To rapidly explore interactive ideas through improvisation. | Losing momentum because of analysis paralysis. | It has the smallest loop between an idea conveyed in natural language and an interactive outcome. |
| Axure | To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works. | Designing flows that break when users don’t follow the “happy path.” | It’s made to create complex if-then logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code. |
| Figma | To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture. | Making a product that looks bad, doesn’t fit with the brand, or has a layout that is hard to understand. | It excels at high-fidelity visual design and provides simple, fast tools for linking static screens. |
| ProtoPie, Framer | To make high-fidelity micro-interactions feel just right. | Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions. | These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use. |
| Low-code / No-code Tools (e.g., Bubble, Retool) | To create a working, data-driven app as quickly as possible. | The application will never be built because traditional development is too expensive. | They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs. |
The key takeaway is that each method is a specialized tool for mitigating a specific type of risk. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.
The era of the “lopsided horse” design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It’s not just a shortcut to building faster — it’s a fundamental shift in how we design. By putting a clear, unambiguous intent at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system.
There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system’s actual logic with users from day one. Second, the very act of documenting the design intent makes us clear about our ideas, ensuring that we fully understand the system’s underlying logic. Finally, this documented intent becomes a durable source of truth, eliminating the ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer’s vision from a black box.
Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating pictures of a product and empowers us to become architects of blueprints for a system. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.
You can find the full Intent Prototyping Starter Kit, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this GitHub repository.
Appendix 1: Prompt For Generating The Conceptual Model

You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch.
## Workflow
Follow these steps precisely:
**Step 1:** Analyze the sketch carefully. There should be no ambiguity about what we are building.
**Step 2:** Generate the conceptual model description in the Mermaid format using a UML class diagram.
## Ground Rules
- Every entity must have the following attributes:
- `id` (string)
- `createdAt` (string, ISO 8601 format)
- `updatedAt` (string, ISO 8601 format)
- Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes.
- Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only.
- Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be `"1" -- "0..*"`, it must be `"0..1" -- "0..*"`).
- Use only valid syntax in the Mermaid diagram.
- Do not include enumerations in the Mermaid diagram.
- Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file).
## Naming Conventions
- Names should reveal intent and purpose.
- Use PascalCase for entity names.
- Use camelCase for attributes and relationships.
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
## Final Instructions
- **No Assumptions:** Base every detail on visual evidence in the sketch, not on common design patterns.
- **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification.
- **Do not add redundant empty lines between items.**
Your final output should be the complete, raw markdown content for `Model.md`.
Appendix 2: Prompt For Creating The Data Access Layer Specification

You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description.
## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- `Model.md`: the conceptual model
- `Sketch.png`: the UI sketch
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices
**Step 3:** Create a Markdown specification for the stores and entity-specific hook that implements all the logic and provides all required operations.
---
## Markdown Output Structure
Use this template for the entire document.
```markdown
# Data Access Layer Specification
This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`.
## 1. Type Definitions
Location: `src/types/entities.ts`
### 1.1. `BaseEntity`
A shared interface that all entities should extend.
[TypeScript interface definition]
### 1.2. `[Entity Name]`
The interface for the [Entity Name] entity.
[TypeScript interface definition]
## 2. Zustand Stores
### 2.1. Store for `[Entity Name]`
**Location:** `src/stores/[Entity Name (plural)].ts`
The Zustand store will manage the state of all [Entity Name] items.
**Store State (`[Entity Name]State`):**
[TypeScript interface definition]
**Store Implementation (`use[Entity Name]Store`):**
- The store will be created using `create<[Entity Name]State>()(...)`.
- It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`.
- `[Entity Name (plural, camelCase)]` will be a dictionary (`Record<string, [Entity]>`) for O(1) access.
**Actions:**
- **`add[Entity Name]`**:
[Define the operation behavior based on entity requirements]
- **`update[Entity Name]`**:
[Define the operation behavior based on entity requirements]
- **`remove[Entity Name]`**:
[Define the operation behavior based on entity requirements]
- **`doSomethingElseWith[Entity Name]`**:
[Define the operation behavior based on entity requirements]
## 3. Custom Hooks
### 3.1. `use[Entity Name (plural)]`
**Location:** `src/hooks/use[Entity Name (plural)].ts`
The hook will be the primary interface for UI components to interact with [Entity Name] data.
**Hook Return Value:**
[TypeScript interface definition]
**Hook Implementation:**
[List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations, memoization. Do not write the actual code here.]
```
---
## Final Instructions
- **No Assumptions:** Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns.
- **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification.
- **Do not add redundant empty lines between items.**
Your final output should be the complete, raw markdown content for `DAL.md`.
---
You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team.
## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- `Sketch.png`: the UI sketch
- Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly.
- `Model.md`: the conceptual model
- `DAL.md`: the Data Access Layer spec
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
**Step 3:** Generate the complete markdown content for a new file, `UI.md`.
---
## Markdown Output Structure
Use this template for the entire document.
```markdown
# UI Layer Specification
This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`.
## 1. High-Level Structure
The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components.
### 1.1. `App` Component
The root component that sets up routing and global providers.
- **Location**: `src/App.tsx`
- **Purpose**: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page.
- **Composition**:
- Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`.
- Renders `[Page Name]`.
## 2. Pages
### 2.1. `[Page Name]`
- **Location:** `src/pages/PageName.tsx`
- **Purpose:** [Briefly describe the main goal and function of this page]
- **Data Access:**
[List the specific hooks and functions this component uses to fetch or manage its data]
- **Internal State:**
[Describe any state managed internally by this page using `useState`]
- **Composition:**
[Briefly describe the content of this page]
- **User Interactions:**
[Describe how the user interacts with this page]
- **Logic:**
[If applicable, provide additional comments on how this page should work]
## 3. Components
### 3.1. `[Component Name]`
- **Location:** `src/components/ComponentName.tsx`
- **Purpose:** [Explain what this component does and where it's used]
- **Props:**
[TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.]
- **Data Access:**
[List the specific hooks and functions this component uses to fetch or manage its data]
- **Internal State:**
[Describe any state managed internally by this component using `useState`]
- **Composition:**
[Briefly describe the content of this component]
- **User Interactions:**
[Describe how the user interacts with the component]
- **Logic:**
[If applicable, provide additional comments on how this component should work]
```
---
## Final Instructions
- **No Assumptions:** Base every detail on the visual evidence in the sketch, not on common design patterns.
- **Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification.
- **Do not add redundant empty lines between items.**
Your final output should be the complete, raw markdown content for `UI.md`.
---
You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a plan to build a Data Access Layer for an application based on a spec.
## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- `DAL.md`: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices
**Step 3:** Create a step-by-step plan to build a Data Access Layer according to the spec.
Each task should:
- Focus on one concern
- Be reasonably small
- Have a clear start + end
- Contain clearly defined Objectives and Acceptance Criteria
The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly.
I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between.
## Final Instructions
- Note that we are not starting from scratch; the basic template has already been created using Vite.
- Do not add redundant empty lines between items.
Your final output should be the complete, raw markdown content for `DAL-plan.md`.
---
You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch.
## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- `UI.md`: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- `Sketch.png`: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
**Step 3:** Create a step-by-step plan to build a UI layer according to the spec and the sketch.
Each task must:
- Focus on one concern.
- Be reasonably small.
- Have a clear start + end.
- Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding.
- Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan.
I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between.
## Final Instructions
- Note that we are not starting from scratch; the basic template has already been created using Vite, and the Data Access Layer has been built successfully.
- For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data—do not use mock data (note that the Data Access Layer has already been built successfully).
- The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks.
- Do not add redundant empty lines between items.
Your final output should be the complete, raw markdown content for `UI-plan.md`.
---
You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec.
## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices
- @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices
**Step 3:** Read the plan:
- @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application.
**Step 4:** Build a Data Access Layer for this application according to the spec and following the plan.
- Complete one task from the plan at a time.
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so.
- Do not do anything else. At this point, we are focused on building the Data Access Layer.
## Final Instructions
- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.
- Do not start the development server; I'll do it myself.
---
You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch.
## Workflow
Follow these steps precisely:
**Step 1:** Analyze the documentation carefully:
- @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- @docs/intent/Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it.
There should be no ambiguity about what we are building.
**Step 2:** Check out the guidelines:
- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices
**Step 3:** Read the plan:
- @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application.
**Step 4:** Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan:
- Complete one task from the plan at a time.
- Make sure you build the UI according to the sketch; this is very important.
- After each task, stop, so that I can test it. Don’t move to the next task before I tell you to do so.
## Final Instructions
- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.
- Follow Ant Design's default styles and components.
- Do not touch the data access layer: it's ready and it's perfect.
- Do not start the development server; I'll do it myself.
# Guidelines: TypeScript Best Practices
## Type System & Type Safety
- Use TypeScript for all code and enable strict mode.
- Ensure complete type safety throughout stores, hooks, and component interfaces.
- Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types.
- Entity interfaces should extend common patterns while maintaining their specific properties.
- Use TypeScript type guards in filtering operations for relationship safety.
- Avoid the 'any' type; prefer 'unknown' when necessary.
- Use generics to create reusable components and functions.
- Utilize TypeScript's features to enforce type safety.
- Use type-only imports (import type { MyType } from './types') when importing types, because verbatimModuleSyntax is enabled.
- Avoid enums; use maps instead.
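For instance, a minimal sketch of this rule (the `TEST_STATUS` map and its values are hypothetical):
```typescript
// Instead of an enum, define a readonly map and derive a union type from it.
export const TEST_STATUS = {
  New: 'new',
  Planned: 'planned',
  InProgress: 'in-progress',
} as const;

// Resolves to: 'new' | 'planned' | 'in-progress'
export type TestStatus = (typeof TEST_STATUS)[keyof typeof TEST_STATUS];

// The map works anywhere an enum would, avoiding the quirks of TypeScript enums.
function isScheduled(status: TestStatus): boolean {
  return status === TEST_STATUS.Planned;
}
```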
## Naming Conventions
- Names should reveal intent and purpose.
- Use PascalCase for component names and types/interfaces.
- Suffix interfaces for React props with 'Props' (e.g., ButtonProps).
- Use camelCase for variables and functions.
- Use UPPER_CASE for constants.
- Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx).
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Favor named exports for components.
## Code Structure & Patterns
- Write concise, technical TypeScript code with accurate examples.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use the "function" keyword for pure functions.
- Use curly braces for all conditionals for consistency and clarity.
- Structure files appropriately based on their purpose.
- Keep related code together and encapsulate implementation details.
## Performance & Error Handling
- Use immutable and efficient data structures and algorithms.
- Create custom error types for domain-specific errors.
- Use try-catch blocks with typed catch clauses (see the sketch after this list).
- Handle Promise rejections and async errors properly.
- Log errors appropriately and handle edge cases gracefully.
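A minimal sketch of the custom-error and typed-catch rules above; `EntityNotFoundError` is a hypothetical domain error. TypeScript only permits `any` or `unknown` as a catch-clause annotation, so a “typed catch” in practice means catching `unknown` and narrowing:
```typescript
// Extending Error is the idiomatic way to define custom error types,
// even in otherwise class-free codebases.
export class EntityNotFoundError extends Error {
  constructor(public readonly entityId: string) {
    super(`Entity not found: ${entityId}`);
    this.name = 'EntityNotFoundError';
  }
}

try {
  throw new EntityNotFoundError('42');
} catch (error: unknown) {
  if (error instanceof EntityNotFoundError) {
    console.error(`Missing entity: ${error.entityId}`); // typed access after narrowing
  } else {
    throw error; // re-throw anything we don't know how to handle
  }
}
```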
## Project Organization
- Place shared types in a types directory.
- Use barrel exports (index.ts) for organizing exports.
- Structure files and directories based on their purpose.
## Other Rules
- Use comments to explain complex logic or non-obvious decisions.
- Follow the single responsibility principle: each function should do exactly one thing.
- Follow the DRY (Don't Repeat Yourself) principle.
- Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only.
- Use 2 spaces for indentation (no tabs).
# Guidelines: React Best Practices
## Component Structure
- Use functional components over class components
- Keep components small and focused
- Extract reusable logic into custom hooks
- Use composition over inheritance
- Implement proper prop types with TypeScript
- Structure React files: exported component, subcomponents, helpers, static content, types
- Use declarative TSX for React components
- Ensure that UI components use custom hooks for data fetching and operations rather than receive data via props, except for simplest components
## React Patterns
- Utilize useState and useEffect hooks for state and side effects
- Use React.memo for performance optimization when needed
- Utilize React.lazy and Suspense for code-splitting
- Implement error boundaries for robust error handling
- Keep styles close to components
## React Performance
- Avoid unnecessary re-renders
- Lazy load components and images when possible
- Implement efficient state management
- Optimize rendering strategies
- Optimize network requests
- Employ memoization techniques (e.g., React.memo, useMemo, useCallback)
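As a sketch of the last point (component names here are hypothetical): `React.memo` skips re-rendering a child when its props are shallow-equal, and `useCallback` keeps a callback's identity stable between renders so the memoized child is not invalidated:
```tsx
import { memo, useCallback, useState } from 'react';

interface RowProps {
  label: string;
  onSelect: () => void;
}

// Re-renders only when `label` or `onSelect` actually change.
const Row = memo(function Row({ label, onSelect }: RowProps) {
  return <button onClick={onSelect}>{label}</button>;
});

export function Toolbar() {
  const [count, setCount] = useState(0);
  // Stable identity: without useCallback, a new function would be created
  // on every render and Row's memoization would never take effect.
  const handleSelect = useCallback(() => setCount((c) => c + 1), []);
  return (
    <div>
      <span>{count}</span>
      <Row label="Increment" onSelect={handleSelect} />
    </div>
  );
}
```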
## React Project Structure
```
/src
- /components - UI components (every component in a separate file)
- /hooks - public-facing custom hooks (every hook in a separate file)
- /providers - React context providers (every provider in a separate file)
- /pages - page components (every page in a separate file)
- /stores - entity-specific Zustand stores (every store in a separate file)
- /styles - global styles (if needed)
- /types - shared TypeScript types and interfaces
```
# Guidelines: Zustand Best Practices
## Core Principles
- **Implement a data layer** for this React application following this specification carefully and to the letter.
- **Complete separation of concerns**: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic.
- **Shared state architecture**: Different UI components should work with the same shared state, despite using entity-specific hooks separately.
## Technology Stack
- **State management**: Use Zustand for state management with automatic localStorage persistence via the `persist` middleware.
## Store Architecture
- **Base entity:** Implement a BaseEntity interface with common properties that all entities extend:
```typescript
export interface BaseEntity {
id: string;
createdAt: string; // ISO 8601 format
updatedAt: string; // ISO 8601 format
}
```
- **Entity-specific stores**: Create separate Zustand stores for each entity type.
- **Dictionary-based storage**: Use dictionary/map structures (`Record`) rather than arrays for O(1) access by ID.
- **Handle relationships**: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate. A minimal store sketch follows this list.
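To make these rules concrete, here is a minimal sketch of an entity store; the `Test` entity, file paths, and `tests-storage` key are hypothetical. It also anticipates the existence checks, `crypto.randomUUID()` IDs, and return-type conventions described later in these guidelines:
```typescript
import { create } from 'zustand';
import { persist } from 'zustand/middleware';
import type { BaseEntity } from '../types/entities';

// A hypothetical entity used for illustration only.
export interface Test extends BaseEntity {
  hypothesis: string;
}

interface TestsState {
  tests: Record<string, Test>; // dictionary for O(1) access by ID
  addTest: (params: { hypothesis: string }) => string;
  updateTest: (params: { id: string; hypothesis: string }) => void;
  removeTest: (params: { id: string }) => void;
}

export const useTestsStore = create<TestsState>()(
  persist(
    (set) => ({
      tests: {},
      // `add` returns the generated ID for immediate entity access.
      addTest: ({ hypothesis }) => {
        const id = crypto.randomUUID();
        const now = new Date().toISOString();
        set((state) => ({
          tests: { ...state.tests, [id]: { id, createdAt: now, updatedAt: now, hypothesis } },
        }));
        return id;
      },
      // `update` validates existence before proceeding and returns void.
      updateTest: ({ id, hypothesis }) =>
        set((state) => {
          const existing = state.tests[id];
          if (!existing) {
            return state; // graceful no-op on a non-existent entity
          }
          return {
            tests: {
              ...state.tests,
              [id]: { ...existing, hypothesis, updatedAt: new Date().toISOString() },
            },
          };
        }),
      // `remove` also checks existence and returns void.
      removeTest: ({ id }) =>
        set((state) => {
          if (!state.tests[id]) {
            return state;
          }
          const { [id]: _removed, ...rest } = state.tests;
          return { tests: rest };
        }),
    }),
    { name: 'tests-storage' } // localStorage persistence key
  )
);
```
Returning the unchanged state on a failed lookup keeps the no-op behavior explicit; a real store might instead log or raise a domain error, per the error-handling patterns below.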
## Hook Layer
The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities.
### Core Principles
1. **One Hook Per Entity**: There will be a single, comprehensive custom hook for each entity (e.g., `useBlogPosts`, `useCategories`). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created.
2. **Return reactive data, not getter functions**: To prevent stale data, hooks must return the state itself, not a function that retrieves state. Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes.
3. **Expose Dictionaries for O(1) Access**: To provide simple and direct access to data, every hook will return a dictionary (`Record`) of the relevant items.
### The Standard Hook Pattern
Every entity hook will follow this implementation pattern (a sketch follows the list):
1. **Subscribe** to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data.
2. **Filter** the data based on the parameters passed into the hook. This logic will be memoized with `useMemo` for efficiency. If no parameters are provided, the hook will operate on the entire dataset.
3. **Return a Consistent Shape**: The hook will always return an object containing:
* A **filtered and sorted array** (e.g., `blogPosts`) for rendering lists.
* A **filtered dictionary** (e.g., `blogPostsDict`) for convenient `O(1)` lookup within the component.
* All necessary **action functions** (`add`, `update`, `remove`) and **relationship operations**.
* All necessary **helper functions** and **derived data objects**. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes.
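A sketch of this pattern for the hypothetical `Test` entity from the store sketch above; all names and paths are illustrative:
```typescript
import { useMemo } from 'react';
import { useTestsStore } from '../stores/tests';
import type { Test } from '../stores/tests';

interface UseTestsParams {
  search?: string;
}

export function useTests({ search }: UseTestsParams = {}) {
  // 1. Subscribe to the entire dictionary so the hook reacts to any change.
  const allTests = useTestsStore((state) => state.tests);
  const addTest = useTestsStore((state) => state.addTest);
  const updateTest = useTestsStore((state) => state.updateTest);
  const removeTest = useTestsStore((state) => state.removeTest);

  // 2. Filter and sort, memoized for efficiency.
  const tests = useMemo(() => {
    const all = Object.values(allTests);
    const filtered = search
      ? all.filter((t) => t.hypothesis.toLowerCase().includes(search.toLowerCase()))
      : all;
    return filtered.sort((a, b) => a.createdAt.localeCompare(b.createdAt));
  }, [allTests, search]);

  // Filtered dictionary for O(1) lookup within components.
  const testsDict = useMemo(
    () => Object.fromEntries(tests.map((t) => [t.id, t] as const)),
    [tests]
  );

  // 3. Consistent return shape: array, dictionary, and actions.
  return { tests, testsDict, addTest, updateTest, removeTest };
}
```
Because the hook subscribes to the whole dictionary rather than exposing a getter, any add, update, or remove triggers a recompute of the memoized array and dictionary, and components always see live data.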
## API Design Standards
- **Object Parameters**: Use object parameters instead of multiple direct parameters for better extensibility:
```typescript
// ✅ Preferred
add({ title, categoryIds })
// ❌ Avoid
add(title, categoryIds)
```
- **Internal Methods**: Use underscore-prefixed methods for cross-store operations to maintain clean separation.
## State Validation Standards
- **Existence checks**: All `update` and `remove` operations should validate entity existence before proceeding.
- **Relationship validation**: Verify both entities exist before establishing relationships between them.
## Error Handling Patterns
- **Operation failures**: Define behavior when operations fail (e.g., updating non-existent entities).
- **Graceful degradation**: How to handle missing related entities in helper functions.
## Other Standards
- **Secure ID generation**: Use `crypto.randomUUID()` for entity ID generation instead of custom implementations for better uniqueness guarantees and security.
- **Return type consistency**: `add` operations return generated IDs for component workflows requiring immediate entity access, while `update` and `remove` operations return `void` to maintain clean modification APIs.
From Prompt To Partner: Designing Your Custom AI Assistant
Lyndon Cerejo
2025-09-26T10:00:00+00:00
In “A Week In The Life Of An AI-Augmented Designer”, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In “Prompting Is A Design Act”, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we’ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.
AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function — allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and who can support not just you, but your entire team.
If you’ve ever copied and pasted the same mega-prompt for the nth time, you’ve experienced the pain. An AI assistant turns a one-off “great prompt” into a dependable teammate. And if you’ve used any of the publicly available AI Assistants, you’ve realized quickly that they’re usually generic and not tailored for your use.
Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in your voice, with your context and constraints baked in. Instead of reinventing the wheel by writing new prompts each time, repeatedly copy-pasting your structured prompts, or spending cycles trying to make a public AI Assistant work the way you need it to, your own AI Assistant lets you and others get better, repeatable, consistent results faster.
Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:
Public AI assistants are like stock templates. While they serve a specific purpose compared to the generic AI platform, and are useful starting points, if you want something tailored to your needs and team, you should really build your own.
A few reasons for building your AI Assistant instead of using a public assistant someone else created include:
Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team’s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:
Don’t share anything you wouldn’t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.
Note: We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model’s training, mood, and flair for creative hallucinations.
An AI Assistant is great when the same audience has the same problem often. When the fit isn’t there, the risk is high; you should skip building an AI Assistant for now, as explained below:
Just because these are signs that you should not build your AI Assistant now doesn’t mean you shouldn’t ever. Revisit this decision when you notice that you’re using the same prompt weekly, multiple teammates ask for it, or the manual time spent copy-pasting and refining starts exceeding ~15 minutes. Those are signs that an AI Assistant will pay back quickly.
In a nutshell, build an AI Assistant when you can name the problem, the audience, frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.
This should go without saying to UX professionals, but it’s worth a reminder: if you’re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.
Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.
You’ve already done the heavy lifting with WIRE+FRAME. Now you’re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.
A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:
But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: “An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.”
And that’s the one we will build in this article — say hello to Insight Interpreter.
Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback that may not have been asked for through social media or public reviews (unsolicited). This is a goldmine of information, but it can be messy and overwhelming trying to make sense of it all, and it’s nobody’s idea of fun. Here’s where an AI assistant like the Insight Interpreter can help. We’ll turn the example prompt created using the WIRE+FRAME framework in Prompting Is A Design Act into a CustomGPT.
When you start building a CustomGPT by visiting https://chat.openai.com/gpts/editor, you’ll see two paths:
The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we’ll walk through using it in configure mode as a more formal checklist in this article.

Paste your full WIRE+FRAME prompt into the Instructions section exactly as written. As a refresher, I’ve included the mapping and snippets of the detailed prompt from before:
If you’re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective Instructions sections.
In the knowledge section, upload up to 20 files, clearly labeled, that will help the CustomGPT respond effectively. Keep files small and versioned: reviews_Q2_2025.csv beats latestfile_final2.csv. For this prompt, which analyzes customer feedback, generates themes organized by customer journey, and rates them by severity and effort, the files could include:
An example of a file to help it parse uploaded data is shown below:

Do one last visual check to make sure you’ve filled in all applicable fields and the basics are in place: is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.
Use the Preview panel to verify that your CustomGPT performs as well as, or better than, your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn’t now, check whether new instructions or knowledge files are overriding it.
When things don’t look right, here are quick debugging fixes:
When your CustomGPT is ready, you can publish it via the “Create” option. Select the appropriate access option:
But handoff doesn’t end with hitting publish; you should maintain your CustomGPT to keep it relevant and useful:
And that’s it! Our Insight Interpreter is now live!
Since we used the WIRE+FRAME prompt from the previous article to create the Insight Interpreter CustomGPT, I compared the outputs:


The results are similar, with slight differences, and that’s expected. If you compare the results carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes show more visible differences. Most of the difference comes from the CustomGPT’s knowledge and training files (instructions, examples, and guardrails), which now live as always-on guidance.
Keep in mind that in reality, Generative AI is by nature generative, so outputs will vary. Even with the same data, you won’t get identical wording every time. In addition, underlying models and their capabilities rapidly change. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare for structure, priorities, and evidence rather than exact wording.
While I’d love for you to use the Insight Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own. That way you get the real AI Assistant you need, tailored to exactly what you or your team requires, including the tone, context, and output formats!
We just built the Insight Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI Assistant:
The best AI Assistants come from carefully inspecting your workflow and looking for areas where AI can augment your work regularly and repetitively. Then follow the steps above to build a team of customized AI assistants.
In this AI x Design series, we’ve gone from messy prompting (“A Week In The Life Of An AI-Augmented Designer”) to a structured prompt framework, WIRE+FRAME (“Prompting Is A Design Act”). And now, in this article, your very own reusable AI sidekick.
CustomGPTs don’t replace designers but augment them. The real magic isn’t in the tool itself, but in how you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They extend your craft, codify your expertise, and give your team leverage that generic AI models can’t.
Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.
Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)
Yegor Gilyov
2025-09-24T17:00:00+00:00
There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the “skeptical” end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I’ve been doing UX and product design for about 25 years now, and I’m always keen to learn new tricks and share them with colleagues. Right now, I’m interested in AI-assisted prototyping, and I’m here to share my thoughts on how it can change the process of designing digital products.
To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it’s the Triple Diamond model that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: Solution Discovery.

How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I’ll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There’s no doubt that software engineering in the AI era is undergoing dramatic changes, but I’m not an engineer — I’m a designer, so let me focus on my “sweet spot”.
And my “sweet spot” has a specific flavor: designing enterprise applications. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens.
I’ll start by outlining the typical artifacts designers create during Solution Discovery. Then, I’ll examine the problems with how this part of the process often plays out in practice. Finally, we’ll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call “vibe coding,” or calls for a more deliberate and disciplined way of working.
The Solution Discovery phase begins with the key output from the preceding research: a well-defined problem and a core hypothesis for a solution. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept.
Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details.
It’s important to think holistically, considering different aspects of the solution. I would highlight three dimensions:
One can argue that those are layers rather than dimensions, and each of them builds on the previous ones (for example, according to Semantic IxD by Daniel Rosenberg), but I see them more as different facets of the same thing, so the design process through them is not necessarily linear: you may need to switch from one perspective to another many times.
This is how different types of design artifacts map to these dimensions:

As Solution Discovery progresses, designers move from the left part of this map to the right, from low-fidelity to high-fidelity, from ideating to validating, from diverging to converging.
Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow.
This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we go towards artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.
Faced with this difficult trade-off, often teams opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized “bento box” of design artifacts we saw previously gets shrunk down to just one compartment: creating static high-fidelity mockups.

This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see nice pictures, while artifacts representing user flows and conceptual models receive much less attention and priority. They are too high-level and hardly usable for validation, and usually, not everyone can understand them.
On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board.
It’s no surprise, then, that many teams default to the perceived safety of static mockups, seeing them as a middle ground between the roughness of the sketches and the overwhelming complexity and fragility that prototypes can have.
As a result, validation with users doesn’t provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building. To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers’ attention has been heavily skewed toward visualization.
The result is often a design artifact that resembles the famous “horse drawing” meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows).

While this is a familiar problem across the industry, its severity depends on the nature of the project. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the “lopsided horse” problem is unlikely to be fatal.
However, it’s different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.
This situation makes me wonder:
How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one?
If we were able to answer this question, we would:
Of course, the desire for such a process is not new. This vision of a truly prototype-driven workflow is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal was still out of reach because prototyping in code took so much work and specialized talents. Now, the rise of powerful AI coding assistants changes this equation in a big way.
And the answer seems to be obvious: vibe coding!
“Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding is acting rather like pair programmers in a conversational loop.”
The original tweet by Andrej Karpathy:

The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe when you describe a solution in plain language, and moments later, you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype. But is this method reliable enough to build our new design process around it?
Vibe coding mixes up a description of the UI with a description of the system itself, resulting in a prototype based on changing assumptions rather than a clear, solid model.
The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation.
This is like hiring a builder and telling them what to do one sentence at a time, without ever showing them a blueprint. They might build a wall that looks great, but you can’t be sure it will bear weight.
I’ll give you one example illustrating problems you may face if you try to jump over the chasm between your idea and a live prototype relying on pure vibe coding in the spirit of Andrej Karpathy’s tweet. Imagine I want to prototype a solution to keep track of tests to validate product ideas. I open my vibe coding tool of choice (I intentionally don’t disclose its name, as I believe they all are awesome yet prone to similar pitfalls) and start with the following prompt:
I need an app to track tests. For every test, I need to fill out the following data:
- Hypothesis (we believe that...)
- Experiment (to verify that, we will...)
- When (a single date, or a period)
- Status (New/Planned/In Progress/Proven/Disproven)
And in a minute or so, I get a working prototype:

Inspired by success, I go further:
Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status.
And the result is still pretty good:

But then I want to extend the functionality related to product ideas:
Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests.
And from this point on, the results are getting more and more confusing.
The flow of creating tests hasn’t changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click “Product Ideas” in the top navigation, I see nothing:

I need to create my ideas from scratch, and they are not connected to the tests I created before:

Moreover, when I go back to “Tests”, I see that they are all gone. Clearly something went wrong, and my AI assistant confirms that:
No, this is not expected behavior — it’s a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don’t sync with the product ideas page.
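To make that failure mode concrete, here is a minimal sketch of the kind of state duplication the assistant described; all component and state names are hypothetical:
```tsx
import { useState } from 'react';

// Two pages each own a private copy of "tests". Data created on one
// page never reaches the other, because there is no shared store.
export function IndexPage() {
  const [tests, setTests] = useState<string[]>([]); // local copy #1
  return (
    <button onClick={() => setTests([...tests, 'New test'])}>
      Add test ({tests.length})
    </button>
  );
}

export function ProductIdeasPage() {
  const [tests] = useState<string[]>([]); // local copy #2, always empty
  return <p>{tests.length} tests</p>; // never reflects IndexPage's data
}
```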
Sure, it eventually fixed that bug, but note that we hit it on just the third step, when we asked to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face.
Also note that this specific problem, a not fully thought-out relationship between two entities (product ideas and tests), is not confined to the technical level, and therefore it didn’t go away once the technical bug was fixed. The underlying conceptual model is still broken, and it manifests in the UI as well.
For example, you can still create “orphan” tests that are not connected to any item from the “Product Ideas” page. As a result, you may end up with different numbers of ideas and tests on different pages of the app:

Let’s diagnose what really happened here. The AI’s response that this is a “bug” is only half the story. The true root cause is a conceptual model failure. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal. It demonstrates the fundamental weakness of building without a blueprint, which is precisely what vibe coding encourages.
Don’t take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about “garbage in, garbage out” is still valid. If you don’t express your intent clearly enough, chances are the result won’t fulfill your expectations.
Another problem worth mentioning is that even if you wrestle it into a state that works, the artifact is a black box that can hardly serve as reliable specifications for the final product. The initial meaning is lost in the conversation, and all that’s left is the end result. This makes the development team “code archaeologists,” who have to figure out what the designer was thinking by reverse-engineering the AI’s code, which is frequently very complicated. Any speed gained at the start is lost right away because of this friction and uncertainty.
Pure vibe coding, for all its allure, encourages building without a blueprint. As we’ve seen, this results in structural ambiguity, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off.
This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process.
In Part 2 of this series, “A Practical Guide to Building with Clarity”, I will outline the entire workflow for Intent Prototyping. This method places the explicit intent of the designer at the forefront of the process while embracing the potential of AI-assisted coding.
Thank you for reading, and I look forward to seeing you in Part 2.
The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence
Victor Yocco
2025-09-19T10:00:00+00:00
Misuse of and misplaced trust in AI are becoming unfortunately common. For example, lawyers trying to leverage the power of generative AI for research submit court filings citing multiple compelling legal precedents. The problem? The AI had confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become a viral cautionary tale, shared across social media as a stark example of AI’s fallibility.
This goes beyond a technical glitch; it’s a catastrophic failure of trust in AI tools in an industry where accuracy and trust are critical. The trust issue here is twofold — the law firms are submitting briefs in which they have blindly over-trusted the AI tool to return accurate information. The subsequent fallout can lead to a strong distrust in AI tools, to the point where platforms featuring AI might not be considered for use until trust is reestablished.
Issues with trusting AI aren’t limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as healthcare and education. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. I’m guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulls up a completely different name than the one I’d requested.

With digital products moving to incorporate generative and agentic AI at an increasingly frequent rate, trust has become the invisible user interface. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, we’re on the front lines of a new twist on a common challenge. How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI?
Trust isn’t a mystical quality. It is a psychological construct built on predictable factors. I won’t dive deep into academic literature on trust in this article. However, it is important to understand that trust is a concept that can be understood, measured, and designed for. This article will provide a practical guide for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems.
To build trust, we must first understand its components. Think of trust like a four-legged stool. If any one leg is weak, the whole thing becomes unstable. Based on classic psychological models, we can adapt these “legs” for the AI context.
This is the most straightforward pillar: Does the AI have the skills to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust.
This moves from function to intent. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route even if it’s a few minutes longer might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence. This is where user fears, such as concerns about job displacement, directly challenge trust—the user starts to believe the AI is not on their side.
Does the AI operate on predictable and ethical principles? This is about transparency, fairness, and honesty. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates integrity. So does an AI job recruiting tool whose algorithm encodes subtle yet extremely harmful social biases.
Can the user form a stable and accurate mental model of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust.
Our goal as UX professionals shouldn’t be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, such as the legal briefs referenced at the beginning of this article. The goal is well-calibrated trust.
Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve:
Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust.

Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of qualitative, quantitative, and behavioral methods.
During interviews and usability tests, go beyond “Was that easy to use?” and listen for the underlying psychology. Here are some questions you can start using tomorrow:
One of the most potent challenges to an AI’s Benevolence is the fear of job displacement. When a participant expresses this, it is a critical research finding. It requires a specific, ethical probing technique.
Imagine a participant says, “Wow, it does that part of my job pretty well. I guess I should be worried.”
An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores:
“Thank you for sharing that; it’s a vital perspective, and it’s exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work with you to make your job better, not to replace it?”
This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldn’t pretend this fear doesn’t exist, nor should we pretend that every AI feature is being implemented with pure intention. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist within their roles.
You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items:
Over time, these metrics can track how trust is changing as your product evolves.
Note: If you want to go beyond these simple questions that I’ve made up, there are numerous scales (measurements) of trust in technology that exist in academic literature. It might be an interesting endeavor to measure some relevant psychographic and demographic characteristics of your users and see how that correlates with trust in AI/your product. Table 1 at the end of the article contains four examples of current scales you might consider using to measure trust. You can decide which is best for your application, or you might pull some of the items from any of the scales if you aren’t looking to publish your findings in an academic journal, yet want to use items that have been subjected to some level of empirical scrutiny.
People’s true feelings are often revealed in their actions. You can use behaviors that reflect the specific context of use for your product. Here are a few general metrics that might apply to most AI tools that give insight into users’ behavior and the trust they place in your tool.
Once you’ve researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows.
Explainability isn’t about showing users the code. It’s about providing a useful, human-understandable rationale for a decision.
Instead of:
“Here is your recommendation.”
Try:
“Because you frequently read articles about UX research methods, I’m recommending this new piece on measuring trust in AI.”
This addition transforms the AI from an opaque oracle into a transparent, logical partner.
Many of the popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into the response they provide to a user. Figure 3 shows the steps Gemini went through to provide me with a non-response when I asked it to help me generate the masterpiece displayed above in Figure 2. While this might be more information than most users care to see, it provides a useful resource for a user to audit how the response came to be, and it has provided me with instructions on how I might proceed to address my task.

Figure 4 shows an example of a scorecard OpenAI makes available as an attempt to increase users’ trust. These scorecards are available for each ChatGPT model and go into the specifics of how the models perform as it relates to key areas such as hallucinations, health-based conversations, and much more. In reading the scorecards closely, you will see that no AI model is perfect in any area. The user must remain in a “trust but verify” mode to make the relationship between human reality and AI work in a way that avoids potential catastrophe. There should never be blind trust in an LLM.

Your AI will make mistakes.
Trust is not determined by the absence of errors, but by how those errors are handled.
Likewise, your AI can’t know everything. You should acknowledge this to your users.
UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle.
This can include the following:
All of these considerations highlight the critical role of UX writing in the development of trustworthy AI. UX writers are the architects of the AI’s voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without thoughtful UX writing, even the most technologically advanced AI can feel opaque and untrustworthy.
The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in human-centered language and design is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users.
A few key areas for UX writers to focus on when writing for AI include:
As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities.
We must draw a hard line between designing for calibrated trust and designing to manipulate users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing.
Another example of trustwashing would be if an AI medical diagnostic tool occasionally misdiagnoses conditions, but the user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight.
Our goal must be to create genuinely trustworthy systems, not just the perception of trust. Using these principles to lull users into a false sense of security is a betrayal of our professional ethics.
To avoid and prevent trustwashing, researchers and UX teams should:
When our research uncovers deep-seated distrust or potential harm — like the fear of job displacement — our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, I’ve seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap.
I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge.
For example, instead of stating “Users don’t trust our AI because they fear job displacement,” I might frame it as “Addressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them.” This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business.
It’s no secret that one of the more appealing areas for businesses to use AI is workforce reduction. In reality, there will be many cases where businesses look to cut 10–20% of a particular job family due to the perceived efficiency gains of AI. Giving users the opportunity to shape the product, however, may steer it in a direction that makes them feel safer than if they had no voice at all. We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience informed by the human experts who have long been doing the task being automated.
The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our current time. Building products that are not just usable but also responsible, humane, and trustworthy is our obligation as UX professionals.
Trust is not a soft metric. It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating “intelligent” products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.
| Survey Tool Name | Focus | Key Dimensions of Trust | Citation |
|---|---|---|---|
| Trust in Automation Scale | 12-item questionnaire to assess trust between people and automated systems. | Measures a general level of trust, including reliability, predictability, and confidence. | Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. |
| Trust of Automated Systems Test (TOAST) | 9-item questionnaire used to measure user trust in a variety of automated systems, designed for quick administration. | Divided into two main subscales: Understanding (user’s comprehension of the system) and Performance (belief in the system’s effectiveness). | Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). Initial validation of the trust of automated systems test (TOAST). The Journal of Social Psychology, 160(6), 735–750. |
| Trust in Automation Questionnaire | A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis. | Measures 6 factors: Reliability, Understandability, Propensity to trust, Intentions of developers, Familiarity, Trust in automation | Körber, M. (2018). Theoretical considerations and development of a questionnaire to measure trust in automation. In Proceedings 20th Triennial Congress of the IEA. Springer. |
| Human Computer Trust Scale | 12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology. | Divided into two key factors. | Gulati, S., Sousa, S., & Lamas, D. (2019). Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology. |
To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:
How To Minimize The Environmental Impact Of Your Website
James Chudley
2025-09-18T10:00:00+00:00
Climate change is the single biggest health threat to humanity, accelerated by human activities such as the burning of fossil fuels, which generate greenhouse gases that trap the sun’s heat.
The average temperature of the earth’s surface is now 1.2°C warmer than it was in the late 1800s, and the increase is projected to more than double by the end of the century.

The consequences of climate change include intense droughts, water shortages, severe fires, melting polar ice, catastrophic storms, and declining biodiversity.
Shockingly, the internet is responsible for higher global greenhouse emissions than the aviation industry, and is projected to be responsible for 14% of all global greenhouse gas emissions by 2040.
If the internet were a country, it would be the 4th largest polluter in the world and represents the largest coal-powered machine on the planet.
But how can something digital like the internet produce harmful emissions?
Internet emissions come from powering the infrastructure that drives the internet, such as the vast data centres and data transmission networks that consume huge amounts of electricity.
Internet emissions also come from the global manufacturing, distribution, and usage of the estimated 30.5 billion devices (phones, laptops, etc.) that we use to access the internet.
Unsurprisingly, internet related emissions are increasing, given that 60% of the world’s population spend, on average, 40% of their waking hours online.
As responsible digital professionals, we must act quickly to minimise the environmental impact of our work.
It is encouraging to see the UK government prompt action by adding “Minimise environmental impact” to its best practice design principles, but there is still too much talking and not enough corrective action taking place within our industry.

The reality of many tightly constrained, fast-paced, and commercially driven web projects is that minimising environmental impact is far from the agenda.
So how can we make the environment more of a priority and talk about it in ways that stakeholders will listen to?
A eureka moment on a recent web optimisation project gave me an idea.
I led a project to optimise the mobile performance of www.talktofrank.com, a government drug advice website that aims to keep everyone safe from harm.
Mobile performance is critically important for the success of this service to ensure that users with older mobile devices and those using slower network connections can still access the information they need.
Our work to minimise page weights focused on purely technical changes, made by our developer following recommendations from tools such as Google Lighthouse, which reduced the size of the webpages in a key user journey by up to 80%. As a result, pages downloaded up to 30% faster, and the carbon footprint of the journey was reduced by 80%.
We hadn’t set out to reduce the carbon footprint, but seeing these results led to my eureka moment.
I realised that by minimising page weights, you improve performance (which is a win for users and service owners) and also consume less energy (due to needing to transfer and store less data), creating additional benefits for the planet — so everyone wins.
This felt like a breakthrough because business, user, and environmental requirements are often at odds with one another. By focussing on minimising websites to be as simple, lightweight and easy to use as possible, you get benefits that extend beyond the triple bottom line of people, planet and profit to include performance and purpose.
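If you want to see where your own pages stand, here is a minimal sketch, runnable in a browser console, that sums transferred bytes using the Resource Timing API. It is an illustration rather than the tooling used on the project:

```js
// Sketch: roughly measuring how many bytes a page transferred, using the
// browser's Resource Timing API (run in the DevTools console after load).
// Note: transferSize reports 0 for cross-origin resources that don't send
// a Timing-Allow-Origin header, so treat the total as a lower bound.
function pageWeightKB() {
  const nav = performance.getEntriesByType("navigation");
  const resources = performance.getEntriesByType("resource");
  const bytes = [...nav, ...resources].reduce(
    (total, entry) => total + (entry.transferSize || 0),
    0
  );
  return Math.round(bytes / 1024);
}

console.log(`This page transferred roughly ${pageWeightKB()} KB`);
```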

So why is ‘minimising’ such a great digital sustainability strategy?
In order to prioritise the environment, we need to be able to speak confidently in a language that will resonate with the business and ensure that any investment in time and resources yields the widest range of benefits possible.
So even if you feel that the environment is a very low priority on your projects, focusing on minimising page weights to improve performance (which is generally high on the agenda) presents the perfect Trojan horse for an environmental agenda (should you need one).
Doing the right thing isn’t always easy, but we’ve done it before when managing to prioritise issues such as usability, accessibility, and inclusion on digital projects.
Many of the things that make websites easier to use, more accessible, and more effective also help to minimise their environmental impact, so the things you need to do will feel familiar and achievable. Don’t worry about it all being another new thing to learn!
So this all makes sense in theory, but what’s the masterplan for putting it into practice?
The masterplan for creating websites that have minimal environmental impact is to focus on offering the maximum value from the minimum input of energy.

It’s an adaptation of Buckminster Fuller’s ‘Dymaxion’ principle, which is one of his many progressive and groundbreaking sustainability strategies for living and surviving on a planet with finite resources.
Inputs of energy include both the electrical energy that is required to operate websites and also the mental energy that is required to use them.
You can achieve this by minimising websites to their core content, features, and functionality, ensuring that everything can be justified from the perspective of meeting a business or user need. This means that anything that doesn’t add value in proportion to the energy required to provide it should be removed.
So that’s the masterplan, but how do you put it into practice?
I’ve developed a new approach called ‘Decarbonising User Journeys’ that will help you to minimise the environmental impact of your website and maximise its performance.
Note: The approach deliberately focuses on optimising key user journeys and not entire websites to keep things manageable and to make it easier to get started.
The secret here is to start small, demonstrate improvements, and then scale.
The approach consists of five simple steps:
Here’s how it works.
Your highest value user journey might be the one that your users value the most, the one that brings you the highest revenue, or the one that is fundamental to the success of your organisation.
You could also focus on a user journey that you know is performing particularly badly and has the potential to deliver significant business and user benefits if improved.
You may have lots of important user journeys, and it’s fine to decarbonise multiple journeys in parallel if you have the resources, but I’d recommend starting with one first to keep things simple.
To bring this to life, let’s consider a hypothetical example of a premiership football club trying to decarbonise its online ticket-buying journey that receives high levels of traffic and is responsible for a significant proportion of its weekly income.

Once you’ve selected your user journey, you need to benchmark it in terms of how well it meets user needs, the value it offers your organisation, and its carbon footprint.
It is vital that you understand the job it needs to do and how well it is doing it before you start to decarbonise it. There is no point in removing elements of the journey in an effort to reduce its carbon footprint, for example, if you compromise its ability to meet a key user or business need.
You can benchmark how well your user journey is meeting user needs by conducting user research alongside analysing existing customer feedback. Interviews with business stakeholders will help you to understand the value that your journey is providing the organisation and how well business needs are being met.
You can benchmark the carbon footprint and performance of your user journey using online tools such as Cardamon, Ecograder, Website Carbon Calculator, Google Lighthouse, and Bioscore. Make sure you have your analytics data to hand to help get the most accurate estimate of your footprint.
To use these tools, simply add the URL of each page of your journey, and they will give you a range of information such as page weight, energy rating, and carbon emissions. Google Lighthouse works slightly differently via a browser plugin and generates a really useful and detailed performance report as opposed to giving you a carbon rating.
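To demystify what these calculators are doing, here is a rough sketch of the underlying arithmetic. The constants are illustrative figures in the spirit of the Sustainable Web Design model, so rely on the tools above for real benchmarking:

```js
// Sketch: a back-of-the-envelope CO2 estimate for a single page view.
// The constants are illustrative approximations; real calculators use
// regularly updated models and more inputs (caching, grid mix, etc.).
const KWH_PER_GB = 0.81;       // assumed energy intensity of data transfer
const GRAMS_CO2_PER_KWH = 442; // assumed average grid carbon intensity

function gramsCO2PerView(pageWeightMB) {
  const gb = pageWeightMB / 1024;
  return gb * KWH_PER_GB * GRAMS_CO2_PER_KWH;
}

// A 2 MB page, viewed 100,000 times a month:
const perView = gramsCO2PerView(2);
console.log(`${perView.toFixed(2)} g CO2 per view`);              // ~0.70 g
console.log(`${((perView * 100000) / 1000).toFixed(1)} kg/month`); // ~69.9 kg
```

Halve the page weight and, all else being equal, you halve the footprint, which is why page weight is such a useful lever.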
A great way to bring your benchmarking scores to life is to visualise them in a similar way to how you would present a customer journey map or service blueprint.
This example focuses on just communicating the carbon footprint of the user journey, but you can also add more swimlanes to communicate how well the journey is performing from a user and business perspective, too, adding user pain points, quotes, and business metrics where appropriate.

I’ve found that adding the energy efficiency ratings is really effective because it’s an approach that people recognise from their household appliances. This adds useful context compared to showing the weight of CO2 alone (in grams or kilograms), which is generally meaningless to people.
Within my benchmarking reports, I also add a set of benchmarking data for every page within the user journey. This gives your stakeholders a more detailed breakdown and a simple summary alongside a snapshot of the benchmarked page.

Your benchmarking activities will give you a really clear picture of where remedial work is required from an environmental, user, and business point of view.
In our football user journey example, it’s clear that the ‘News’ and ‘Tickets’ pages need some attention to reduce their carbon footprint, so they would be a sensible priority for decarbonising.
Use your benchmarking results to help you set targets to aim for, such as a carbon budget, energy efficiency, maximum page weight, and minimum Google Lighthouse performance targets for each individual page, in addition to your existing UX metrics and business KPIs.
There is no right or wrong way to set targets. Choose what you think feels achievable and viable for your business, and you’ll only learn how reasonable and achievable they are when you begin to decarbonise your user journeys.

Setting targets is important because it gives you something to aim for and keeps you focused and accountable. The quantitative nature of this work is great because it gives you the ability to quickly demonstrate the positive impact of your work, making it easier to justify the time and resources you are dedicating to it.
Your objective now is to decarbonise your user journey by minimising page weights, improving your Lighthouse performance rating, and minimising pages so that they meet both user and business needs in the most efficient, simple, and effective way possible.
How you approach this will depend on the resources and skills that you have: you can focus on specific pages, or address a specific problem area, such as heavyweight images or videos, across the entire user journey.
Here’s a list of activities that will all help to reduce the carbon footprint of your user journey:
As you decarbonise your user journeys, use the benchmarking tools from step 2 to track your progress against the targets you set in step 3 and share your progress as part of your wider sustainability reporting initiatives.
All being well at this point, you will have the numbers to demonstrate how the performance of your user journey has improved and also how you have managed to reduce its carbon footprint.
Share these results with the business as soon as you have them to help you secure the resources to continue the work and initiate similar work on other high-value user journeys.
You should also start to communicate your progress with your users.
It’s important that they are made aware of the carbon footprint of their digital activity and empowered to make informed choices about the environmental impact of the websites that they use.
Ideally, every website should communicate the emissions generated from viewing its pages, both to help people make these informed choices and to encourage website providers to minimise their emissions once the figures are displayed publicly.
Often, people will have no choice but to use a specific website to complete a specific task, so it is the responsibility of the website provider to ensure the environmental impact of using their website is as small as possible.
You can also help to raise awareness of the environmental impact of websites and what you are doing to minimise your own impact by publishing a digital sustainability statement, such as Unilever’s, as shown below.

A good digital sustainability statement should acknowledge the environmental impact of your website, what you have done to reduce it, and what you plan to do next to minimise it further.
As an industry, we should normalise publishing digital sustainability statements in the same way that accessibility statements have become a standard addition to website footers.
Keep these principles in mind to help you decarbonise your user journeys:
Decarbonising user journeys shouldn’t be done as a one-off, reserved for the next time that you decide to redesign or replatform your website; it should happen on a continual basis as part of your broader digital sustainability strategy.
We know that websites are never finished and that the best websites continually improve as both user and business needs change. I’d like to encourage people to adopt the same mindset when it comes to minimising the environmental impact of their websites.
Decarbonising will happen most effectively when digital professionals challenge themselves on a daily basis to ‘minimise’ the things they are working on.
This avoids building up ‘carbon debt’: compounding technical and design debt within our websites that is always harder to remove retrospectively than to avoid in the first place.
By taking a pragmatic approach, such as optimising high-value user journeys and aligning with business metrics such as performance, we stand the best possible chance of making digital sustainability a priority.
You’ll have noticed that, other than using website carbon calculator tools, this approach doesn’t require any skills that don’t already exist within typical digital teams today. This is great because it means you’ve already got the skills that you need to do this important work.
I would encourage everyone to raise the issue of the environmental impact of the internet in their next team meeting and to try this decarbonising approach to create better outcomes for people, profit, performance, purpose, and the planet.
Good luck!
Functional Personas With AI: A Lean, Practical Workflow
Paul Boag
2025-09-16T08:00:00+00:00
Traditional personas suck for UX work. They obsess over marketing metrics like age, income, and job titles while missing what actually matters in design: what people are trying to accomplish.
Functional personas, on the other hand, focus on what people are trying to do, not who they are on paper. With a simple AI‑assisted workflow, you can build and maintain personas that actually guide design, content, and conversion decisions.
In this article, I want to breathe new life into a stale UX asset.

For too long, personas have been something that many of us created, despite the considerable work that goes into them, only to find they had limited usefulness.
I know that many of you may have given up on them entirely, but I am hoping in this post to encourage you that it is possible to create truly useful personas in a lightweight way.
Personas give you a shared lens. When everyone uses the same reference point, you cut debate and make better calls. For UX designers, developers, and digital teams, that shared lens keeps you from designing in silos and helps you prioritize work that genuinely improves the experience.
I use personas as a quick test: Would this change help this user complete their task faster, with fewer doubts? If the answer is no (or a shrug), it’s probably a sign the idea isn’t worth pursuing.
Traditional personas tell you someone’s age, job title, or favorite brand. That makes a nice poster, but it rarely changes design or copy.
Functional personas flip the script. They describe:
When you center on tasks and friction, you get direct lines from user needs to UI decisions, content, and conversion paths.

But remember, this list isn’t set in stone — adapt it to what’s actually useful in your specific situation.
One of the biggest problems with traditional personas was following a rigid template regardless of whether it made sense for your project. We must not fall into that same mistake with functional personas.
For small startups, functional personas reduce wasted effort. For enterprise teams, they keep sprawling projects grounded in what matters most.
However, because of the way we are going to produce our personas, they provide certain benefits in either case:
We can deliver these benefits because we are going to use AI to help us, rather than carrying out a lot of time-consuming new research.
Of course, doing fresh research is always preferable. But in many cases, it is not feasible due to time or budget constraints. I would argue that using AI to help us create personas based on existing assets is preferable to having no focus on the user at all.
AI tools can chew through the inputs you already have (surveys, analytics, chat logs, reviews) and surface patterns you can act on. They also help you scan public conversations around your product category to fill gaps.
I therefore recommend using AI to:
AI doesn’t remove the need for traditional research. Rather, it is a way of extracting more value from the scattered insights into users that already exist within an organization or online.
Here’s how to move from scattered inputs to usable personas. Each step builds on the last, so treat it as a cycle you can repeat as projects evolve.
Create a dedicated space within your AI tool for this work. Most AI platforms offer project management features that let you organize files and conversations:
This project space becomes your central repository where all uploaded documents, research data, and generated personas live together. The AI will maintain context between sessions, so you won’t have to re-upload materials each time you iterate. This structured approach makes your workflow more efficient and helps the AI deliver more consistent results.

Next, you can brief your AI project so that it understands what you want from it. For example:
“Act as a user researcher. Create realistic, functional personas using the project files and public research. Segment by needs, tasks, questions, pain points, and goals. Show your reasoning.”
Asking for a rationale gives you a paper trail you can defend to stakeholders.
This is where things get really powerful.
Upload everything (and I mean everything) you can put your hands on relating to the user. Old surveys, past personas, analytics screenshots, FAQs, support tickets, review snippets; dump them all in. The more varied the sources, the stronger the triangulation.
Once you have done that, you can supplement that data by getting AI to carry out “deep research” about your brand. Have AI scan recent (I often focus on the last year) public conversations for your brand, product space, or competitors. Look for:
Save the report you get back into your project.
Once you have done that, ask AI to suggest segments based on tasks and friction points (not demographics). Push back until each segment is distinct, observable, and actionable. If two would behave the same way in your flow, merge them.
This takes a little bit of trial and error and is where your experience really comes into play.
Now you have your segments, the next step is to draft your personas. Use a simple template so the document is read and used. If your personas become too complicated, people will not read them. Each persona should:
Below is a sample template you can work with:
# Persona Title: e.g. Savvy Shopper
- Person's Name: e.g. John Smith.
- Age: e.g. 24
- Job: e.g. Social Media Manager
"A quote that sums up the persona's general attitude"
## Primary Goal
What they’re here to achieve (1–2 lines).
## Key Tasks
• Task 1
• Task 2
• Task 3
## Questions & Objections
• What do they need to know before they act?
• What might make them hesitate?
## Pain Points
• Where do they get stuck?
• What feels risky, slow, or confusing?
## Touchpoints
• What channels are they most commonly interacting with?
## Service Gaps
• How is the organization currently failing this persona?
Remember, you should customize this to reflect what will prove useful within your organization.
It is important to validate that what the AI has produced is realistic. Obviously, no persona is a true representation, as it is a snapshot in time of a hypothetical user. However, we do want it to be as accurate as possible.
Share your drafts with colleagues who interact regularly with real users, such as people in support roles or research teams. Where possible, test with a handful of users. Then cut anything that you can’t defend, or correct any errors that are identified.
As you work through the above process, you will encounter problems. Here are common pitfalls and how to avoid them:
The most important thing to remember is to actually use your personas once they’ve been created. They can easily become forgotten PDFs rather than active tools. Instead, personas should shape your work and be referenced regularly. Here are some ways you can put personas to work:
With this approach, personas evolve from static deliverables into dynamic reference points your whole team can rely on.
Treat personas as a living toolkit. Schedule a refresh every quarter or after major product changes. Rerun the research pass, regenerate summaries, and archive outdated assumptions. The goal isn’t perfection; it’s keeping them relevant enough to guide decisions.
Functional personas are faster to build, easier to maintain, and better aligned with real user behavior. By combining AI’s speed with human judgment, you can create personas that don’t just sit in a slide deck; they actively shape better products, clearer interfaces, and smoother experiences.
From Data To Decisions: UX Strategies For Real-Time Dashboards
Karan Rawal
2025-09-12T15:00:00+00:00
I once worked with a fleet operations team that monitored dozens of vehicles in multiple cities. Their dashboard showed fuel consumption, live GPS locations, and real-time driver updates. Yet the team struggled to see what needed urgent attention. The problem was not a lack of data but a lack of clear indicators to support decision-making. There were no priorities, alerts, or context to highlight what mattered most at any moment.
Real-time dashboards are now critical decision-making tools in industries like logistics, manufacturing, finance, and healthcare. However, many of them fail to help users make timely and confident decisions, even when they show live data.
Designing for real-time use is very different from designing static dashboards. The challenge is not only presenting metrics but enabling decisions under pressure. Real-time users face limited time and a high cognitive load. They need clarity on actions, not just access to raw data.
This requires interface elements that support quick scanning, pattern recognition, and guided attention. Layout hierarchy, alert colors, grouping, and motion cues all help, but they must be driven by a deeper strategy: understanding what the user must decide in that moment.
This article explores practical UX strategies for real-time dashboards that enable real decisions. Instead of focusing only on visual best practices, it looks at how user intent, personalization, and cognitive flow can turn raw data into meaningful, timely insights.
A GPS app not only shows users their location but also helps them decide where to go next. In the same way, a real-time dashboard should go beyond displaying the latest data. Its purpose is to help users quickly understand complex information and make informed decisions, especially in fast-paced environments with short attention spans.
Humans have limited cognitive capacity, so they can only process a small amount of data at once. Without proper context or visual cues, rapidly updating dashboards can overwhelm users and shift attention away from key information.
To address this, I use the following approaches:
Many live dashboards fail when treated as static reports instead of dynamic tools for quick decision-making.
In my early projects, I made this mistake, resulting in cluttered layouts, distractions, and frustrated users.
Typical errors include the following:

Under stress, users depend on intuition and focus only on immediately relevant information. If a dashboard updates too quickly or shows conflicting alerts, users may delay actions or make mistakes. It is important to:
In real-time environments, the best dashboards balance speed with calmness and clarity. They are not just data displays but tools that promote live thinking and better decisions.
Many analytics tools let users build custom dashboards, but these design principles guide layouts that support decision-making. Personalization options such as custom metric selection, alert preferences, and update pacing help manage cognitive load and improve data interpretation.
| Cognitive Challenge | UX Risk in Real-Time Dashboards | Design Strategy to Mitigate |
|---|---|---|
| Users can’t track rapid changes | Confusion, missed updates, second-guessing | Use delta indicators, change animations, and trend sparklines |
| Limited working memory | Overload from too many metrics at once | Prioritize key KPIs, apply progressive disclosure |
| Visual clutter under stress | Tunnel vision or misprioritized focus | Apply a clear visual hierarchy, minimize non-critical elements |
| Unclear triggers or alerts | Decision delays, incorrect responses | Use thresholds, binary status indicators, and plain language |
| Lack of context/history | Misinterpretation of sudden shifts | Provide micro-history, snapshot freeze, or hover reveal |
Common Cognitive Challenges in Real-Time Dashboards and UX Strategies to Overcome Them.
Layout, color, and animation do more than improve appearance. They help users interpret live data quickly and make decisions under time pressure. Since users respond to rapidly changing information, these elements must reduce cognitive load and highlight key insights immediately.

Layout, color, and animation create an experience that enables fast, accurate interpretation of live data. Real-time dashboards support continuous monitoring and decision-making by reducing mental effort and highlighting anomalies or trends. Personalization allows users to tailor dashboards to their roles, improving relevance and efficiency. For example, operations managers may focus on system health metrics while sales directors prioritize revenue KPIs. This adaptability makes dashboards dynamic, strategic tools.
| Element | Placement & Visual Weight | Purpose & Suggested Colors | Animation Use Case & Effect |
|---|---|---|---|
| Primary KPIs | Center or top-left; bold, large font | Highlight critical metrics; typically stable states | Value updates: smooth increase (200–400 ms) |
| Controls | Top or left panel; light, minimal visual weight | Provide navigation/filtering; neutral color schemes | User actions: subtle feedback (100–150 ms) |
| Charts | Middle or right; medium emphasis | Show trends and comparisons; use blue/green for positives, grey for neutral | Chart trends: trail or fade (300–600 ms) |
| Alerts | Edge of dashboard or floating; high contrast (bold) | Signal critical issues; red/orange for alerts, yellow/amber for warnings | Quick animations for appearance; highlight changes |
Design Elements, Placement, Color, and Motion Strategies for Effective Real-Time Dashboards.
If users cannot interpret changes quickly, the dashboard fails regardless of its visual design. Over time, I have developed methods that reduce confusion and make change feel intuitive rather than overwhelming.
One of the most effective tools I use is the sparkline, a compact line chart that shows a trend over time and is typically placed next to a key performance indicator. Unlike full charts, sparklines omit axes and labels. Their simplicity makes them powerful, since they instantly show whether a metric is trending up, down, or steady. For example, placing a sparkline next to monthly revenue immediately reveals if performance is improving or declining, even before the viewer interprets the number.
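To illustrate how little code a sparkline needs, here is a minimal sketch that renders one as inline SVG; the dimensions and styling are arbitrary choices:

```js
// Sketch: a minimal inline SVG sparkline. No axes or labels, just the
// trend shape, sized to sit next to a KPI value.
function sparkline(values, width = 120, height = 24) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const range = max - min || 1; // avoid divide-by-zero on flat data
  const step = width / (values.length - 1);
  const points = values
    .map((v, i) => {
      const x = (i * step).toFixed(1);
      const y = (height - ((v - min) / range) * height).toFixed(1);
      return `${x},${y}`;
    })
    .join(" ");
  return `<svg width="${width}" height="${height}" role="img" aria-label="trend">
    <polyline points="${points}" fill="none" stroke="currentColor" stroke-width="1.5"/>
  </svg>`;
}

// e.g. seven periods of revenue next to the KPI number:
document.body.insertAdjacentHTML("beforeend", sparkline([12, 14, 13, 16, 18, 17, 21]));
```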
When using sparklines effectively, follow these principles:

I combine sparklines with directional indicators like arrows and percentage deltas to support quick interpretation.
For example, pairing “▲ +3.2%” with a rising sparkline shows both the direction and scale of change. I do not rely only on color to convey meaning.
Since 1 in 12 men is color-blind, using red and green alone can exclude some users. To ensure accessibility, I add shapes and icons alongside color cues.
Micro-animations provide subtle but effective signals. This counters change blindness — our tendency to miss non-salient changes.
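As a sketch of the idea, the Web Animations API makes this kind of gentle highlight a few lines of code; the selector and colour below are hypothetical:

```js
// Sketch: briefly highlighting a KPI when its value changes, so the
// update is noticeable without being alarming.
function updateMetric(el, newValue) {
  if (el.textContent === String(newValue)) return; // nothing changed
  el.textContent = newValue;
  el.animate(
    [{ backgroundColor: "#fff3bf" }, { backgroundColor: "transparent" }],
    { duration: 400, easing: "ease-out" }
  );
}

// updateMetric(document.querySelector("#on-time-rate"), "94.2%");
```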
Layout is critical for clarifying change:
For instance, in a logistics dashboard, a card labeled “On-Time Deliveries” may display a weekly sparkline. If performance dips, the line flattens or turns slightly red, a downward arrow appears with a −1.8% delta, and the updated number fades in. This gives instant clarity without requiring users to open a detailed chart.
All these design choices support fast, informed decision-making. In high-velocity environments like product analytics, logistics, or financial operations, dashboards must do more than present data. They must reduce ambiguity and help teams quickly detect change, understand its impact, and take action.
In real-time data environments, reliability is not just a technical feature. It is the foundation of user trust. Dashboards are used in high-stakes, fast-moving contexts where decisions depend on timely, accurate data. Yet these systems often face less-than-ideal conditions such as unreliable networks, API delays, and incomplete datasets. Designing for these realities is not just damage control. It is essential for making data experiences usable and trustworthy.
When data lags or fails to load, it can mislead users in serious ways:
To mitigate this:
One effective strategy is replacing traditional spinners with skeleton UIs. These are greyed-out, animated placeholders that suggest the structure of incoming data. They set expectations, reduce anxiety, and show that the system is actively working. For example, in a financial dashboard, users might see the outline of a candlestick chart filling in as new prices arrive. This signals that data is being refreshed, not stalled.
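Here is a minimal sketch of the pattern, with a hypothetical endpoint and a stand-in renderer; the shimmer styling itself would live in CSS:

```js
// Sketch: swapping a skeleton placeholder for real data once it arrives.
// The endpoint, class names, and renderer below are hypothetical.
function renderChart(data) {
  const pre = document.createElement("pre");
  pre.textContent = JSON.stringify(data, null, 2); // stand-in for a real chart
  return pre;
}

async function loadChart(container) {
  // Greyed-out outline that suggests the shape of the incoming chart.
  container.innerHTML = '<div class="skeleton skeleton--chart"></div>';
  try {
    const response = await fetch("/api/prices"); // hypothetical endpoint
    const data = await response.json();
    container.replaceChildren(renderChart(data));
  } catch {
    container.textContent = "Couldn't load prices. Retrying…";
  }
}
```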
When data is unavailable, I show cached snapshots from the most recent successful load, labeled with timestamps such as “Data as of 10:42 AM.” This keeps users aware of what they are viewing.
In operational dashboards such as logistics or monitoring systems, this approach lets users act confidently even when real-time updates are temporarily out of sync.
To handle connectivity failures, I use auto-retry mechanisms with exponential backoff, giving the system several chances to recover quietly before notifying the user.
If retries fail, I maintain transparency with clear banners such as “Offline… Reconnecting…” In one product, this approach prevented users from reloading entire dashboards unnecessarily, especially in areas with unreliable Wi-Fi.
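A minimal sketch of this retry pattern might look like the following; the retry counts and delays are illustrative rather than the values used in that product:

```js
// Sketch: fetch with exponential backoff. It retries quietly a few times,
// then surfaces the failure so the UI can show an "Offline… Reconnecting…" banner.
async function fetchWithBackoff(url, retries = 4, baseDelayMs = 500) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url);
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return response;
    } catch (error) {
      if (attempt === retries) throw error; // out of retries: tell the user
      const delay = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, 4000 ms
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```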
Reliability strongly connects with accessibility:
A compact but powerful pattern I often implement is the Data Freshness Indicator, a small widget that:
This improves transparency and reinforces user control. Since different users interpret these cues differently, advanced systems allow personalization. For example:
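Here is a minimal sketch of such a widget, where the staleness threshold itself could be exposed as a user preference; the names and thresholds are hypothetical:

```js
// Sketch: a small "data freshness" widget that shows when data last
// refreshed and flags it as stale past a configurable threshold.
function createFreshnessIndicator(el, staleAfterMs = 60_000) {
  let lastUpdated = null;

  function render() {
    if (!lastUpdated) {
      el.textContent = "Waiting for data…";
      return;
    }
    const time = new Date(lastUpdated).toLocaleTimeString([], {
      hour: "2-digit",
      minute: "2-digit",
    });
    const stale = Date.now() - lastUpdated > staleAfterMs;
    el.textContent = stale ? `Stale: data as of ${time}` : `Data as of ${time}`;
  }

  setInterval(render, 1000); // re-evaluate staleness every second
  render();
  return { markRefreshed() { lastUpdated = Date.now(); render(); } };
}

// const freshness = createFreshnessIndicator(document.querySelector("#freshness"));
// freshness.markRefreshed(); // call after each successful data load
```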
Reliability in data visualization is not about promising perfection. It is about creating a resilient, informative experience that supports human judgment by revealing the true state of the system.
When users understand what the dashboard knows, what it does not, and what actions it is taking, they are more likely to trust the data and make smarter decisions.
In my work across logistics, hospitality, and healthcare, the challenge has always been to distill complexity into clarity. A well-designed dashboard is more than functional; it serves as a trusted companion in decision-making, embedding clarity, speed, and confidence from the start.
A client in the car rental industry struggled with fragmented operational data. Critical details like vehicle locations, fuel usage, maintenance schedules, and downtime alerts were scattered across static reports, spreadsheets, and disconnected systems. Fleet operators had to manually cross-reference data sources, even for basic dispatch tasks, which caused missed warnings, inefficient routing, and delays in response.
We solved these issues by redesigning the dashboard strategically, focusing on both layout improvements and how users interpret and act on information.
Strategic Design Improvements and Outcomes:

Strategic Impact: The dashboard redesign was not only about improving visuals. It changed how teams interacted with data. Operators no longer needed to search for insights, as the system presented them in line with tasks and decision-making. The dashboard became a shared reference for teams with different goals, enabling real-time problem solving, fewer manual checks, and stronger alignment across roles. Every element was designed to build both understanding and confidence in action.
One of our clients, a hospitality group with 11 hotels in the UAE, faced a growing strategic gap. They had data from multiple departments, including bookings, events, food and beverage, and profit and loss, but it was spread across disconnected dashboards.
Strategic Design Improvements and Outcomes:

Strategic Impact: By aligning the dashboard structure with real pricing and revenue strategies, the client shifted from static reporting to forward-looking decision-making. This was not a cosmetic interface update. It was a complete rethinking of how data could support business goals. The result enabled every team, from finance to operations, to interpret data based on their specific roles and responsibilities.
In healthcare, timely and accurate access to patient information is essential. A multi-specialist hospital client struggled with fragmented data. Doctors had to consult separate platforms such as electronic health records, lab results, and pharmacy systems to understand a patient’s condition. This fragmented process slowed decision-making and increased risks to patient safety.
Strategic Design Improvements and Outcomes:

Strategic Impact: Our design encourages active decision-making instead of passive data review. Interactive tooltips ensure visual transparency by explaining the rationale behind alerts and flagged data points. These information boxes give clinicians immediate context, such as why a lab value is marked critical, helping them understand implications and next steps without delay.
Real-time dashboards are not about overwhelming users with data. They are about helping them act quickly and confidently. The most effective dashboards reduce noise, highlight the most important metrics, and support decision-making in complex environments. Success lies in balancing visual clarity with cognitive ease while accounting for human limits like memory, stress, and attention alongside technical needs.
Do:
Don’t:
Over time, I’ve come to see real-time dashboards as decision assistants rather than control panels. When users say, “This helps me stay in control,” it reflects a design built on empathy that respects cognitive limits and enhances decision-making. That is the true measure of success.