{"id":606,"date":"2025-09-26T10:00:00","date_gmt":"2025-09-26T10:00:00","guid":{"rendered":"http:\/\/guupon.com\/?p=606"},"modified":"2025-10-01T15:36:52","modified_gmt":"2025-10-01T15:36:52","slug":"from-prompt-to-partner-designing-your-custom-ai-assistant","status":"publish","type":"post","link":"http:\/\/guupon.com\/index.php\/2025\/09\/26\/from-prompt-to-partner-designing-your-custom-ai-assistant\/","title":{"rendered":"From Prompt To Partner: Designing Your Custom AI Assistant"},"content":{"rendered":"<p>              <title>From Prompt To Partner: Designing Your Custom AI Assistant<\/title><\/p>\n<article>\n<header>\n<h1>From Prompt To Partner: Designing Your Custom AI Assistant<\/h1>\n<address>Lyndon Cerejo<\/address>\n<p>                  2025-09-26T10:00:00+00:00<br \/>\n                  2025-10-01T15:02:43+00:00<br \/>\n                <\/header>\n<p>In \u201c<a href=\"https:\/\/www.smashingmagazine.com\/2025\/08\/week-in-life-ai-augmented-designer\/\">A Week In The Life Of An AI-Augmented Designer<\/a>\u201d, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In \u201c<a href=\"https:\/\/www.smashingmagazine.com\/2025\/08\/prompting-design-act-brief-guide-iterate-ai\/\">Prompting Is A Design Act<\/a>\u201d, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we\u2019ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.<\/p>\n<p>AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function &mdash; allowing you to customize the default AI model for your unique needs. 
If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and who can support not just you, but your entire team.<\/p>\n<h2 id=\"why-build-your-own-assistant\">Why Build Your Own Assistant?<\/h2>\n<p>If you\u2019ve ever copied and pasted the same mega-prompt for the n<sup>th<\/sup> time, you\u2019ve experienced the pain. An AI assistant turns a one-off \u201cgreat prompt\u201d into a dependable teammate. And if you\u2019ve used any of the publicly available AI Assistants, you\u2019ve realized quickly that they\u2019re usually generic and not tailored for your use.<\/p>\n<p>Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in <strong>your voice<\/strong>, with <strong>your context and constraints<\/strong> baked in. Instead of reinventing the wheel by writing new prompts each time, or repeatedly copy-pasting your structured prompts every time, or spending cycles trying to make a public AI Assistant work the way you need it to, your own AI Assistant allows you and others to easily get better, repeatable, consistent results faster.<\/p>\n<h3 id=\"benefits-of-reusing-prompts-even-your-own\">Benefits Of Reusing Prompts, Even Your Own<\/h3>\n<p>Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:<\/p>\n<ul>\n<li><strong>Focused on a real repeating problem<\/strong><br \/>\nA good AI Assistant isn\u2019t a general-purpose \u201cdo everything\u201d bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who\u2019s doing it (e.g., analyzing customer feedback).<\/li>\n<li><strong>Customized for your context<\/strong><br \/>\nMost large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. 
An AI Assistant changes that by allowing you to customize it to automatically work the way you want it to, instead of like a generic AI.<\/li>\n<li><strong>Consistency at scale<\/strong><br \/>\nYou can use the <a href=\"https:\/\/www.smashingmagazine.com\/2025\/08\/prompting-design-act-brief-guide-iterate-ai\/#anatomy-structure-it-like-a-designer\">WIRE+FRAME prompt framework<\/a> to create structured, reusable prompts. An AI Assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it into the assistant itself, allowing you and others to achieve the same consistent results every time.<\/li>\n<li><strong>Codifying expertise<\/strong><br \/>\nEvery time you turn a great prompt into an AI Assistant, you\u2019re essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).<\/li>\n<li><strong>Faster ramp-up for teammates<\/strong><br \/>\nInstead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.<\/li>\n<\/ul>\n<h3 id=\"reasons-for-your-own-ai-assistant-instead-of-public-ai-assistants\">Reasons For Your Own AI Assistant Instead Of Public AI Assistants<\/h3>\n<p>Public AI assistants are like stock templates. While they serve a more specific purpose than the generic AI platform and are useful starting points, if you want something tailored to your needs and team, you should build your own.<\/p>\n<p>A few reasons for building your own AI Assistant instead of using a public assistant someone else created include:<\/p>\n<ul>\n<li><strong>Fit<\/strong>: Public assistants are built for the masses. Your work has quirks, tone, and processes they\u2019ll never quite match.<\/li>\n<li><strong>Trust &amp; Security<\/strong>: You don\u2019t control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won\u2019t) do.<\/li>\n<li><strong>Evolution<\/strong>: An AI Assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog &mdash; things a public bot won\u2019t do for you.<\/li>\n<\/ul>\n<p>Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. 
And while they are tailored to your and your team\u2019s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:<\/p>\n<p><em>Don\u2019t share anything you wouldn\u2019t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.<\/em><\/p>\n<p><strong><em>Note<\/em><\/strong>: <em>We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model\u2019s training, mood, and flair for creative hallucinations.<\/em><\/p>\n<h3 id=\"when-not-to-build-an-ai-assistant-yet\">When Not to Build An AI Assistant (Yet)<\/h3>\n<p>An AI Assistant is great when the <em>same<\/em> audience has the <em>same<\/em> problem <em>often<\/em>. When the fit isn\u2019t there, the risk is high; you should skip building an AI Assistant for now, as explained below:<\/p>\n<ul>\n<li><strong>One-off or rare tasks<\/strong><br \/>\nIf it won\u2019t be reused at least monthly, I\u2019d recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.<\/li>\n<li><strong>Sensitive or regulated data<\/strong><br \/>\nIf you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI Assistant. 
Even if the AI platform promises not to use your data, I\u2019d strongly suggest using redaction or an approved enterprise tool with the necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).<\/li>\n<li><strong>Heavy orchestration or logic<\/strong><br \/>\nMulti-step workflows, API calls, database writes, and approvals go beyond the scope of an AI Assistant into agentic territory (as of now). I\u2019d recommend not trying to build an AI Assistant for these cases.<\/li>\n<li><strong>Real-time information<\/strong><br \/>\nAI Assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.<\/li>\n<li><strong>High-stakes outputs<\/strong><br \/>\nFor cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.<\/li>\n<li><strong>No measurable win<\/strong><br \/>\nIf you can\u2019t name a success metric (such as time saved, first-draft quality, or fewer re-dos), I\u2019d recommend keeping it as a saved WIRE+FRAME prompt.<\/li>\n<\/ul>\n<p>Just because these are signs that you should not build your AI Assistant now doesn\u2019t mean you shouldn\u2019t ever. Revisit this decision when you notice that you\u2019re using the same prompt weekly, multiple teammates are asking for it, or the manual time spent copy-pasting and refining starts exceeding ~15 minutes. Those are signs that an AI Assistant will pay back quickly.<\/p>\n<p>In a nutshell, build an AI Assistant when you can name the problem, the audience, the frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. 
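<\/p>
<p>To make the \u201cmeasurable win\u201d test concrete, here is a rough back-of-the-envelope sketch. The numbers and the <code>weeks_to_payback<\/code> helper are illustrative assumptions, not a formula from any platform:<\/p>

```python
# Illustrative payback estimate for building an AI assistant.
# Assumption: building a CustomGPT takes about an hour, and each use of a
# saved prompt costs manual copy-paste-and-refine time the assistant removes.

def weeks_to_payback(build_minutes, minutes_saved_per_use, uses_per_week, users=1):
    """Weeks until the time saved by the assistant exceeds the time to build it."""
    weekly_savings = minutes_saved_per_use * uses_per_week * users
    return build_minutes / weekly_savings

# Made-up numbers: ~60 min to build, ~15 min saved per use, 3 uses/week, 4 teammates
print(round(weeks_to_payback(60, 15, 3, users=4), 2))  # 0.33
```

<p>With those made-up numbers, the build pays for itself in well under a week; if your own numbers put payback months out, a saved WIRE+FRAME prompt is probably enough.<\/p>
<p>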
No advanced knowledge, coding skills, or hacks needed.<\/p>\n<h2 id=\"as-always-start-with-the-user\">As Always, Start with the User<\/h2>\n<p>This should go without saying to UX professionals, but it\u2019s worth a reminder: if you\u2019re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.<\/p>\n<ul>\n<li>Who will use this assistant?<\/li>\n<li>What\u2019s the specific pain or task they struggle with today?<\/li>\n<li>What language, tone, and examples will feel natural to them?<\/li>\n<\/ul>\n<p>Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.<\/p>\n<h2 id=\"from-prompt-to-assistant\">From Prompt To Assistant<\/h2>\n<p>You\u2019ve already done the heavy lifting with WIRE+FRAME. Now you\u2019re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.<\/p>\n<ul>\n<li><strong>M: Map your prompt<\/strong><br \/>\nPort your successful WIRE+FRAME prompt into the AI assistant.<\/li>\n<li><strong>A: Add knowledge and training<\/strong><br \/>\nGround the assistant in <em>your<\/em> world. Upload knowledge files, examples, or guides that make it uniquely yours.<\/li>\n<li><strong>T: Tailor for audience<\/strong><br \/>\nMake it feel natural to the people who will use it. 
Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.<\/li>\n<li><strong>C: Check, test, and refine<\/strong><br \/>\nTest the preview with different inputs and refine until you get the results you expect.<\/li>\n<li><strong>H: Hand off and maintain<\/strong><br \/>\nSet sharing options and permissions, share the link, and maintain it.<\/li>\n<\/ul>\n<p>A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:<\/p>\n<ul>\n<li><strong>Prototype Prodigy<\/strong>: Transform rough ideas into prototypes and export them into Figma to refine.<\/li>\n<li><strong>Critique Coach<\/strong>: Review wireframes or mockups and point out accessibility and usability gaps.<\/li>\n<\/ul>\n<p>But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: <em>\u201cAn assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.\u201d<\/em><\/p>\n<p>And that\u2019s the one we will build in this article &mdash; say hello to <strong>Insight Interpreter.<\/strong><\/p>\n<h2 id=\"walkthrough-insight-interpreter\">Walkthrough: Insight Interpreter<\/h2>\n<p>Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback they may not have asked for through social media or public reviews (unsolicited). This is a goldmine of information, but making sense of it all is messy, overwhelming, and nobody\u2019s idea of fun. Here\u2019s where an AI assistant like the Insight Interpreter can help. 
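<\/p>
<p>Before any assistant can help, that messy feedback usually needs a consistent shape. Here is a minimal sketch of that prep step &mdash; the column names and sources are my assumptions, so match them to whatever feedback file template you give your own assistant:<\/p>

```python
# Sketch: merge feedback from different sources into one common template
# before uploading it. Field names here are illustrative assumptions.
import csv
import io

COMMON_FIELDS = ["source", "date", "rating", "text"]

def normalize(rows, source, field_map):
    """Map one source's columns onto the common template."""
    return [
        {
            "source": source,
            "date": row.get(field_map["date"], ""),
            "rating": row.get(field_map["rating"], ""),
            "text": row.get(field_map["text"], "").strip(),
        }
        for row in rows
    ]

survey = [{"submitted": "2025-06-01", "score": "2", "comment": "Checkout keeps failing"}]
reviews = [{"review_date": "2025-06-03", "stars": "1", "body": "Can't log in after update"}]

combined = (
    normalize(survey, "survey", {"date": "submitted", "rating": "score", "text": "comment"})
    + normalize(reviews, "app_store", {"date": "review_date", "rating": "stars", "text": "body"})
)

# Write the merged rows out as one clean CSV ready to upload.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COMMON_FIELDS)
writer.writeheader()
writer.writerows(combined)
print(buf.getvalue())
```

<p>One clean, versioned file like this also makes a better knowledge upload than a pile of raw exports.<\/p>
<p>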
We\u2019ll turn the example prompt created using the WIRE+FRAME framework in <a href=\"https:\/\/www.smashingmagazine.com\/2025\/08\/prompting-design-act-brief-guide-iterate-ai\/\">Prompting Is A Design Act<\/a> into a CustomGPT.<\/p>\n<p>When you start building a CustomGPT by visiting <a href=\"https:\/\/chat.openai.com\/gpts\/editor?utm_source=chatgpt.com\">https:\/\/chat.openai.com\/gpts\/editor<\/a>, you\u2019ll see two paths:<\/p>\n<ul>\n<li><strong>Conversational interface<\/strong><br \/>\nVibe-chat your way &mdash; it\u2019s easy and quick, but similar to unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.<\/li>\n<li><strong>Configure interface<\/strong><br \/>\nThe structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you\u2019ll want for assistants you plan to share or depend on regularly.<\/li>\n<\/ul>\n<p>The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we\u2019ll walk through using it in configure mode as a more formal checklist in this article.<\/p>\n<figure class=\"\n  \n    break-out article__image\n  \n  \n  \"><\/p>\n<p>    <a href=\"https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/1-customgpt-configure-interface.png\"><\/p>\n<p>    <img decoding=\"async\" loading=\"lazy\" width=\"800\" height=\"451\" src=\"https:\/\/res.cloudinary.com\/indysigner\/image\/fetch\/f_auto,q_80\/w_400\/https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/1-customgpt-configure-interface.png\" alt=\"CustomGPT Configure Interface\" \/><\/p>\n<p>    <\/a><figcaption class=\"op-vertical-bottom\">\n      CustomGPT Configure Interface. 
(<a href=\"https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/1-customgpt-configure-interface.png\">Large preview<\/a>)<br \/>\n    <\/figcaption><\/figure>\n<h3 id=\"m-map-your-prompt\">M: Map Your Prompt<\/h3>\n<p>Paste your full WIRE+FRAME prompt into the <em>Instructions<\/em> section exactly as written. As a refresher, I\u2019ve included the mapping and snippets of the detailed prompt from before:<\/p>\n<ul>\n<li><strong>W<\/strong>ho &amp; What: The AI persona and the core deliverable (<em>\u201c\u2026senior UX researcher and customer insights analyst\u2026 specialize in synthesizing qualitative data from diverse sources\u2026\u201d<\/em>).<\/li>\n<li><strong>I<\/strong>nput Context: Background or data scope to frame the task (<em>\u201c\u2026analyzing customer feedback uploaded from sources such as\u2026\u201d<\/em>).<\/li>\n<li><strong>R<\/strong>ules &amp; Constraints: Boundaries (<em>\u201c\u2026do not fabricate pain points, representative quotes, journey stages, or patterns\u2026\u201d<\/em>).<\/li>\n<li><strong>E<\/strong>xpected Output: Format and fields of the deliverable (<em>\u201c\u2026a structured list of themes. 
For each theme, include\u2026\u201d<\/em>).<\/li>\n<li><strong>F<\/strong>low: Explicit, ordered sub-tasks (<em>\u201cRecommended flow of tasks: Step 1\u2026\u201d<\/em>).<\/li>\n<li><strong>R<\/strong>eference Voice: Tone, mood, or reference (<em>\u201c\u2026concise, pattern-driven, and objective\u2026\u201d<\/em>).<\/li>\n<li><strong>A<\/strong>sk for Clarification: Ask questions if unclear (<em>\u201c\u2026if data is missing or unclear, ask before continuing\u2026\u201d<\/em>).<\/li>\n<li><strong>M<\/strong>emory: Memory to recall earlier definitions (<em>\u201cUnless explicitly instructed otherwise, keep using this process\u2026\u201d<\/em>).<\/li>\n<li><strong>E<\/strong>valuate &amp; Iterate: Have the AI self-critique outputs (<em>\u201c\u2026critically evaluate\u2026suggest improvements\u2026\u201d<\/em>).<\/li>\n<\/ul>\n<p>If you\u2019re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective <em>Instructions<\/em> sections.<\/p>\n<h3 id=\"a-add-knowledge-and-training\">A: Add Knowledge And Training<\/h3>\n<p>In the knowledge section, upload up to 20 files, clearly labeled, that will help the CustomGPT respond effectively. Keep files small and versioned: <em>reviews_Q2_2025.csv<\/em> beats <em>latestfile_final2.csv<\/em>. For this prompt, which analyzes customer feedback, generates themes organized by customer journey, and rates them by severity and effort, files could include:<\/p>\n<ul>\n<li>Taxonomy of themes;<\/li>\n<li>Instructions on parsing uploaded data;<\/li>\n<li>Examples of real UX research reports using this structure;<\/li>\n<li>Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. 
a 5 in severity;<\/li>\n<li>Customer journey map stages;<\/li>\n<li>Customer feedback file templates (not actual data).<\/li>\n<\/ul>\n<p>An example of a file to help it parse uploaded data is shown below:<\/p>\n<figure class=\"break-out article__image\">\n<a href=\"https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/2-gpt-file-parsing-instructions.png\">\n<img decoding=\"async\" loading=\"lazy\" width=\"800\" height=\"447\" src=\"https:\/\/res.cloudinary.com\/indysigner\/image\/fetch\/f_auto,q_80\/w_400\/https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/2-gpt-file-parsing-instructions.png\" alt=\"GPT file parsing instructions\" \/>\n<\/a><figcaption class=\"op-vertical-bottom\">\n(<a href=\"https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/2-gpt-file-parsing-instructions.png\">Large preview<\/a>)<br \/>\n<\/figcaption><\/figure>\n<h3 id=\"t-tailor-for-audience\">T: Tailor For Audience<\/h3>\n<ul>\n<li><strong>Audience tailoring<\/strong><br \/>\nIf you are building this for others, your prompt should have addressed tone in the \u201cReference Voice\u201d section. If you didn\u2019t, do it now, so the CustomGPT can be tailored to the tone and expertise level of the users who will use it. In addition, use the <em>Conversation starters<\/em> section to add a few examples or common prompts to help users start using the CustomGPT, again, worded for your users. For instance, we could use \u201cAnalyze feedback from the attached file\u201d for our Insights Interpreter to make it more self-explanatory for anyone, instead of \u201cAnalyze data,\u201d which may be good enough if you were using it alone. 
For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use \u201cWhat are the types of curiosity?\u201d and \u201cGive me a micro-practice to spark curiosity\u201d.<\/li>\n<li><strong>Functional tailoring<\/strong><br \/>\nFill in the CustomGPT name, icon, description, and capabilities.\n<ul>\n<li><em>Name<\/em>: Pick one that will make it clear what the CustomGPT does. Let\u2019s use \u201cInsights Interpreter &mdash; Customer Feedback Analyzer\u201d. If needed, you can also add a version number. This name will show up in the sidebar when people use it or pin it, so make the first part memorable and easily identifiable.<\/li>\n<li><em>Icon<\/em>: Upload an image or generate one. Keep it simple so it can be easily recognized at smaller sizes when people pin it in their sidebar.<\/li>\n<li><em>Description<\/em>: A brief, yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide if they should pick yours over something similar.<\/li>\n<li><em>Recommended Model<\/em>: If your CustomGPT needs the capabilities of a particular model (e.g., needs GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave it up to the user or select the most common model.<\/li>\n<li><em>Capabilities<\/em>: Turn off anything you won\u2019t need. We\u2019ll turn off \u201cWeb Search\u201d to allow the CustomGPT to focus only on uploaded data, without expanding the search online, and we will turn on \u201cCode Interpreter &amp; Data Analysis\u201d to allow it to understand and process uploaded files. 
\u201cCanvas\u201d allows users to work on a shared canvas with the GPT to edit writing tasks, and \u201cImage generation\u201d is needed only if the CustomGPT creates images.<\/li>\n<li><em>Actions<\/em>: Makes <a href=\"https:\/\/platform.openai.com\/docs\/actions\/introduction\">third-party APIs<\/a> available to the CustomGPT; advanced functionality we don\u2019t need here.<\/li>\n<li><em>Additional Settings<\/em>: Sneakily hidden and opted in by default; here, I opt out of letting my data train OpenAI\u2019s models.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<h3 id=\"c-check-test-refine\">C: Check, Test &amp; Refine<\/h3>\n<p>Do one last visual check to make sure you\u2019ve filled in all applicable fields and the basics are in place: Is the concept sharp and clear (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.<\/p>\n<p>Use the <em>Preview<\/em> panel to verify that your CustomGPT performs as well as, or better than, your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn\u2019t now, check whether new instructions or knowledge files are overriding it.<\/p>\n<p>When things don\u2019t look right, here are quick debugging fixes:<\/p>\n<ul>\n<li><strong>Generic answers?<\/strong><br \/>\nTighten <em>Input Context<\/em> or update the knowledge files.<\/li>\n<li><strong>Hallucinations?<\/strong><br \/>\nRevisit your <em>Rules<\/em> section. 
Turn off web browsing if you don\u2019t need external data.<\/li>\n<li><strong>Wrong tone?<\/strong><br \/>\nStrengthen <em>Reference Voice<\/em> or swap in clearer examples.<\/li>\n<li><strong>Inconsistent?<\/strong><br \/>\nTest across models in preview and set the most reliable one as \u201cRecommended.\u201d<\/li>\n<\/ul>\n<h3 id=\"h-hand-off-and-maintain\">H: Hand Off And Maintain<\/h3>\n<p>When your CustomGPT is ready, you can publish it via the \u201cCreate\u201d option. Select the appropriate access option:<\/p>\n<ul>\n<li><strong>Only me<\/strong>: Private use. Perfect if you\u2019re still experimenting or keeping it personal.<\/li>\n<li><strong>Anyone with the link<\/strong>: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.<\/li>\n<li><strong>GPT Store<\/strong>: Fully public. Your assistant is listed and findable by anyone browsing the store. <em>(This is the option we\u2019ll use.)<\/em><\/li>\n<li><strong>Business workspace<\/strong> (if you\u2019re on GPT Business): Share with others within your business account only &mdash; the easiest way to keep it in-house and controlled.<\/li>\n<\/ul>\n<p>But the handoff doesn\u2019t end with hitting publish; you should maintain the assistant to keep it relevant and useful:<\/p>\n<ul>\n<li><strong>Collect feedback<\/strong>: Ask teammates what worked, what didn\u2019t, and what they had to fix manually.<\/li>\n<li><strong>Iterate<\/strong>: Apply changes directly or duplicate the GPT if you want multiple versions in play. 
You can find all your CustomGPTs at: <a href=\"https:\/\/chatgpt.com\/gpts\/mine\">https:\/\/chatgpt.com\/gpts\/mine<\/a><\/li>\n<li><strong>Track changes<\/strong>: Keep a simple changelog (date, version, updates) for traceability.<\/li>\n<li><strong>Refresh knowledge<\/strong>: Update knowledge files and examples on a regular cadence so answers don\u2019t go stale.<\/li>\n<\/ul>\n<p>And that\u2019s it! <a href=\"https:\/\/go.cerejo.com\/insights-interpreter\">Our Insights Interpreter is now live!<\/a><\/p>\n<p>Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs:<\/p>\n<figure class=\"\n  \n    break-out article__image\n  \n  \n  \"><\/p>\n<p>    <a href=\"https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/3-results-structured-wire-frame-prompt.png\"><\/p>\n<p>    <img decoding=\"async\" loading=\"lazy\" width=\"800\" height=\"325\" src=\"https:\/\/res.cloudinary.com\/indysigner\/image\/fetch\/f_auto,q_80\/w_400\/https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/3-results-structured-wire-frame-prompt.png\" alt=\"Results of the structured WIRE&#043;FRAME prompt from the previous article\" \/><\/p>\n<p>    <\/a><figcaption class=\"op-vertical-bottom\">\n      Results of the structured WIRE+FRAME prompt from the previous article. 
(<a href=\"https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/3-results-structured-wire-frame-prompt.png\">Large preview<\/a>)<br \/>\n    <\/figcaption><\/figure>\n<figure class=\"\n  \n    break-out article__image\n  \n  \n  \"><\/p>\n<p>    <a href=\"https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/4-results-insights-interpreter-customgpt.png\"><\/p>\n<p>    <img decoding=\"async\" loading=\"lazy\" width=\"800\" height=\"276\" src=\"https:\/\/res.cloudinary.com\/indysigner\/image\/fetch\/f_auto,q_80\/w_400\/https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/4-results-insights-interpreter-customgpt.png\" alt=\"Results of the Insights Interpreter CustomGPT based on the same prompt\" \/><\/p>\n<p>    <\/a><figcaption class=\"op-vertical-bottom\">\n      Results of the Insights Interpreter CustomGPT based on the same prompt. (<a href=\"https:\/\/files.smashing.media\/articles\/from-prompt-to-partner-designing-custom-ai-assistant\/4-results-insights-interpreter-customgpt.png\">Large preview<\/a>)<br \/>\n    <\/figcaption><\/figure>\n<p>The results are similar, with slight differences, and that\u2019s expected. If you compare the results carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match with some differences in wording of the theme, issue summary, and problem statement. The opportunities and quotes have more visible differences. Most of it is because of the CustomGPT knowledge and training files, including instructions, examples, and guardrails, now live as always-on guidance.<\/p>\n<p>Keep in mind that in reality, Generative AI is by nature generative, so outputs will vary. Even with the same data, you won\u2019t get identical wording every time. In addition, underlying models and their capabilities rapidly change. 
If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare for structure, priorities, and evidence rather than exact wording.<\/p>\n<p>While I\u2019d love for you to use Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own. That\u2019s how you get the AI Assistant you really need &mdash; with your tone, context, and output formats built in!<\/p>\n<h2 id=\"inspiration-for-other-ai-assistants\">Inspiration For Other AI Assistants<\/h2>\n<p>We just built the Insight Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI Assistant:<\/p>\n<ul>\n<li><strong>Workshop Wizard<\/strong>: Generates workshop agendas, icebreaker questions, and follow-up survey drafts.<\/li>\n<li><strong>Research Roundup Buddy<\/strong>: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.<\/li>\n<li><strong>Persona Refresher<\/strong>: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. 
design-team casual).<\/li>\n<li><strong>Content Checker<\/strong>: Proofs copy for tone, accessibility, and reading level before it ever hits your site.<\/li>\n<li><strong>Trend Tamer<\/strong>: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.<\/li>\n<li><strong>Microcopy Provocateur<\/strong>: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react; especially useful for error states or calls to action.<\/li>\n<li><strong>Ethical UX Debater<\/strong>: Challenges your design decisions and deceptive patterns by simulating the voice of an ethics board or concerned user.<\/li>\n<\/ul>\n<p>The best AI Assistants come from carefully inspecting your workflow and looking for regular, repetitive areas where AI can augment your work. Then follow the steps above to build a team of customized AI assistants.<\/p>\n<h2 id=\"ask-me-anything-about-assistants\">Ask Me Anything About Assistants<\/h2>\n<ul>\n<li><strong>What are some limitations of a CustomGPT?<\/strong><br \/>\nRight now, the best parallel for AI is a very smart intern with access to a lot of information. CustomGPTs still run on LLMs that are trained on a lot of information and programmed to predictively generate responses based on that data, including possible bias, misinformation, or incomplete information. Keeping that in mind, you can make that intern provide better and more relevant results by using your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.<\/li>\n<li><strong>Can I copy someone else\u2019s public CustomGPT and tweak it?<\/strong><br \/>\nNot directly, but if you get inspired by another CustomGPT, you can look at how it\u2019s framed and rebuild your own using WIRE+FRAME &amp; MATCH. That way, you make it your own and have full control of the instructions, files, and updates. 
But you can do that with Google\u2019s equivalent, Gemini Gems. Shared Gems behave similarly to shared Google Docs: once shared, any instructions and files you have uploaded can be viewed by anyone with access to the Gem, and anyone with edit access can also update or delete it.<\/li>\n<li><strong>How private are my uploaded files?<\/strong><br \/>\nThe files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is public, or you haven\u2019t disabled the hidden setting that allows CustomGPT conversations to improve the model, that data could be referenced. Don\u2019t upload sensitive, confidential, or personal data you wouldn\u2019t want circulating. Enterprise accounts do have some protections, so check with your company.<\/li>\n<li><strong>How many files can I upload, and does size matter?<\/strong><br \/>\nLimits vary by platform, but smaller, specific files usually perform better than giant docs. Think \u201cchapter\u201d instead of \u201centire book.\u201d At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.<\/li>\n<li><strong>What\u2019s the difference between a CustomGPT and a Project?<\/strong><br \/>\nA CustomGPT is a focused assistant, like an intern trained to do one role well (like \u201cInsight Interpreter\u201d). A Project is more like a workspace where you can group multiple prompts, files, and conversations together for a broader effort. CustomGPTs are specialists. Projects are containers. If you want something reusable, shareable, and role-specific, go with a CustomGPT. 
If you want to organize broader work with multiple tools, outputs, and shared knowledge, Projects are the better fit.<\/li>\n<\/ul>\n<h2 id=\"from-reading-to-building\">From Reading To Building<\/h2>\n<p>In this AI x Design series, we\u2019ve gone from messy prompting (\u201c<a href=\"https:\/\/www.smashingmagazine.com\/2025\/08\/week-in-life-ai-augmented-designer\/\">A Week In The Life Of An AI-Augmented Designer<\/a>\u201d) to a structured prompt framework, WIRE+FRAME (\u201c<a href=\"https:\/\/www.smashingmagazine.com\/2025\/08\/prompting-design-act-brief-guide-iterate-ai\/\">Prompting Is A Design Act<\/a>\u201d). And now, in this article, to your very own reusable AI sidekick.<\/p>\n<p>CustomGPTs don\u2019t replace designers but augment them. The real magic isn\u2019t in the tool itself, but in <em>how<\/em> you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They <strong>extend your craft<\/strong>, <strong>codify your expertise<\/strong>, and give your team leverage that generic AI models can\u2019t.<\/p>\n<p>Build one this week. Even better, today. 
Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.<\/p>\n<div class=\"signature\">\n  <img decoding=\"async\" src=\"https:\/\/www.smashingmagazine.com\/images\/logo\/logo--red.png\" alt=\"Smashing Editorial\" width=\"35\" height=\"46\" loading=\"lazy\" \/><br \/>\n  <span>(yk)<\/span>\n<\/div>\n<\/article>\n","protected":false},"excerpt":{"rendered":"<p class=\"text-justify mb-2\" >From Prompt To Partner: Designing Your Custom AI Assistant From Prompt To Partner: Designing Your Custom AI Assistant Lyndon Cerejo 2025-09-26T10:00:00+00:00 2025-10-01T15:02:43+00:00 In \u201cA Week In The Life Of An AI-Augmented Designer\u201d, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-606","post","type-post","status-publish","format-standard","hentry","category-accessibility"],"_links":{"self":[{"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/posts\/606","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/comments?post=606"}],"version-history":[{"count":1,"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/posts\/606\/revisions"}],"predecessor-version":[{"id":607,"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/posts\/606\/revisions\/607"}],"wp:attachment":[{"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/media?parent=606"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/guupon.com\/index.
php\/wp-json\/wp\/v2\/categories?post=606"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/guupon.com\/index.php\/wp-json\/wp\/v2\/tags?post=606"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}