Hey there! This is a 🔒 subscriber-only edition of ADPList’s newsletter 🔒 designed to make you a better designer and leader. Members get access to exceptional leaders' proven strategies, tactics, and wisdom. For more: Get free 1:1 advisory | Become an ADPList Ambassador | Become a sponsor | Submit a guest post
This post is brought to you by General Assembly: Skill up for a career in design or marketing for free with Adobe x GA’s new AI Creative Skills Academy. Learn more here about free courses available in the US, UK, and India.
Friends,
Designing for AI isn’t just about coding models or optimizing algorithms. It’s about creating systems that feel intuitive, ethical, and genuinely valuable to people. But how do you translate that vision into a repeatable process?
This week, I invited Ishaani Mittal (Amazon); she has helped shape experiences that contribute to large-scale impact — saving engineers 4,500+ hours and unlocking $260M in annual value within Amazon. More recently, her work on agentic interactions for the new Q Developer web app was widely recognized in industry media. Techzine Global stated it ‘sets a new standard for how enterprises transition off legacy systems.’
Drawing from her lessons learned working on real-world GenAI projects, Ishaani unpacks her 7-step framework for designing AI experiences that balance technical rigor with human needs. Whether you’re aligning skeptical stakeholders, prototyping ethical safeguards, or scaling a fledgling idea, her approach turns ambiguity into a clear roadmap.
If you’ve ever wondered how to make AI products that don’t just function—but inspire—THIS is your guide. 📕💥
Let’s dive in 🔽
A 7-step framework for designing GenAI experiences
The buzz around GenAI is undeniable, with new products making headlines every day, and those products need designers. Before designing GenAI products, however, it's crucial to distinguish GenAI from traditional AI. A key distinction, highlighted by Justin D. Weisz et al., is the concept of generative variability in user experience: GenAI outputs can vary for the same input. GenAI generates artifacts like text, images, code, etc., rather than decisions, labels, classifications, or decision boundaries. Keeping this variability in mind is essential when designing for GenAI.
In this article, I highlight seven key stages of designing for GenAI. I explain each stage and then provide some questions to ask yourself as you embark on your journey to design these experiences. The questions I list are in no way exhaustive but provide a good starting point.
The Core 7 Stages
These stages work because they encourage a holistic approach that encompasses both the visible and intangible aspects of the user experience. This is essential for GenAI experiences, which often involve complex interactions between humans and AI systems that go beyond visual interfaces.
These stages will guide you to consider the unseen elements of the customer experience, ensuring that the resulting designs are not only functional but also innovative and impactful. Okay, now that I have hopefully convinced you of the value of these stages, let’s dive into them.
Step 1. Understand the capabilities of GenAI technology used
Most use cases you will work on in industry will not expose a general-purpose model like GPT, Claude, or Llama as-is. The experiences you design will be specific to a use case in your industry, which means AI scientists within your organization will further fine-tune the GenAI technology powering those use cases. It therefore becomes pertinent to understand what the technology you are designing for can and cannot do, because the final experience you design will depend heavily on its capabilities.
The goal is not to become an AI scientist yourself (unless you want to) but to create the right mental model for this technology so that you can understand how it affects the user. Some questions to ask yourself when trying to understand the capabilities are as follows:
Q1: What inputs can this system accept, and what outputs can it generate?
Think at a high level about the modality of input the system can take and of output it can generate: text, image, audio, or video.
Q2: How much time does the model take to respond?
This is critical for designers to understand, as they must define the right “waiting” experience for when the LLM takes longer to respond. It also informs the choice of interaction model for this specific use case.
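To make that trade-off concrete, here is a minimal Python sketch of one way to handle the waiting experience: show a spinner only if the first token is slow to arrive, then stream the rest incrementally. The `simulated_stream` generator is a hypothetical stand-in for whatever streaming API your model actually exposes.

```python
import time

def simulated_stream(chunks, delay=0.05):
    """Stand-in for a streaming LLM API (hypothetical; swap in your model's client)."""
    for chunk in chunks:
        time.sleep(delay)
        yield chunk

def render_with_waiting_state(stream, spinner_threshold=0.3):
    """Show a spinner only while waiting for a slow first token, then stream the rest."""
    start = time.monotonic()
    first_token = next(stream)            # time-to-first-token drives the UX choice
    if time.monotonic() - start > spinner_threshold:
        print("(a spinner would have been shown here)")
    pieces = [first_token]
    for chunk in stream:                  # remaining tokens stream in incrementally
        pieces.append(chunk)
    return "".join(pieces)

print(render_with_waiting_state(simulated_stream(["Designing ", "for ", "GenAI"])))
```

The threshold value is an illustrative assumption; in practice you would tune it against real latency measurements for your model and use case.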
Q3: What specialized knowledge does the LLM system have?
This will help you understand what kinds of questions the LLM can answer and what actions it can take. If you are interested in reading more about the techniques used to equip LLM systems with specialized knowledge, the two most popular are RAG and fine-tuning.
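As a rough illustration of the RAG idea, here is a toy Python sketch: retrieve the documents most relevant to a query, then stuff them into the prompt before sending it to the model. The word-overlap scoring is a deliberately naive stand-in for the embedding-based retrieval real systems use, and the documents are made up for illustration.

```python
def retrieve(query, documents, k=2):
    """Score documents by word overlap with the query and return the top-k (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query, documents):
    """Assemble retrieved context plus the user's question into one prompt for the LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Use the context below to answer.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
    "Refund requests require the original receipt.",
]
print(build_rag_prompt("How long do refunds take?", docs))
```

The design point for a designer: the model only “knows” what lands in the context, so the quality of retrieval directly bounds the quality of the answer the user sees.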
Q4: How much context can the model take?
This affects how much the LLM can remember about previous user interactions. A smaller context window means the user must re-explain what they want every single time. In contrast, a larger context window means the user can have much longer conversations or provide a much larger code repository to the LLM.
Step 2. Understand the core need and final outcome that the user wants to achieve
I cannot stress enough how important this is. Yes, as designers, we all like to think that we are masters of understanding core user needs and the final outcome that users want to achieve, but most designers follow this process: 1/ define the workflow for how users currently do things, 2/ find pain points in that flow, and 3/ create design solutions to tackle those pain points. I am in no way belittling this process or its importance. But with GenAI, we have to shift our thinking slightly and focus instead on the following questions:
Q1: What is the job to be done?
The job to be done is the underlying need or problem that a user is trying to solve, regardless of the specific solution they choose. It is the functional or emotional outcome that the user is seeking, and it is more fundamental and abstract than a user need: it focuses on the underlying purpose of the user's action rather than the specific details of how they want to achieve it. For example, a user may say they need a car to get to work. However, the job to be done is not simply “get to work”; it is to “transport myself from one location to another in a convenient and timely manner.” This framing allows for a broader range of potential solutions, such as public transportation, ride-sharing, or even walking or biking. By focusing on the job to be done rather than the stated user need, you can develop more innovative solutions, breaking free from the shackles of the status quo.
Q2: Does this particular use case have the potential to change the personas who currently work in this domain?
Don’t limit yourself to thinking only about the current personas. Technology is evolving rapidly, and the current roles may not exist in the future. Always work backward from the core job to be done, keeping the current personas in mind.
Q3: Do the existing processes exist due to the limitations in technology, or do they exist because they are a necessity regardless of technology?
Always think critically about the personas that exist to achieve a specific job. Do they exist because of the limitations of the technology, limitations of how much a human can get done, or something else? If specific personas exist due to some policy or regulations, they will most likely continue. But if personas exist because of limitations of current technology, then maybe they won’t exist in the future.
Step 3. Define the partnership between Humans and AI
We have all seen the magic and the disappointment with AI. We know when it works; it's the best thing ever. But we also know that when it doesn't, it can be a frustrating experience. Depending on the confidence of the GenAI system's output, a human needs to be involved somehow. This is also the stage where you bring in your understanding of the technology and the user and define what interaction model may work best for the particular use case you are trying to solve. In my experience, I have come across three main models for human-AI partnership: 1/hands-off model, 2/co-creation model, and 3/complementary work. [Read more here]
Let’s take the example of a user trying to create a marketing campaign.
Hands-off model:
In this model, humans delegate specific tasks to the AI and review its output afterward, without directly intervening in the AI's process. In the marketing-campaign example, the marketing manager provides a detailed brief to the AI, specifying the target audience, key themes, and deliverables such as slogans, email templates, and social media posts. The AI autonomously generates the campaign, leaving the manager to review, approve, or fine-tune the output. This approach is fast and works well for routine, well-defined campaigns that emphasize efficiency over creativity.
Co-creation model:
This approach entails humans and AI collaboratively working on the same canvas to generate an output together. This method emphasizes a partnership where both contribute in real-time to achieve a shared outcome. In this case, the marketing manager and AI work together in real-time, with the manager drafting initial ideas, such as a slogan or ad copy, and the AI suggesting variations, expanding on concepts, or producing complementary assets like graphics or email drafts. This iterative approach enhances creativity and allows the manager to remain deeply involved while leveraging the AI’s capabilities.
Complementary model:
Humans and AI work in parallel, producing similar outputs independently. This allows cross-referencing and identifying complementary or innovative solutions. In this case, the manager and AI work independently, creating their own campaign version. For example, the manager might focus on crafting a social media strategy while the AI generates an email marketing plan and SEO-friendly content. The outputs are then reviewed and combined, merging the AI’s efficiency with the manager’s domain expertise to create a comprehensive, high-impact campaign.
As you can see, each partnership model offers unique benefits tailored to different levels of complexity, control, and creativity required for the project. To define this partnership, ask yourself:
Q1: Does the user generally enjoy doing this work?
Another way to frame this: does doing this work the way users currently do it add value to their lives? If the work is part of the “necessary evil” the user has to get through, it is a good candidate for the hands-off model. In the software development industry, keeping codebases up to date with the latest technology and security patches is one example. However, a co-creation model may be better suited for work that calls on the user's creativity, like adding an exciting new feature to their web application.
Q2: Does the technology better support one of these partnerships?
It is essential to understand this because, no matter how much we would like to think otherwise, our work is driven by timelines, deadlines, and doing what we can in the shortest amount of time. It becomes crucial, then, to understand which partnership the technology can actually support.
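If it helps to see the two questions above side by side, here is a toy heuristic that maps their answers to a suggested partnership model. The mapping is my own illustrative assumption, not a rule from this framework; a real decision would weigh many more factors.

```python
def suggest_partnership(user_enjoys_task, needs_creativity, tech_supports_realtime):
    """Toy heuristic mapping Q1/Q2 answers to one of the three partnership models."""
    if not user_enjoys_task and not needs_creativity:
        return "hands-off"        # "necessary evil" work: delegate and review
    if needs_creativity and tech_supports_realtime:
        return "co-creation"      # creative work on a shared canvas, if tech allows
    return "complementary"        # otherwise, work in parallel and merge outputs

# e.g. routine security patching: not enjoyed, not creative
print(suggest_partnership(False, False, False))  # → hands-off
```

The point of the exercise is less the answer than the prompt to ask both questions before committing to an interaction model.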
Step 4. Think in terms of user loops
We know that for any complex use case, the outcome of a GenAI system will not be 100% correct; we will need humans in the loop in some form. We also know that GenAI has made output generation really fast, so users can provide some input and quickly see the output. The human-in-the-loop capabilities may differ based on the partnership you define, but training your mind to think in loops can help you unlock new, innovative patterns as you work on more complex use cases.
A user loop is an interaction design paradigm that defines user actions in terms of inputs and outputs; the gap that bridges the inputs and outputs forms a single loop. A loop begins when the GenAI technology serves a particular output, and the user exits the loop when the necessary output is achieved. One or more user loops can exist within a user flow: some use cases may be served with a single loop, whereas more complex use cases may require multiple loops. A loop can end in one iteration or take multiple iterations to reach the desired output.
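The loop idea can be sketched in a few lines of Python: generate an output from the input, let a human accept or refine it, and exit once the necessary output is achieved. The `generate` and `accept` stubs below are placeholders for a real model call and a real human review step, and the refinement step is deliberately simplified.

```python
def user_loop(initial_input, generate, accept, max_iterations=5):
    """One user loop: generate from input, let the user accept or refine, exit when satisfied."""
    prompt = initial_input
    output = None
    for _ in range(max_iterations):
        output = generate(prompt)          # the GenAI system serves an output
        if accept(output):                 # human-in-the-loop checkpoint
            return output                  # necessary output achieved → exit the loop
        prompt = prompt + " (refined)"     # user feedback becomes the next input
    return output                          # safety exit after max_iterations

# Stub generator/acceptor that converge on the second pass, purely for illustration.
result = user_loop(
    "draft slogan",
    generate=lambda p: p.upper(),
    accept=lambda out: "(REFINED)" in out,
)
print(result)  # → DRAFT SLOGAN (REFINED)
```

Notice that a full user flow could chain several of these loops, each with its own exit condition, which is exactly the multi-loop case described above.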
A user loop should contain the following information: