AI First Designer

7 shortcuts to rapid prototyping ⚡️

Try these with Figma Make, v0, Framer, and Lovable.

Avani and Felix Lee
Dec 23, 2025

Hey there! This is a 🔒 subscriber-only edition of AI First Designer (by ADPList) 🔒, to help designers transition into successful AI-First builders. Members get access to proven strategies, frameworks, and playbooks.

For more: 🏛️ Get free 1:1 mentorship | ⭐️ Be first to win in career as an AI-First Designer | 📘 Claim your AI Design Guide for free.


Hi friends,

Over the past few months, I’ve noticed something subtle but consistent in conversations with designers, PMs, and product teams.

A designer shares a prototype.
A PM presents a roadmap.
An engineer asks clarifying questions.

And somewhere in between, the same gap keeps appearing: the idea looks good, but it doesn’t quite feel real yet.

Across startups, agencies, and in-house teams, designers kept circling back to the same frustration: they knew what the experience should feel like, but they couldn’t show it fast enough.

Almost everyone said some version of this:

“I don’t need more design tools. I need faster ways to make ideas real.”

That’s when AI design & prototyping tools entered the conversation, not as a trend, but as a response to friction.

I’ve also created a few helpful resources:

  • Figma Make Essentials

  • Guide to Figma Make Part II

  • A full video series, coming Jan 2026, on AI-First Designer School

Very few designers have clarity on which tool to use, when to use it, and why. The tools looked similar on the surface, yet they solved very different problems depending on where you were in the design process.

So I started digging deeper.

Today, I’ll walk you through the AI design & prototyping tools that matter right now, including Figma Make, v0, Framer AI, and Lovable, and, more importantly:

  • What part of the design process they actually support

  • Where they create real leverage (and where they fall short)

  • How designers are using them to communicate intent earlier and more clearly

By the end, the goal is simple:

To give you a clearer mental model for choosing the right tool so your ideas spend less time being explained and more time being experienced.

Let’s dive in.


1. How to understand the tools out there (from a designer’s lens)

When we talk about AI design & prototyping tools, we’re not talking about design inspiration or pretty visuals. We’re talking about tools that help designers move faster from idea to interface, create interactive, high-fidelity prototypes, and communicate intent more clearly to PMs, stakeholders, and engineers.

At a high level, today’s AI design & prototyping ecosystem has four key players:

  • Figma Make

  • v0

  • Framer AI

  • Lovable

All four tools are strong, and over time, they will continue to build toward feature parity. But they are optimized for very different moments in a designer’s workflow.

If we had to describe when to use each tool in a single sentence, it would be:

  • Figma Make → AI-powered design inside your existing design system

  • v0 → Beautiful, production-style UI with minimal effort

  • Framer AI → From prompt to live, responsive website

  • Lovable → Product-like prototypes that actually behave like apps

Here’s a layer deeper on each.

Figma Make

Figma Make is best understood not as a new tool, but as an AI layer inside the designer’s home base. It’s about working within existing design systems, generating layouts and flows that feel native to Figma, and speeding up early exploration without breaking established processes.

Figma Make is not built to generate production-ready code or simulate complex product logic; it’s best for early exploration within a design system. For example, here’s a prompt you can try:

Act as a senior product designer working inside an established design system.

Context:
– This is a B2B SaaS dashboard for marketing managers.
– We already have typography styles, spacing tokens, and button components defined.

Task:
– Generate a dashboard layout focused on “Campaign Performance Overview.”
– Use existing components only (cards, tables, charts, filters).
– Prioritize clarity and scannability over visual flair.
– Assume data density is medium-to-high.

Constraints:
– Desktop-first layout
– 12-column grid
– Clear hierarchy between primary KPIs and secondary metrics

Output:
A structured screen layout with component placement, hierarchy notes, and interaction hints that can be directly refined in Figma.

v0

Next, we have v0, which excels at generating polished, modern front-end UI extremely fast. Its outputs look closer to real products than traditional design mockups, which makes it ideal for high-fidelity demos, stakeholder presentations, and pressure-testing UI concepts before full design investment.

Here’s a prompt you can try:

Act as a front-end UI designer creating a polished, modern interface.

Context:
– This is a consumer-facing fintech product.
– Target audience: Gen Z users familiar with apps like Revolut and Notion.

Task:
– Generate a clean, modern home screen UI.
– Emphasize strong typography, spacing, and visual hierarchy.
– Include sections for balance summary, recent transactions, and quick actions.

Style Direction:
– Minimal, light theme
– Subtle shadows and rounded corners
– Modern sans-serif typography

Output:
A production-style UI layout that looks close to a real app interface, suitable for demos and stakeholder reviews.

However, it remains front-end focused and is less concerned with flows, systems, or long-term design structure. Think of it as a UI accelerator, not a design system builder.

Framer AI

Framer AI is optimized for designers who want the output to be live. Its biggest strength is closing the gap between design, prototyping, and publishing. From a single prompt, designers can generate responsive layouts, animations, and fully deployed websites.

It’s especially powerful for portfolio sites, brand campaigns, landing pages, and marketing-led design work.

Here’s a prompt you can try:

Act as a digital brand designer creating a live marketing website.

Context:
– Brand: Premium skincare startup
– Goal: Explain the product clearly and drive sign-ups

Task:
– Create a responsive landing page structure.
– Include hero section, value proposition, product benefits, social proof, and CTA.
– Use clear scroll-based storytelling.

Design Direction:
– Soft, minimal aesthetic
– Neutral color palette with one accent color
– Large typography and generous spacing

Output:
A fully responsive, live website layout with animations and interactions that feel refined but not distracting.

Framer AI is less suited for complex product flows, but it’s unmatched when speed-to-live matters more than system depth.

Lovable

Lastly, Lovable sits closest to product behavior, not just visual design. Its strength lies in creating prototypes that respond to user input, simulate real product logic, and feel closer to a working app than a mockup.

Designers using Lovable are often validating UX flows, testing interaction-heavy concepts, and collaborating closely with PMs and engineers.

Here’s a very powerful prompt:

Act as a UX concept architect translating raw ideas into structured wireframes.

Requirements:
– Interpret the provided text description and break it into clear page sections (hero, features, CTA, navigation, etc.).
– Define hierarchy, spacing intent, and content grouping as if preparing a wireframe for Figma.
– Describe component placeholders (images, headings, body text, buttons, icons) with size and importance notes.
– Ensure layout logic is responsive-friendly (desktop-first but adaptable).

Context:
– This is an onboarding flow for a productivity app.
– The goal is to guide users from signup to first task completion.

Output:
A structured, section-by-section wireframe blueprint written in an HTML-like or Figma-friendly format.

It has a slightly steeper learning curve than pure design tools, but in return, it produces far more realistic, decision-ready prototypes.

Chatbots (where ChatGPT fits for designers)

These are adjacent tools. They don’t replace design platforms, but they significantly shape how designers think, explore, and communicate ideas.

ChatGPT is particularly strong at helping designers:

  • Translate vague ideas into structured flows

  • Write and refine UX copy

  • Generate user scenarios, edge cases, and micro-interactions

  • Sanity-check IA, navigation, and user journeys

Here’s a quick prompt to try:

Act as a senior UX designer helping me clarify a product concept.

Context:
– I’m designing a feature for a habit-tracking app.
– Users often drop off after the first week.

Task:
– Identify 3–5 UX problems that could cause early drop-off.
– Propose interface-level solutions (not business logic).
– Suggest micro-interactions or feedback moments that reinforce habit formation.

Output:
A clear, structured breakdown that I can directly translate into screens and flows.

It can also generate basic front-end code or prototype logic, but most designers don’t use it as a place to iterate visually. Instead, they use it to think faster before designing.
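As a small illustration of the kind of prototype logic a chatbot can hand you, here’s a hypothetical streak calculator for the habit-tracking example above. The function name and data shape are our own assumptions for the sketch, not output from any specific tool:

```javascript
// Hypothetical sketch: streak logic a chatbot might draft for a
// habit-tracking prototype. Dates are plain "YYYY-MM-DD" strings.
// computeStreak returns how many consecutive days, counting back
// from `today`, the user completed their habit.
function computeStreak(completedDates, today) {
  const done = new Set(completedDates);
  let streak = 0;
  let day = new Date(today + "T00:00:00Z");
  // Walk backwards one day at a time while each date is in the set.
  while (done.has(day.toISOString().slice(0, 10))) {
    streak += 1;
    day = new Date(day.getTime() - 24 * 60 * 60 * 1000);
  }
  return streak;
}

console.log(computeStreak(["2026-01-01", "2026-01-02", "2026-01-03"], "2026-01-03")); // 3
console.log(computeStreak(["2026-01-01"], "2026-01-03")); // 0 (streak broken)
```

Even a throwaway snippet like this is enough to make a prototype feel real in user testing, which is exactly the gap these tools close.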

Claude, with its Artifacts system, allows designers to view and run small prototypes or UI concepts directly inside the chat. However, it’s best suited for single-file experiments and quick explorations, not ongoing iteration.


2. How these tools change the design process

So the question becomes: with these new AI design & prototyping tools, how should designers actually be using them?

The biggest shift isn’t speed; it’s where design work happens in the product lifecycle. For years, design has been treated as a phase; AI is turning it into a continuous layer.

But let’s look at what changed.

The old way: design as a phase

Here’s what most design workflows have looked like for the past decade:

Ideation

Ideas live in documents, whiteboards, or loose conversations. Designers rarely prototype at this stage — only a small minority do (~10%). Most ideas remain abstract.

Planning

Designers produce early sketches or low-fidelity wireframes. Prototypes exist, but they’re often static and incomplete. Real interaction is postponed.

Discovery

In many teams, this phase is skipped entirely. Where it does exist, research relies on explanations instead of experiences.

Design exploration

This is where design finally “starts.” High-fidelity screens are created in Figma, often without validation of the underlying flow.

Handoff

Design is passed to engineering as static files. Engineers reinterpret intent and rebuild everything from scratch.

Iteration

Feedback cycles are slow. Changes require redesign, re-alignment, and re-implementation.

This workflow made sense when prototyping was expensive. But AI has fundamentally changed that cost.

The new way: design as the interface to thinking

Forward-thinking design teams are now using prototypes throughout the entire process, not just at the end. Design is no longer just the output; it’s the language.

Here’s how the new workflow looks.

1. Ideation → from thought to interface

Designers are using AI tools to:

  • Explore multiple directions quickly

  • Visualize ideas before committing to structure

  • Test assumptions visually instead of verbally

How designers do this today:

  • Use ChatGPT to clarify flows, edge cases, and UX logic

  • Generate early layouts using Figma Make or v0

  • Treat outputs as disposable thinking, not final design

Try this prompt:

Act as an experienced UX designer.
Help me explore three different ways a user could complete the following task:
[describe the task]

Focus on flow logic and user thinking, not visual design.

For each approach, explain:

1. User mindset
How the user expects this task to work

2. Starting point
What triggers the flow and where the user begins

3. Step-by-step flow
Key steps, decisions, and branches
Where users might hesitate or get confused

4. Tradeoffs
What this approach does well
What it gives up

5. Edge cases
Common mistakes or drop-off points
How the system helps users recover

After outlining all three, briefly:
- Compare the approaches
- Call out the main assumptions behind each
- Suggest what to prototype first to validate them

Treat these flows as thinking tools, not final UX.

2. Concept development → from screens to systems

Once a direction feels promising, designers shift from exploration to structure. This is where AI helps designers:

  • Establish hierarchy

  • Compare layout approaches

  • Reduce time spent on repetitive setup work

Typical tools:

  • Figma Make for system-aligned exploration

  • v0 for pressure-testing visual polish

3. Discovery → let users experience the idea

Instead of explaining ideas, designers now show them. AI-powered prototypes allow teams to:

  • Validate flows with real users

  • Identify confusion early

  • Test interaction logic, not just layout

Tools that shine here:

This post is for paid subscribers

© 2026 ADPList Pte. Ltd.