Some of us stumbled into AI design without meaning to. One day you're pushing pixels for a fintech dashboard, the next you're figuring out how to make a chatbot feel less like a customer service nightmare and more like a colleague who actually listens. That's how it happened for me. I was working on a feature, and someone said, "Can we add AI to this?" I didn't have a playbook. I don't think anyone did.

If you're reading this, you're probably in a similar position. Maybe your product team just decided everything needs to be "AI-powered" now. Maybe you're genuinely curious about what changes when the interface can talk back. Either way, welcome. I've spent the past few months digging through research and case studies, and running my own experiments, to figure out what actually works. This is what I've learned.


The old rules still apply (mostly)

Here's something that surprised me: designing for AI isn't as alien as it sounds. The fundamentals (user needs, clear feedback, intuitive flows) don't disappear just because there's a language model involved. If anything, they matter more. When the system can generate unpredictable outputs, your job as a designer is to create enough structure that users don't feel lost.

But there's a catch. Traditional interfaces are deterministic. You click a button, something happens, and it happens the same way every time. AI breaks that contract. The same prompt can yield different responses. The system might confidently give you wrong information. It might do exactly what you asked but not what you meant.

According to research from Maggie Appleton, conversational AI is slow. Users struggle to articulate their intent efficiently, sometimes taking 30 to 60 seconds just to type what they need. I read that and thought about every time I've watched someone stare at a blank chat input, unsure what to type. The blinking cursor is a terrible onboarding experience. This is why the best AI products don't just throw you into a conversation. They guide you.

Start with the user, sprinkle AI where it helps

There's a phrase I keep coming back to: AI-second design. The idea is simple. You don't start by asking, "How can we use AI here?" You start by asking, "What does the user actually need?" Then you figure out where AI can quietly make that easier.

Think about it like seasoning. You don't dump an entire jar of pepper into a pot because you bought pepper. You taste the food first, then add what's missing. AI is the same. It should enhance what's already working, not become the entire meal.

This sounds obvious, but you'd be surprised how many products get it backwards. They build an AI feature, then search for a problem it can solve. Users can smell that from a mile away. It feels forced, like when a brand tries too hard to be relatable on social media.

Jakob Nielsen's team at NN/g calls this the difference between "AI-first" and "user-first" design. The products that win are the ones where you barely notice the AI is there. It just makes things faster, smarter, or more personal.

Practical example: Instead of making users type everything into a chat, consider what I call "wayfinders." These are patterns that help people get started: example prompts, suggestion chips, templates, and follow-up questions. Spotify does this well with their AI playlists. You don't have to describe your mood from scratch; they give you starting points like "chill music for a Sunday afternoon" that you can tweak. The AI does the heavy lifting, but the user feels in control.

The trust problem (and why it's your problem now)

Here's the uncomfortable truth: most people don't trust AI. And honestly, they probably shouldn't, at least not blindly. The systems hallucinate. They make things up with complete confidence. They reflect biases from their training data. As designers and developers, we're now in the business of managing expectations we didn't create.

McKinsey's research on AI trust puts it bluntly: without trust, users won't use your system. Trust comes from understanding outputs and how they're created.

I think about trust in three layers:

  1. Visibility - Can the user see what the AI is doing?
  2. Explainability - Can they understand why it did what it did?
  3. Control - Can they change it, override it, or ignore it entirely?

If you nail all three, you've built something people can actually rely on. If you miss even one, you've built something that will frustrate them eventually.

What this looks like in practice

  • LinkedIn tells recruiters why a candidate is a good match: "This candidate's profile matches four of the five required skills." That's visibility and explainability in one sentence.
  • Microsoft Copilot shows citations so you can verify where information came from. You can click through and check the source yourself.
  • PayPal uses machine learning to detect fraud and explains why transactions are flagged, giving users clear next steps. Their AI-powered detection has significantly reduced fraudulent transactions, and most users say they trust it. That trust didn't come from accuracy alone. It came from transparency.

There's also what researchers call the "trust trap," a mismatch between how confident users feel and how capable the AI actually is. Some people over-trust, accepting everything the model says without verification. Others under-trust, avoiding helpful features because they assume the worst. Your design needs to calibrate both extremes.

Patterns that actually work

I've been collecting AI design patterns like some people collect sneakers. Emily Campbell's Shape of AI is the best taxonomy I've found. She's catalogued patterns across six categories. Here are the ones I keep reaching for:

Wayfinders

These help users get started. Think example galleries, prompt suggestions, templates, and that little "Try asking..." text you see in good chatbots. ChatGPT shows suggested prompts on the home screen. Notion AI offers action chips like "Summarise" and "Translate" so you don't have to type from scratch. The goal is to reduce the anxiety of the blank input field.
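
To make that concrete, here's a rough TypeScript sketch of how suggestion chips might be modelled. The labels and fields are invented; the point is to treat starters as data you can tweak without touching the input itself.

```typescript
// Illustrative sketch: wayfinder suggestions as plain data, so the same chips
// can be rendered anywhere and edited without code changes.
interface WayfinderChip {
  label: string;          // short text shown on the chip
  promptTemplate: string; // the fuller prompt the chip expands into
}

const starterChips: WayfinderChip[] = [
  { label: "Summarise", promptTemplate: "Summarise the following text in three bullet points:\n" },
  { label: "Translate", promptTemplate: "Translate the following text into French:\n" },
  { label: "Sunday chill", promptTemplate: "Suggest ten relaxed songs for a quiet Sunday afternoon." },
];

// A tapped chip becomes an editable draft, so the user refines a starting point
// instead of staring at an empty field.
function chipToDraft(chip: WayfinderChip, userText = ""): string {
  return chip.promptTemplate + userText;
}
```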

Governors

These keep humans in the loop. Action plans that show what the AI is about to do before it does it. Draft modes where outputs are suggestions, not final decisions. Memory controls that let users see and edit what the system "remembers" about them.

GitHub Copilot does this well. It suggests code inline, but you have to explicitly accept each suggestion. The AI proposes, you dispose. That's the right power dynamic for high-stakes work.
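
Here's a minimal sketch of that power dynamic in TypeScript, with made-up names. The only thing that matters is that the side effect runs when the user accepts, and never before.

```typescript
// Illustrative sketch of a governor: the model proposes, the user disposes.
type SuggestionStatus = "proposed" | "accepted" | "dismissed";

interface Suggestion {
  id: string;
  summary: string;   // human-readable description of what would change
  apply: () => void; // the actual side effect, run only on acceptance
  status: SuggestionStatus;
}

function accept(suggestion: Suggestion): Suggestion {
  suggestion.apply(); // the change happens here, and only here
  return { ...suggestion, status: "accepted" };
}

function dismiss(suggestion: Suggestion): Suggestion {
  return { ...suggestion, status: "dismissed" }; // no side effect at all
}
```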

Trust builders

These earn credibility. Caveats that acknowledge uncertainty ("I'm not 100% sure, but..."). Citations that link to sources. Disclosure labels that clarify when content is AI-generated. Watermarks on generated images.

Perplexity built their entire product around citations. Every claim links to a source. It's slower than a pure chatbot, but the tradeoff is credibility.
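
One way to bake this in, sketched with invented field names: the answer object carries its sources and a disclosure flag, so the UI can't show the claim without the receipts.

```typescript
// Illustrative sketch: citations and disclosure travel with the answer itself.
interface Citation {
  title: string;
  url: string;
}

interface AiAnswer {
  text: string;
  citations: Citation[];
  aiGenerated: true;       // disclosure label: never render this as human-written
  confidenceNote?: string; // optional caveat, e.g. "Based on two sources only"
}

function renderFooter(answer: AiAnswer): string {
  const sources = answer.citations
    .map((c, i) => `[${i + 1}] ${c.title} (${c.url})`)
    .join("\n");
  const caveat = answer.confidenceNote ? `\nNote: ${answer.confidenceNote}` : "";
  return `AI-generated answer.${caveat}\nSources:\n${sources}`;
}
```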

Tuners

These let users refine their intent. Attachments, filters, tone selectors, parameter sliders. Instead of forcing users to describe exactly what they want upfront, you give them knobs to adjust after the fact.

Midjourney and DALL-E both let you adjust style, aspect ratio, and other parameters after generating an initial image. This iterative approach matches how creative work actually happens.
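
As a quick sketch (field names invented), tuner state can live next to the original request, so each refinement nudges one knob instead of rewriting the prompt.

```typescript
// Illustrative sketch: post-generation knobs kept separate from the prompt.
interface ImageTuners {
  stylePreset: "photo" | "illustration" | "watercolour";
  aspectRatio: "1:1" | "16:9" | "9:16";
  variationStrength: number; // 0 = stay close to the last result, 1 = wander freely
}

interface GenerationRequest {
  prompt: string;
  tuners: ImageTuners;
}

// Re-running with adjusted tuners keeps the prompt stable and changes one thing at a time.
function retune(request: GenerationRequest, changes: Partial<ImageTuners>): GenerationRequest {
  return { ...request, tuners: { ...request.tuners, ...changes } };
}
```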

Errors will happen. Design for them.

If you've ever been stuck in a chatbot loop where the bot keeps misunderstanding you, you know how maddening bad error handling is. The frustrating part isn't that the AI made a mistake. It's that there was no way out.

Jakob Nielsen's error message principles apply here too: errors should be expressed in plain language, indicate the problem clearly, and suggest a solution.

Good error design in AI products does three things:

  1. Clears up the misunderstanding transparently. "I think you're asking about X, but I'm not sure I understood correctly."
  2. Explains what the AI can and can't do. Boundaries aren't limitations; they're clarity.
  3. Offers a path forward. A button to rephrase, a suggestion to try something else, or (critically) a way to talk to a human.

That last one matters more than most companies admit. Studies show users are less frustrated when they know they could talk to a human, even if they never do. The option itself is reassuring.

One more thing: don't repeat the same error message over and over. If the AI fails three times with the same response, it stops feeling like a conversation and starts feeling like a broken machine. Write multiple variations. Make the failures feel human.
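
Here's a small, hypothetical sketch of both ideas together: rotate the phrasing, and surface the human escape hatch once the failures stack up.

```typescript
// Illustrative sketch: varied clarification messages plus an escalation path.
const clarifications = [
  "I think you're asking about {topic}, but I'm not sure I understood. Could you rephrase?",
  "I may have misread that. Do you mean {topic}, or something else?",
  "Still not quite getting it. Want to try different wording, or pick a suggestion below?",
];

function errorMessage(failureCount: number, topic: string): { text: string; offerHuman: boolean } {
  const template = clarifications[Math.min(failureCount, clarifications.length - 1)];
  return {
    text: template.replace("{topic}", topic),
    // After the third miss, stop pretending: offer a person.
    offerHuman: failureCount >= 2,
  };
}
```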

The ethics aren't optional

I'll be honest. This section could be its own article. But I can't write about AI design without at least touching on it.

AI systems can perpetuate bias. They can invade privacy. They can manipulate users in ways that aren't immediately obvious. As designers and developers, we're not neutral. We make choices that shape how these systems behave, who they serve well, and who they leave behind.

UNESCO's Recommendation on the Ethics of AI outlines core principles: human rights and dignity, transparency, fairness, and human oversight. Microsoft's Responsible AI principles add accountability, inclusiveness, reliability, and privacy.

Some principles I try to keep in mind:

  • Design with diverse users, not just for them. If your test group doesn't include people with different abilities, contexts, and constraints, you're building blind spots into the product.
  • Be transparent about what's AI and what's not. Users deserve to know when they're interacting with a machine.
  • Question the defaults. Algorithms are trained on historical data, which often reflects historical inequities. What assumptions is your system making? Who benefits from those assumptions?
  • Build in human oversight for high-stakes decisions. AI can assist, but a person should sign off on anything that significantly affects someone's life: hiring, lending, medical advice, and so on.

The point isn't to memorise principles; it's to build the habit of asking uncomfortable questions before you ship.

Where the interface is heading: predictions for the next era

Here's where I'll stick my neck out. Based on the patterns I'm seeing, the research I've read, and the products I've used, here's what I think is coming:

1. Chat will become one input among many

The chatbot hype made everyone think text prompts would replace everything. They won't. Maggie Appleton's research shows that task-oriented UIs (temperature controls, knobs, sliders, semantic spreadsheets, infinite canvases) often outperform pure conversation for complex work.

Think about Figma's AI features. You don't chat with Figma; you click buttons, drag sliders, and use contextual menus. The AI is embedded in the interface, not bolted on as a chat window.

Prediction: The best AI products will be multimodal by default. Voice, gesture, text, and direct manipulation will work together. Chat will be the fallback, not the primary interface.

2. Agentic AI will require new design paradigms

Gartner named agentic AI their top technology trend for 2025. These are AI systems that can take autonomous actions on your behalf: booking flights, managing calendars, writing and sending emails.

This changes everything. You're no longer designing a tool; you're designing a collaborator. Users need to:

  • Understand what the agent is doing (visibility)
  • Set boundaries for what it's allowed to do (control)
  • Monitor its actions and intervene when needed (oversight)

Prediction: We'll see new UI patterns for "delegation interfaces," ways to assign tasks to AI agents, review their plans before execution, and audit what they've done. Think of it like managing a virtual assistant who's very capable but needs supervision.

Anthropic's computer use feature and Google's Project Mariner are early examples. They show the AI's planned actions and let you approve or modify before execution.
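
To make the idea tangible, here's a hypothetical sketch of what a delegation interface might track: a reviewable plan, an explicit approval gate, and an audit trail of what actually ran.

```typescript
// Illustrative sketch of a delegation interface for an AI agent.
interface PlannedStep {
  description: string; // e.g. "Search for flights on 12 March"
}

interface AgentPlan {
  goal: string;
  steps: PlannedStep[];
  approved: boolean; // set only after the user has reviewed the plan
}

interface AuditEntry {
  step: string;
  executedAt: Date;
}

function execute(plan: AgentPlan, runStep: (step: PlannedStep) => void): AuditEntry[] {
  if (!plan.approved) throw new Error("Plan must be reviewed and approved before execution");
  const audit: AuditEntry[] = [];
  for (const step of plan.steps) {
    runStep(step);                                                  // the agent acts...
    audit.push({ step: step.description, executedAt: new Date() }); // ...and leaves a trail
  }
  return audit;
}
```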

3. Progressive disclosure will become essential

As AI capabilities grow, so does complexity. But users can only handle so much information at once. The solution is progressive disclosure, revealing complexity incrementally, only when needed.

Owkin's MSIntuit CRC, a colorectal cancer screening tool, does this brilliantly. Doctors see a simple prediction first. If they want to understand why, they can drill down into visual explanations and confidence scores. The interface doesn't overwhelm; it layers.

Prediction: Default AI interfaces will become simpler and cleaner, with depth hidden behind "show more" toggles, expandable sections, and drill-down interactions. Power users get full control; casual users get streamlined experiences.
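
One way to think about it in code, with invented names: model the layers explicitly, so the default view stays simple and depth is something the user asks for.

```typescript
// Illustrative sketch: a result with explicit layers of detail.
interface LayeredResult {
  headline: string;                              // always visible, e.g. "Low risk detected"
  detail?: string;                               // revealed by "Show more"
  evidence?: { source: string; note: string }[]; // deepest layer, for those who drill down
}

function visibleLayers(result: LayeredResult, expansionLevel: 0 | 1 | 2): string[] {
  const layers = [result.headline];
  if (expansionLevel >= 1 && result.detail) layers.push(result.detail);
  if (expansionLevel >= 2 && result.evidence) {
    layers.push(...result.evidence.map((e) => `${e.note} (${e.source})`));
  }
  return layers;
}
```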

4. The interface will become adaptive

Right now, most interfaces are static. They look the same for everyone. AI enables true personalisation at the interface level.

Netflix already personalises thumbnail artwork based on your viewing history. Airbnb uses AI to reorder search results based on your preferences.

Prediction: Interfaces will increasingly adapt in real time. Not just content, but layout, density, and feature visibility. A power user sees more options; a new user sees guardrails. The same product feels different for different people.

5. Confidence and uncertainty will be visible

Current AI products often hide uncertainty. The model outputs text with the same visual treatment whether it's 99% confident or guessing wildly.

Prediction: Future interfaces will surface confidence levels visually. Imagine a gradient that shows how certain the AI is about each claim, or a subtle indicator that says "This answer is well-supported" vs. "This is my best guess."

Elicit, an AI research assistant, already does a version of this. It shows which claims are supported by citations and which are inferences.
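
A toy sketch of the idea, with thresholds that are purely illustrative: map raw confidence onto plain-language labels a reader can actually calibrate on.

```typescript
// Illustrative sketch: confidence scores translated into readable labels.
type ConfidenceLabel = "well-supported" | "likely" | "best guess";

function labelConfidence(score: number): ConfidenceLabel {
  // Thresholds are made up; real ones should come from evaluating the model.
  if (score >= 0.85) return "well-supported";
  if (score >= 0.6) return "likely";
  return "best guess";
}

// Annotate each claim before rendering, so uncertainty is visible per claim.
const claims = [
  { text: "The study included 1,200 participants.", score: 0.92 },
  { text: "The effect probably generalises to adults over 65.", score: 0.45 },
];

for (const claim of claims) {
  console.log(`${labelConfidence(claim.score)}: ${claim.text}`);
}
```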

How to adapt: a practical guide for designers and developers

If you've made it this far, you're probably asking: "Okay, but what do I actually do differently?" Here's my answer.

For designers

1. Learn how the models work, at least at a surface level.

You don't need to understand transformer architecture in depth, but you should know what tokens are, what context windows mean, and why models hallucinate. Jay Alammar's Illustrated Transformer is a good starting point.

2. Design for variability.

Your mockups should account for the fact that AI outputs change. What happens when the response is one sentence? What about when it's ten paragraphs? Design the extremes, not just the happy path.

3. Build feedback loops into everything.

Thumbs up/down buttons aren't just for collecting data. They signal to users that their input matters. ChatGPT, Claude, and Gemini all use these prominently.
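
If it helps, here's a hypothetical shape for a feedback event; the endpoint is made up, but capturing the context alongside the rating is what makes the signal useful later.

```typescript
// Illustrative sketch: a feedback event that records context, not just the rating.
interface FeedbackEvent {
  responseId: string;
  rating: "up" | "down";
  comment?: string;      // optional free text, often the most valuable part
  promptVersion: string; // which system prompt produced this response
  createdAt: string;
}

async function recordFeedback(event: FeedbackEvent): Promise<void> {
  // The endpoint name is invented; point this at your own backend.
  await fetch("/api/ai-feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}
```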

4. Treat prompts as a design material.

The way you phrase a system prompt affects outputs as much as any visual decision. Collaborate with your developers on prompt engineering. It's not just a backend concern.
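
One way to make that collaboration concrete (names invented): treat the system prompt as a spec with reviewable parts, rather than a wall of text only one person understands.

```typescript
// Illustrative sketch: the system prompt assembled from named, reviewable parts.
interface SystemPromptSpec {
  role: string;          // who the assistant is
  tone: string;          // voice and register, which is a design decision
  constraints: string[]; // boundaries, e.g. "admit uncertainty instead of guessing"
}

function buildSystemPrompt(spec: SystemPromptSpec): string {
  return [
    `You are ${spec.role}.`,
    `Tone: ${spec.tone}.`,
    "Constraints:",
    ...spec.constraints.map((c) => `- ${c}`),
  ].join("\n");
}

const supportAssistant = buildSystemPrompt({
  role: "a support assistant for a budgeting app",
  tone: "plain, warm, never salesy",
  constraints: ["Admit uncertainty instead of guessing", "Never give legal or tax advice"],
});
```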

5. Stay close to the research.

Bookmark NN/g's AI articles, Shape of AI, and Google's People + AI Guidebook. These aren't academic exercises; they're practical resources updated regularly.

For developers

1. Make AI behaviour observable.

Log what the model does, what confidence scores it produces, and how users respond. This isn't just for debugging. It's how you improve the product.
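
A minimal sketch of what that might look like, assuming a plain logging pipeline; the field names are illustrative.

```typescript
// Illustrative sketch: one structured record per model call.
interface ModelCallLog {
  requestId: string;
  model: string;
  promptVersion: string;
  latencyMs: number;
  outputTokens: number;
  userAction?: "accepted" | "edited" | "dismissed"; // filled in later, if known
}

function logModelCall(entry: ModelCallLog): void {
  // Plain stdout here; in production this would feed your logging pipeline.
  console.log(JSON.stringify({ type: "model_call", ...entry }));
}

logModelCall({
  requestId: "req_123",
  model: "gpt-4o",              // illustrative model name
  promptVersion: "summarise-v3",
  latencyMs: 1840,
  outputTokens: 212,
});
```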

2. Build in human overrides.

Every AI action should have an escape hatch. Users should be able to dismiss suggestions, undo actions, and override recommendations. Make these controls obvious, not hidden.

3. Version your prompts.

Treat prompts like code. Store them in version control. Document what each version is optimised for. A/B test when you can.
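
Even something as simple as this hypothetical in-repo record gives you history, intent, and an easy rollback path.

```typescript
// Illustrative sketch: prompts as versioned records checked into the repo.
interface PromptVersion {
  id: string;           // e.g. "summarise-v3"
  text: string;
  optimisedFor: string; // what this revision was tuned to improve
  createdAt: string;
}

const summarisePrompts: PromptVersion[] = [
  {
    id: "summarise-v2",
    text: "Summarise the document in five bullet points.",
    optimisedFor: "brevity",
    createdAt: "2024-11-02",
  },
  {
    id: "summarise-v3",
    text: "Summarise the document in five bullet points. Quote figures exactly as written.",
    optimisedFor: "fewer numeric mistakes",
    createdAt: "2025-01-14",
  },
];

// Pin the active version explicitly, so an A/B test or a rollback is a one-line change.
const activePromptId = "summarise-v3";
```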

4. Set sensible defaults, but allow customisation.

Temperature, response length, model choice: these parameters matter. Expose them to power users, but pick intelligent defaults for everyone else.
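
A small sketch of the pattern, with illustrative values: defaults baked in, a narrow override surface for those who ask for it.

```typescript
// Illustrative sketch: sensible defaults with optional overrides.
interface GenerationSettings {
  model: string;
  temperature: number; // lower = more predictable, higher = more varied
  maxOutputTokens: number;
}

const defaults: GenerationSettings = {
  model: "gpt-4o-mini", // illustrative default
  temperature: 0.3,     // conservative choice for a productivity tool
  maxOutputTokens: 800,
};

function resolveSettings(overrides: Partial<GenerationSettings> = {}): GenerationSettings {
  return { ...defaults, ...overrides };
}

// Casual users never see this; power users can opt into more creative output.
const creative = resolveSettings({ temperature: 0.9 });
```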

5. Design for latency.

AI responses take time. Use streaming where possible (word by word feels faster than waiting for the complete response). Show progress indicators. Vercel's AI SDK makes streaming straightforward in web apps.
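
Here's a bare-bones sketch of client-side streaming that doesn't assume any particular SDK; the endpoint is invented, and each chunk is handed to the UI as soon as it arrives.

```typescript
// Illustrative sketch: read the response body as a stream and render incrementally.
async function streamCompletion(prompt: string, onChunk: (text: string) => void): Promise<void> {
  const response = await fetch("/api/complete", { // endpoint name is made up
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!response.body) throw new Error("Streaming not supported by this response");

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true })); // partial text, straight to the UI
  }
}

// Usage: append each chunk to the visible message so the reply appears word by word.
let visibleText = "";
streamCompletion("Explain progressive disclosure in one paragraph.", (chunk) => {
  visibleText += chunk; // re-render the message bubble with the partial text here
}).catch(console.error);
```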

For both

Experiment relentlessly. Build small prototypes. Test them with real users. Watch where they get confused. The field is moving too fast for best practices to solidify, which means the people who learn fastest will win.

"Put AI models in the hands of a generalist, and you have yourself a one-person army."

I still believe that. The designer who understands a bit of code, or the developer who understands a bit of psychology, can ship things neither could build alone.

What your role is becoming

Here's where I'll get a bit personal. If you're a product designer or developer today, your role is shifting. The screen-level work (layouts, components, basic logic) is increasingly automated. I've seen tools generate decent UI from a text prompt in seconds. I've watched AI write functional code faster than I could outline the requirements.

But here's what I've also noticed: the tools are only as good as the person directing them. They need context. They need constraints. They need someone who understands the problem well enough to know when the AI is solving the wrong thing.

What's harder to automate is the thinking behind the work. Research. Strategy. Knowing which problems to solve and why. Understanding context that doesn't fit neatly into a prompt. That's where you'll continue to add value.

I've written before about the T-shaped professional, someone with deep expertise in one area and broad knowledge across many. In the AI era, this matters even more. The designer who understands code, psychology, and business strategy can wield these tools far more effectively than someone who only knows one discipline.

Where to go from here

If you've made it this far, you're already ahead of most people. Here's how I'd suggest moving forward:

  1. Pick one AI product you use regularly. Study it. Notice what works and what frustrates you. Reverse-engineer the design decisions.

  2. Build something small. A chatbot, a prompt-based tool, a generative feature in an existing project. You'll learn more in a weekend of tinkering than a month of reading.

  3. Read the research. Microsoft's HAX Toolkit, NN/g's AI studies, Shape of AI, Google's PAIR Guidebook. These aren't dry academic papers; they're practical guides you can use immediately.

  4. Stay sceptical. Not every product needs AI. Not every AI feature makes things better. Your job is to know the difference.

The tools are evolving faster than any of us can track. What won't change is the core question we've always asked: Does this actually help the person using it? If you keep that at the centre, you'll figure out the rest.

Resources

Design patterns and guidelines

Emily Campbell's Shape of AI, NN/g's AI articles, Google's People + AI Guidebook, and Microsoft's HAX Toolkit.

Understanding how models work

Jay Alammar's Illustrated Transformer.

Developer tools

Vercel's AI SDK, for streaming responses in web apps.

Ethics and responsible AI

UNESCO's Recommendation on the Ethics of AI and Microsoft's Responsible AI principles.

Happy building.