QBO Welcome 


Diagnosing AI Help Panel Abandonment

— PLATFORM

Desktop


— DISCIPLINES

UX Research

Product Design

Prototyping


— TOOLS

Figma

FigJam


— TECH

Generative AI

Intuit, Inc.

A financial software company serving over 100 million customers with products including QuickBooks, TurboTax, and Intuit Enterprise Suite.


Intuit Assist: Welcome

The opening state of the AI-powered help panel inside QuickBooks Online and TurboTax Online. Designed to surface personalized recommendations and reduce contact volume by keeping users in-product.

Role

Lead Product Designer: Research Lead, Ideation, Design

  • UX Research: Moderated and unmoderated sessions across QBO and TTO to identify where trust broke down in the AI help experience
  • Synthesis: Affinity mapping and insight development from 8 participant sessions
  • Ideation: Cross-functional workshop facilitation to generate and pressure-test design directions
  • Design: Redesigned the Welcome state with a contextual recommendation and rationalization message
  • Stakeholder Communication: Video storyboard to land the direction with PM and content leads


KEY TAKEAWAYS

  • Trust: The opening state of an AI feature is a capability signal. Generic tiles read as evidence the system is not intelligent, regardless of what runs underneath.

  • Inference: A single contextually relevant suggestion with a plain-language rationale outperformed four generic tiles, even when those tiles had better content.

  • High-stakes users: Bank connections and tax documents are time-sensitive and emotionally loaded. Users in this state bypass AI unless the interface gives them an immediate reason to trust it.


THE PROBLEM

The team was iterating on Welcome, the opening state of the Intuit Assist help panel in QuickBooks Online. In its existing form, Welcome presented four generic topic tiles and an input box beneath a GenAI disclaimer.


The goal of the redesign was to surface a single personalized recommendation based on the user’s navigation context, paired with a brief rationale explaining why it was being shown.


Before committing to that direction, the team needed to know: does the current experience actually fail? And if so, where?



RESEARCH

Moderated and unmoderated sessions with 14 users across QBO and TTO tracked behavior from the moment they encountered a problem through their first interaction with the help panel. The QBO scenario centered on connecting a bank account, chosen because bank connections are time-sensitive, business-critical, and emotionally loaded. The TTO scenario asked users to amend a prior-year return, a similarly high-stakes task that requires clear guidance under uncertainty.

FINDINGS

High-stakes users bypassed the AI before opening it

Participants who ran their own businesses went straight for “Contact Experts” or a phone number before looking at the Help button.


“The first thing I would do would be to look for a contact within QuickBooks customer service. I would be looking for a phone number… I want answers right now.”


These users eventually opened the panel. But they arrived pre-skeptical, and nothing in Welcome’s opening state gave them a reason to stay.

Topic tiles read as a knowledge base, not contextual intelligence

Five of eight users correctly identified the panel as an AI-powered experience. But their interpretation of the tiles told a different story.


“What I’m seeing here is not quite what I expected. I expected more of a list of different resources, but instead I’ve got kind of these FAQs.”


The problem was not that users disliked the tiles. It was that the tiles gave no signal that the system had noticed what the user was trying to do.


Without contextual awareness, users defaulted to minimal trust

“What this feature is just trying to do, in reality, is keep you from trying to talk to someone… I’m going to say more often than not, the solution you’re looking for is not going to be here.”


That perception, that the generic tiles meant the panel was deflecting rather than helping, was the core trust failure. It had nothing to do with the underlying AI and everything to do with what the opening state communicated about the system’s capability.

DESIGN DECISION

The redesigned Welcome state replaced the four generic tiles with a single recommended path, accompanied by a brief contextual rationale: “Based on where you are in QuickBooks, we think this might help.”


That single change addressed both failure modes simultaneously:

  • For skeptical, high-urgency users: a reason to pause before routing to “Contact Experts.”
  • For engaged, exploratory users: confirmation that the product already knew something about their situation.


The single-recommendation format was a deliberate constraint. Multiple tiles were being read as a menu, implying users needed to diagnose their own problem before the AI could help. One recommendation implied the AI had already done that work.

VALIDATION

A cross-functional ideation workshop stress-tested alternative directions before the team committed to a prototype, and the research findings fed into an affinity map. The storyboard concept was developed independently, then produced in partnership with the media department, which advised on script, format, and animation and handled production. The storyboard landed the concept with PM and content leads, particularly the rationalization-message mechanic, which needed buy-in before it could be built.


The storyboard showed a user arriving frustrated, the panel responding with a contextually specific recommendation, and the user completing the task without leaving the product. That story, not the UI spec, was what landed the direction with leadership.

WHY THIS MATTERS

The failure mode here is not unique to QuickBooks. It shows up in any AI product where the intelligence is real but the surface does not demonstrate it. Users cannot see the model. They can only see what the interface tells them about it.


If the opening state of an AI feature looks like a static FAQ list, users will treat it like one, regardless of what is actually running underneath.


The diagnostic question is not “do users understand this is AI?” It is “does the interface give users any evidence that the AI knows something about them, right now?” If the answer is no, you have a trust gap, and trust gaps are where abandonment lives.