2. Mental Models for Capability
Learn what matters to human users, and build to accurately solve for their needs.
This document is still in the early stages of drafting and is incomplete. To contribute, please visit the Othello repository and send feedback to our authoring team.
Mental models are a person's internal representations of external reality, including ideas, people, objects, and systems. They are a mechanism for representing the world in simpler forms, compressing what would otherwise be an overwhelming load of complex information to process literally.
AI-powered products, like any other products, carry associated mental models across their diverse pool of users. To build reliable, safe, and trustworthy systems, HCAI solutions should approach this set of representations deliberately, either by (A) working within the existing mental models users hold about the system, or (B) introducing users to new mental models.
Working within existing mental models and expectations of AI
Existing mental models can serve as a bridge between other experiences and yours. When building a solution, product and engineering teams typically consider competitors that deliver the same or similar value to users. Users' emotions, behaviors, and actions with those tools inform the understanding they bring to new ones. This is immensely valuable when building emerging AI products that align the experience with user expectations.
How do we identify these existing models?
We recommend a few methods for discovering and identifying mental models; they are complementary, and which to use depends on your needs.
Searching across scholarly literature: Institutions have produced an abundance of research on human mental models across different modalities of technology, including recent advancements. We have collated a few examples below through the lens of HCAI.
Market research: Product teams building AI systems in new problem spaces should consider the unique forces that may affect their product's resilience with users. A model that has framed our thinking about these factors is adapted from Strategyzer's Business Model Space; it incorporates cultural and societal trends as macro-forces that inform how to evolve an AI product or service for a changing technology space.
User research: If a user or customer base exists, research activities become even more valuable when moving into new markets or capabilities. Collecting and synthesizing data about users' current workflows, the competitors they use, and their interactions with the world can paint a picture of both existing and emerging representations of interacting with an AI product. PAIR provides a useful resource for mapping mental models: the Mental Models worksheet.
Example: Recommender systems
Social media, e-commerce, streaming, and other platforms typically provide personalized recommendations based on a user's past behavior as well as their attributes (features). New products and services that use recommender systems aim to deliver content that meets the real-time preferences and needs of their users, and each user brings their own understanding of why recommended content is shown to them.
A good example: Amazon has evolved its phrasing of product recommendations over time. When users see recommendations such as "Customers who bought this item also bought...", they may interpret the feature as signaling that "people who are similar to me like these items." Likewise, they may read "Top picks for you based on your history" as a system learning their preferences, a trustworthy "shopping partner." https://arxiv.org/pdf/2109.00982
A bad example: Recommender systems are often undermined when advertisement- or marketing-driven content is positioned as "recommended" while ignoring the user's preferences or history. Users recalibrate their expectations and lose trust in a recommender system that serves up content irrelevant to them. https://interactivesystems.info/system/pdfs/958/original/UMAP-20-Exploring.pdf?1597843041
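To make the "also bought" pattern concrete, below is a minimal sketch of item-to-item co-occurrence counting, one common basis for this kind of recommendation. The data shapes and function names are illustrative assumptions, not taken from any specific platform.

```typescript
// Illustrative sketch: "customers who bought this item also bought" via
// item-to-item co-occurrence counts over purchase histories.

type Purchase = { customerId: string; itemId: string };

function alsoBought(purchases: Purchase[], targetItem: string, topK = 3): string[] {
  // Group each customer's purchases into a set of items.
  const byCustomer = new Map<string, Set<string>>();
  for (const { customerId, itemId } of purchases) {
    if (!byCustomer.has(customerId)) byCustomer.set(customerId, new Set());
    byCustomer.get(customerId)!.add(itemId);
  }

  // Count how often other items co-occur with the target item.
  const coCounts = new Map<string, number>();
  for (const items of byCustomer.values()) {
    if (!items.has(targetItem)) continue;
    for (const item of items) {
      if (item === targetItem) continue;
      coCounts.set(item, (coCounts.get(item) ?? 0) + 1);
    }
  }

  // Return the most frequently co-purchased items.
  return [...coCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topK)
    .map(([item]) => item);
}

// Example: customers who bought "kettle" also bought...
const history: Purchase[] = [
  { customerId: "a", itemId: "kettle" },
  { customerId: "a", itemId: "tea" },
  { customerId: "b", itemId: "kettle" },
  { customerId: "b", itemId: "tea" },
  { customerId: "b", itemId: "mug" },
];
console.log(alsoBought(history, "kettle")); // ["tea", "mug"]
```

Production systems typically normalize these counts and blend in user features; the sketch only captures the raw "also bought" signal that the Amazon phrasing exposes to users.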
Example: Generative AI
Generative AI has advanced rapidly since 2022, and HCI research has barely kept pace in understanding the mental models users form when engaging with these new AI systems.
Some of the most distinctive research behind the diverse mental models of GenAI has emerged from studying its use by children. Individuals may liken an LLM-based product like ChatGPT to a "unique search tool" that can answer any of their questions, or to a "friend" who can reflect emotion and help them navigate everyday situations. https://arxiv.org/pdf/2405.13081
Solutions that leverage GenAI in its current state must balance the technology's capabilities with its shortcomings: GenAI is well known for "hallucinating" content that is non-factual or inaccurate, and for limited applicability to complex or data-scarce tasks. One approach many product teams take to address this challenge is displaying disclosures on generative outputs. While this addresses the surface-level issue of disclaiming poor outputs, underlying issues remain in how users may still perceive and use the technology despite its errors and ethical lapses.
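As an illustration of the disclosure pattern, here is a minimal sketch of attaching a standing disclaimer to every generative output before it reaches the user. The `generateText` function and the disclosure wording are hypothetical placeholders, not a real API.

```typescript
// Illustrative sketch: pair every generative output with a disclosure,
// so the UI can render them together.

type GenerativeResponse = { text: string; disclosure: string };

// Hypothetical stand-in for a call to an LLM backend.
async function generateText(prompt: string): Promise<string> {
  return `Generated answer for: ${prompt}`;
}

async function respondWithDisclosure(prompt: string): Promise<GenerativeResponse> {
  const text = await generateText(prompt);
  return {
    text,
    // Shown alongside every output, regardless of model confidence.
    disclosure: "AI-generated content may be inaccurate. Verify important details.",
  };
}
```

Note that this pattern only labels the output; it does not change how often the model errs or how users weigh the label, which is exactly the underlying issue described above.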
Introducing new mental models of AI to users
User mental models form from the first interaction with a product or service and grow throughout its lifecycle. Bringing new AI-powered experiences to users requires delicate handling, in which the functionality (capabilities) of the AI technology is clearly communicated and aligned with users throughout their journey.
For any given AI solution, how much to introduce depends on the level of detail a user needs to understand about the underlying technology to use the product effectively.
Below is a reference for different categories of AI users, with example recommendations for introducing new AI capabilities to each. These won't always be the most practical applications; the table serves to address some of the holistic needs observed in each category. A configuration sketch for routing users to these onboarding patterns follows the table.
| User Category | Description | Example recommendation for AI onboarding |
| --- | --- | --- |
| Non-experts | Users with little to no foundational AI expertise. | Create carousel messaging that is clear, simple, and accessible, placing a quick statement that helps these users understand the value of the AI solution and how to engage with the system. |
| Users with basic AI literacy | Users with a general understanding of AI who may have experimented with other solutions in the past. | Differentiate how the AI solution performs via clear and simple product marketing material. This material can draw comparisons to solutions these users use every day, e.g., "Cursor for realtors." |
| Power users | Users who are fairly experienced with AI and seek customization and advanced features. | An annotated walkthrough of AI functionality can show these users how to refine their inputs and maximize the output the AI provides. |
| Specialists | Users with a strong AI background who need to understand the solution's configuration. | Onboarding may be a background process for these users, pointing them via tooltips to the functional levers they expect from other products. |
| Enterprise users | Users who may vary in AI competency but are looking for transparency and explainability in the data used. | Onboarding messages that communicate the training data used, input/output constraints, and compliance with regulatory standards may help these users build confidence in the product. |
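As referenced above, here is a minimal sketch of routing users to an onboarding pattern based on the table's categories. The category names and pattern identifiers are illustrative assumptions, not a prescribed taxonomy.

```typescript
// Illustrative sketch: map each user category from the table to a default
// onboarding pattern.

type UserCategory =
  | "non-expert"
  | "basic-literacy"
  | "power-user"
  | "specialist"
  | "enterprise";

type OnboardingPattern =
  | "carousel"
  | "comparison-material"
  | "annotated-walkthrough"
  | "tooltips"
  | "transparency-notes";

const onboardingByCategory: Record<UserCategory, OnboardingPattern> = {
  "non-expert": "carousel",                 // clear, simple value statement
  "basic-literacy": "comparison-material",  // relate to tools they already use
  "power-user": "annotated-walkthrough",    // tour of inputs and outputs
  "specialist": "tooltips",                 // point to expected functional levers
  "enterprise": "transparency-notes",       // data, constraints, compliance
};

function pickOnboarding(category: UserCategory): OnboardingPattern {
  return onboardingByCategory[category];
}

console.log(pickOnboarding("non-expert")); // "carousel"
```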
A user's first interaction is a crucial step in building a mental model of the product and setting them up for success. Onboarding establishes the relationship between user and product, and can be delivered through many patterns, from annotations and tooltips to dialogs and full-page carousels. Below, we cover a few practical patterns for introducing AI to users and starting the development of appropriate mental models.