Guidance for Building HCAI
Guiding principles for building human-centered AI
Artificial intelligence presents many opportunities for societal transformation, with countless applications ranging from the automation of complex, risk-sensitive processes to intelligence that powers benefits for humanity. There is, however, far greater emphasis on the technical advancement of the technology, and far less discourse on its human element.
The following guidance on building Human-Centered AI (HCAI) has been collated from numerous sources and synthesized around a single viewpoint: AI should always benefit human users and stakeholders.
Problem statements
Alongside several recent advancements in AI, new and unique challenges have arisen that are still being addressed today. Akshay Kore's book "Designing Human-Centered AI Experiences" gives us much better insight into these problems:
First is the control problem: how do we build AI that is beneficial to us? How can we recognize the cases when it is not working in our favor? What tools or techniques can we use to make sure that AI does not harm us?
The second is building trust. For AI to be helpful, its users need to trust it. We need to make sure that users of AI can trust its results and make appropriate choices. For example, most of an AI system's results are probabilistic. When it predicts a donut in an image, it is never 100% certain; it could be 99.9% certain, but never 100%. There is always a margin for error. (A brief sketch of surfacing this uncertainty to users follows these problem statements.)
The third significant problem is explainability. Many AI systems are notorious for operating as a black box: the reasons for their suggestions are unknown or difficult to explain. The problem of explainability deals with providing appropriate information or explanations for AI's results so that its users can make informed judgments.
The fourth is ethics, a critical ingredient for designing beneficial AI products. Our societies are biased, and AI models often reflect this underlying bias, harming users or leading to unintended consequences. AI ethics focuses on formulating values, principles, and techniques to guide moral conduct when building or deploying AI.
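To make the trust problem concrete, here is a minimal sketch of how a system might word a probabilistic result so users can calibrate their trust. The function name, wording, and the 0.9 confidence threshold are hypothetical illustrations, not drawn from Kore's book:

```python
# A sketch of surfacing model uncertainty to the user instead of
# presenting a guess as fact. Labels, wording, and the threshold
# are hypothetical.

def describe_prediction(label: str, confidence: float,
                        high_confidence: float = 0.9) -> str:
    """Turn a probabilistic result into trust-appropriate wording."""
    percent = confidence * 100
    if confidence >= high_confidence:
        return f"This looks like a {label} ({percent:.1f}% confident)."
    return f"This might be a {label}, but I'm only {percent:.1f}% sure."

print(describe_prediction("donut", 0.999))  # high confidence, still never 100%
print(describe_prediction("donut", 0.62))   # hedged wording for lower confidence
```

The design choice here is that the interface never claims certainty, mirroring the point above: even a 99.9% prediction carries a margin for error that the user deserves to see.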
HCAI guidance components
This section presents ten guiding components essential for best practices in designing and implementing AI solutions.
1. Value Alignment
Human-centered design (HCD) applied to AI follows the same path as any other HCD process for product development: understanding which challenges deeply frustrate and motivate human users, and validating a solution that accurately addresses those needs.
2. Mental Models for Capability
Users form mental models of the tools they interact with and, in turn, gain trust in those tools through a shared understanding. As the landscape of what AI technologies can deliver keeps evolving, it is imperative for HCAI systems to set expectations with the user about what the AI solution can and cannot confidently do, so that the solution promotes appropriate levels of trust, as in the sketch below.
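One way to operationalize expectation-setting is a "capability manifest" that the product surfaces during onboarding. This is a hypothetical sketch; the capabilities and message format are invented for illustration:

```python
# A hypothetical capability manifest an HCAI product could surface
# during onboarding to shape the user's mental model up front.

CAPABILITIES = {
    "summarize documents": "supported",
    "translate between languages": "supported",
    "give legal or medical advice": "unsupported",
    "guarantee factual accuracy": "unsupported",
}

def onboarding_message() -> str:
    """Render what the assistant can and cannot confidently do."""
    can = [c for c, s in CAPABILITIES.items() if s == "supported"]
    cannot = [c for c, s in CAPABILITIES.items() if s == "unsupported"]
    return ("I can: " + "; ".join(can) + ".\n"
            "I cannot: " + "; ".join(cannot) + ".")

print(onboarding_message())
```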
3. Human Agency and Oversight
While the agency of an AI solution can be placed on a linear 0-5 scale of automation, AI is not always accurate, valuable, or necessary in a human's given context. HCAI systems must strike the right balance between moments when an AI model's capabilities shine and moments when the user needs finer control and input options to intervene, override, or guide model behavior.
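A common way to implement this balance is a human-in-the-loop gate: the system acts on its own only when confidence is high, and otherwise hands control back to the user. The thresholds and action names below are hypothetical, a sketch of the pattern rather than a definitive implementation:

```python
# A sketch of confidence-gated human oversight: auto-apply only when
# the model is highly confident; otherwise the user reviews or decides.

from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str
    confidence: float

def route(suggestion: Suggestion,
          auto_threshold: float = 0.95,
          suggest_threshold: float = 0.6) -> str:
    if suggestion.confidence >= auto_threshold:
        return f"auto-apply: {suggestion.action} (user can undo)"
    if suggestion.confidence >= suggest_threshold:
        return f"suggest: {suggestion.action} (user confirms or edits)"
    return "defer: ask the user to decide; offer manual controls"

print(route(Suggestion("file expense report", 0.97)))
print(route(Suggestion("flag duplicate photos", 0.72)))
print(route(Suggestion("send email draft", 0.41)))
```

Note that even the high-confidence path preserves agency: the user can always undo what the model did on their behalf.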
4. Transparency and Explainability
Many analogies describe AI as a "black box" with little to no indication of how it makes decisions. The reasoning or decision logic of HCAI systems should be accessible and intelligible to end users and stakeholders so they can understand how a decision or output was made.
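One simple form this can take is returning the factors that most influenced a prediction alongside the prediction itself. The feature names and weights below are invented for illustration; they are not from a real model:

```python
# A sketch of exposing decision logic: report the top contributing
# factors in plain language. Feature weights here are hypothetical.

def explain(prediction: str, feature_weights: dict, top_n: int = 3) -> str:
    """List the features with the largest influence on the prediction."""
    top = sorted(feature_weights.items(), key=lambda kv: abs(kv[1]),
                 reverse=True)[:top_n]
    reasons = ", ".join(f"{name} ({weight:+.2f})" for name, weight in top)
    return f"Predicted '{prediction}' mainly because of: {reasons}"

print(explain("loan denied", {
    "debt-to-income ratio": 0.61,
    "recent missed payments": 0.45,
    "credit history length": -0.12,
    "zip code": 0.02,  # a factor worth auditing for fairness (see below)
}))
```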
5. Safety Mechanisms
In scenarios where AI acts with greater agency and non-determinism cannot be prevented, risks to users and others can lead to unforeseen negative impacts. While there are methods to learn about, prepare for, and monitor potential risks during development, HCAI systems should also have context-specific pathways built in that steer the user away from errors and failures.
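One such pathway is a guardrail that validates model output before it reaches the user and degrades gracefully when validation fails. The validator, banned phrases, and fallback message below are hypothetical stand-ins:

```python
# A sketch of a built-in failure pathway: check the output against
# guardrails, and fall back to a safe default when a check trips.

def safe_respond(model_output: str, validators, fallback: str) -> str:
    """Run guardrail checks; steer the user to a safe path on failure."""
    for check in validators:
        ok, reason = check(model_output)
        if not ok:
            # Record the trip for later review (see "Error and Failure
            # Accountability" below), then degrade gracefully.
            print(f"guardrail tripped: {reason}")
            return fallback
    return model_output

def no_overconfident_claims(text: str):
    # Hypothetical check: block outputs that assert certainty.
    banned = ("guaranteed", "100% certain")
    hit = next((b for b in banned if b in text.lower()), None)
    return (hit is None, f"overconfident phrasing: {hit!r}")

print(safe_respond("This treatment is guaranteed to work.",
                   [no_overconfident_claims],
                   "I can't verify that claim. Here are vetted sources instead."))
```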
6. Adaptability
AI models are well known for their ability to optimize their outputs through reinforcement learning techniques. As new user needs, contexts, and goals emerge, HCAI systems need methods and interfaces that support adaptive behavior when the user requires it. (Conversely, users will develop new mental models as they learn about these emerging capabilities, and expectation-setting should continue alongside them.)
7. Data Fairness and Quality
AI models typically leverage training data to learn patterns or correlations in order to produce an output that closely matches the intended one. When the data is of poor quality or favors certain input characteristics over others, the output can lead to unintended negative outcomes for individuals. HCAI systems should use data that is accurate, complete, and representative of all users and outcomes. Designers play a key role in identifying problem spaces that may suffer from biased outcomes and in prioritizing the fairness of the data used in AI solutions.
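A basic representation audit, run before training, is one concrete check designers can ask for. The data, grouping key, and 10% threshold below are hypothetical:

```python
# A sketch of auditing whether a training set represents user groups,
# using an invented "age_band" field and a hypothetical minimum share.

from collections import Counter

def representation_report(records, group_key: str, min_share: float = 0.1):
    """Flag groups whose share of the data falls below a minimum."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: {"share": n / total, "underrepresented": n / total < min_share}
            for g, n in counts.items()}

data = ([{"age_band": "18-29"}] * 70 + [{"age_band": "30-49"}] * 25
        + [{"age_band": "65+"}] * 5)
for group, stats in representation_report(data, "age_band").items():
    print(group, stats)  # "65+" is flagged at a 5% share
```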
8. User Privacy and Security
Data collected from a user in an HCAI interaction must be handled carefully, ensuring that it is stored securely and protected from misuse by other parties. The use of this data, whether for personalization or for model fine-tuning, must also be disclosed in compliance with modern software privacy and security standards.
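One pattern that supports such disclosure is purpose-bound data handling: each piece of user data carries the purposes the user consented to, and any other use is refused. The record shape and purpose names below are hypothetical:

```python
# A sketch of purpose-bound data use: undisclosed purposes are refused.

from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    data: str
    consented_purposes: set = field(default_factory=set)

def use_data(record: UserRecord, purpose: str) -> str:
    if purpose not in record.consented_purposes:
        return f"refused: user did not consent to '{purpose}'"
    return f"ok: using data for '{purpose}'"

record = UserRecord("u123", "chat history", {"personalization"})
print(use_data(record, "personalization"))    # ok
print(use_data(record, "model fine-tuning"))  # refused until disclosed and consented
```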
9. Error and Failure Accountability
AI technology is just as likely as humans to make mistakes. Those mistakes should be communicated to users and recorded for future review and improvement. HCAI systems must weigh how human users and stakeholders are informed of errors and how failure paths can be audited effectively to create accountability structures.
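An audit trail is the usual mechanism behind this: every flagged failure is recorded with enough context to reconstruct the failure path later. The fields and in-memory storage below are hypothetical stand-ins for a real logging pipeline:

```python
# A sketch of an audit trail for AI mistakes. Fields and the in-memory
# list are hypothetical; a real system would use durable storage.

import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def record_failure(model_version: str, user_input: str,
                   output: str, user_report: str) -> None:
    """Append one reviewable failure record to the audit trail."""
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": user_input,
        "output": output,
        "user_report": user_report,
    })

record_failure("v2.3", "photo of a bagel", "predicted: donut",
               "misclassified my bagel")
print(json.dumps(AUDIT_LOG, indent=2))
```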
10. Accessible Experiences
While many AI solutions are novel in their ability to understand intent through natural language, they can still fall short of being accessible to all users. Designers should consider the types of disabilities and impairments most closely tied to software and hardware use, and account for the needs of those users in HCAI design.
The bottom line: good AI is beneficial AI
AI-powered applications, products, and services will be built by humans and should ultimately benefit humans. These solutions require human-centered design to advocate for human users and stakeholders.
Building AI that serves itself invites controversial viewpoints: a mad scientist may see AI as our only evolutionary path beyond Homo sapiens, while a logistician may foresee economic and societal collapse from AI replacing humans. Design practitioners have signed up for a role in which any product, regardless of technology, must have value and benefit for a human user, and we should practice as such.