Creating a Design Strategy for Human-Centered AI
Reach organizational maturity in building beneficial AI through new strategies
AI technology capabilities are changing every day, and in turn changing humans' expectations of what the technology can and cannot do. Human-centered AI (HCAI) is an emerging discipline that combines research on human interaction with new AI algorithms and evolving UX design methodologies, with the core intent of creating AI systems that amplify and augment human abilities rather than replace them.
There are countless frameworks, principles, and definitions for creating HCAI. Our synthesis in this section is derived from existing research by Ben Shneiderman and the Human-Computer Interaction Lab at the University of Maryland. Below are our recommendations for building a strategy to understand and create beneficial HCAI systems:
Consider the balance of agency to build reliable, safe, and trustworthy AI
Agency, or the capability to fulfill a purpose, is often described in AI contexts as a one-dimensional framework in which humans cede control to a computer. This spectrum is misleading: it limits design thinking for a whole system to granting agency either to the user or to the computer, when more thought can be placed into how a system's features sit at varying levels of shared agency between both parties.
A proposed approach to designing for human-centered AI is a two-dimensional framework that maps computer automation against human control. The intent is to define systems that delegate full control to the computer, enable mastery by a human user, or harmonize the relationship between the two, with the last being the core driver of reliable, safe, and trustworthy systems.
/ image of hcai 2x2 matrix
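To make the framework concrete, below is a minimal sketch of how a feature could be scored on the two axes and mapped to a region of the matrix. The `Feature` fields, the 0.5 cutoffs, and the spell-checker example are illustrative assumptions, not part of the published framework.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """A system feature positioned on the two HCAI axes (both 0.0 to 1.0)."""
    name: str
    human_control: float        # 0.0 = no human control, 1.0 = full human mastery
    computer_automation: float  # 0.0 = fully manual, 1.0 = fully automated

def quadrant(f: Feature) -> str:
    """Map a feature onto a region of the 2x2 matrix (0.5 is an assumed cutoff)."""
    if f.human_control >= 0.5 and f.computer_automation >= 0.5:
        return "high control + high automation: reliable, safe, trustworthy"
    if f.human_control >= 0.5:
        return "human mastery: high control, low automation"
    if f.computer_automation >= 0.5:
        return "computer autonomy: low control, high automation"
    return "low control, low automation"

# Example: a spell checker automates suggestions while the writer keeps control.
print(quadrant(Feature("spell checker", human_control=0.9, computer_automation=0.8)))
```

The point of the exercise is not the numbers but the conversation: scoring each feature on both axes forces a team to ask where shared agency is possible, rather than choosing a single point on a one-way spectrum.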
What defines a reliable, safe, and trustworthy system?
Reliability, safety, and trustworthiness each have their own defined criteria, but together they promote the creation of mature and well-understood systems for complex tasks.
- Reliable AI systems produce consistent and expected responses, can withstand errors or disruptions, and behave in ways a human user can anticipate. Reliability lends itself to explainability of a system's decisions, which is supported by best practices today, including logical software workflows, explainable UI, validation testing, and audit trails (see the sketch after this list).
- Safe AI systems are built to minimize the risk of causing harm to users or environments. They operate within predefined limits and boundaries that control their behavior, but can also engage risk assessment and mitigation strategies when presented with an unintended scenario. Safety is cultivated by the builders of AI solutions and reinforced by regulatory and legal accountability.
- Trustworthy AI systems shape whether humans perceive a system favorably and choose to interact with it. They are built on transparency of processes and decision-making, respect for the security and privacy of human users and environments, and adherence to the ethical principles and values that guide their users. Trustworthiness can be contributed to and attributed by large groups, but it ultimately comes down to individual assessment.
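As a rough illustration of how some of these criteria can surface in software, the sketch below wraps a hypothetical `model_predict` callable with a predefined confidence boundary (a safety limit) and a logged audit entry (a reliability practice). Every name and threshold here is an assumption for the example, not a prescribed implementation.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

CONFIDENCE_FLOOR = 0.7  # predefined boundary: below this, defer to the human

def predict_with_guardrails(model_predict, user_input: str):
    """Run a prediction, record an audit entry, and enforce a safety boundary."""
    label, confidence = model_predict(user_input)
    audit_log.info({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": user_input,
        "output": label,
        "confidence": confidence,
    })  # audit trail: every decision is recorded for later review
    if confidence < CONFIDENCE_FLOOR:
        # Safety limit: hand the decision back to the human user.
        return None, "Confidence below threshold; please review manually."
    return label, None
```

Trustworthiness is harder to encode directly; practices like the audit trail above support it indirectly by making the system's behavior transparent and reviewable.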
Shift away from emulating humans and toward empowering them
A common trend in generative AI advancements in the 2020s is driven by the "emulation goal": applying the understanding of human perceptual, cognitive, and motor abilities to building computers that perform as well as or better than humans. This goal drives discussion toward physical AI (robots) and the technological milestone of artificial general intelligence (AGI). It is a scientifically ambitious goal aligned with the controversial idea of artificial intelligence "sentience", e.g., machines holding responsibility and legal protection of their rights, developing their own moral and ethical beliefs, and coexisting with the human race.
"Humans have a fundamental tendency to create, and the ultimate creation is another human."
The near future holds many challenges to solve before we reach such a dilemma. Researchers and designers who work on today's practical use cases of computer technology should instead be guided by the "application goal": serving the motivations of human users and stakeholders, and recognizing that many humans need controllable, predictable, and comprehensible solutions. Humans are best empowered when technologies support their abilities, raise their self-efficacy, and enable their creativity.
/ image of humans
As with the balance of agency, HCI practitioners should recognize that working toward emulation through AI does not guarantee the right application. Building well-designed HCAI solutions requires an empirical understanding of human users and of how best to balance their needs through software and hardware.
Establish AI governance structures to bridge the gap from ethics to practice
The speed at which AI is moving, the lack of government regulation of the technology, and the discourse about its geopolitical importance are some of the many factors pushing AI systems toward a point of no return. News coverage of AI advancements has criticized the use of copyrighted work as training data, the dangers of job displacement and inequitable applications of AI, and the dilemma of building emulated AI that may target humans as a threat to its own existence. Many people remain frozen in shock, fearful bystanders to technology advancements that only deepen these negative impacts.
AI as a technology catalyst has immense value across domains when driven toward the benefit of humanity. Team, organization, and industry leaders must take it upon themselves to define their own sets of definitions, principles, and frameworks for HCAI. These collections serve as governance structures that guardrail how AI is built, managed, and monitored, with the ultimate goal of keeping solutions human-centered (a sketch of one such structure in code follows the list below).
/ image of governance
- Teams start by building reliable AI systems through design and technical best practices.
- Organizations build maturity through a safety culture, effectively managing their workforce to orient toward the safety of AI applications for humans.
- Industries are the largest realm of influence and can form independent oversight groups and committees tasked with holding individuals and organizations accountable for creating ethically aligned products and services.
- Regulation is often controversial because it can hinder innovation, but legislators create it when necessary to serve the public interest.
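One lightweight way to start is to encode the governance structure itself as reviewable data, so principles and practices can be versioned and audited like any other artifact. The sketch below is purely illustrative; every level, principle, practice, and oversight body named in it is an assumption for the example.

```python
from dataclasses import dataclass

@dataclass
class GovernancePolicy:
    level: str             # "team", "organization", "industry", or "regulation"
    principles: list[str]  # what this level commits to
    practices: list[str]   # how the commitment is carried out
    oversight: str         # who holds this level accountable

policies = [
    GovernancePolicy(
        level="team",
        principles=["reliability"],
        practices=["validation testing", "audit trails", "explainable UI"],
        oversight="peer design and code review",
    ),
    GovernancePolicy(
        level="organization",
        principles=["safety culture"],
        practices=["incident reporting", "workforce safety training"],
        oversight="internal safety board",
    ),
]

for p in policies:
    print(f"{p.level}: {', '.join(p.practices)} (overseen by {p.oversight})")
```

Even a small structure like this makes the gap between stated ethics and day-to-day practice visible, which is exactly the gap governance structures exist to close.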