3. Human Agency and Oversight
Fine-tune the balance of agency between human and computer.
This document is an early, incomplete draft. To contribute, please visit the Othello repository and send feedback to our authoring team.
While the agency of an AI solution is often rated on a linear 0-5 scale, AI is not always accurate, valuable, or necessary in a human's given context. HCAI systems must balance the situations where an AI model's capabilities shine against those where the user needs finer control and input options to intervene, override, or guide model behavior.
Agency, or the capability to fulfill a purpose, is often described in the context of AI as a computer's capacity to act independently, make decisions, and perform tasks within its environment. Agents are a clear example: AI systems that act autonomously to carry out complex workflows and deliver outcomes to the user.
While agentic AI may be colorful and flashy on the surface, AI use cases don't always satisfy the requirements needed to shift control away from the humans involved. If a task driven by AI is too inaccurate, too costly, or simply unnecessary compared with a human performing the same task, it is hard to justify the need for AI in that workflow.
Building HCAI adds a new dimension to this argument: solutions that harmonize the capabilities of humans with the capabilities of AI help build systems that are reliable, safe, and trustworthy.
A framework for understanding agency
The agency of AI is typically described as a linear spectrum: the leftmost endpoint represents full human control, the rightmost endpoint represents full computer control, and the points in between represent varying degrees of augmentation.
This has been argued to be a misleading framework: it limits design thinking to granting agency either to the user or to the computer, when more thought could be given to how a system's features sit at levels of shared agency between both parties.
Our approach to understanding the agency of a system is adopted from Ben Shneiderman's two-dimensional framework, which maps human control against computer automation.
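The two-dimensional idea can be sketched in code: instead of a single human-vs-computer slider, each feature is scored on two independent axes. This is a minimal illustrative sketch; the axis ranges, threshold, and quadrant labels are our assumptions for illustration, not terminology from Shneiderman's framework itself.

```python
from dataclasses import dataclass

@dataclass
class FeatureAgency:
    """Places one system feature on the 2-D human control x automation grid.

    Both axes are illustrative 0.0-1.0 scores (assumption, not part of the
    original framework).
    """
    human_control: float        # 0.0 (none) to 1.0 (full human control)
    computer_automation: float  # 0.0 (none) to 1.0 (full automation)

    def quadrant(self, threshold: float = 0.5) -> str:
        high_human = self.human_control >= threshold
        high_computer = self.computer_automation >= threshold
        if high_human and high_computer:
            return "reliable, safe, trustworthy"      # the design target
        if high_computer:
            return "risk of excessive automation"     # little human oversight
        if high_human:
            return "risk of excessive human control"  # little computer support
        return "low agency overall"

# Example: an autocomplete feature with strong automation and easy overrides
print(FeatureAgency(human_control=0.8, computer_automation=0.9).quadrant())
# → reliable, safe, trustworthy
```

The point of the two axes is that raising automation does not have to lower human control; a feature can score high on both, which is where the framework locates reliable, safe, and trustworthy systems.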