9. Error and Failure Accountability
While errors can't be prevented, catastrophic failures can.
AI systems are as likely as humans to make mistakes. Those mistakes should be communicated to users and recorded for later review and improvement. HCAI systems must therefore consider how human users and stakeholders are informed of errors, and how failure paths can be audited effectively to create accountability structures.
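As a concrete illustration of "recording errors for later review," the sketch below shows one way an error could be written to an append-only audit log while a plain-language notice is surfaced to the affected user. It is a minimal, hypothetical example: the record fields, the JSON-lines log file, and the `record_error` helper are assumptions for illustration, not part of any specific HCAI framework.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class ErrorRecord:
    """One auditable entry describing an AI system error (hypothetical schema)."""
    error_id: str             # unique identifier so the failure path can be traced later
    component: str            # which model or subsystem produced the error
    description: str          # what went wrong, in terms a reviewer can act on
    user_facing_message: str  # what was (or should be) shown to the affected user
    timestamp: str            # when the error occurred, in UTC


def record_error(component: str, description: str, user_facing_message: str,
                 log_path: Path = Path("error_audit_log.jsonl")) -> ErrorRecord:
    """Append an error to a JSON-lines audit log and return the record."""
    record = ErrorRecord(
        error_id=str(uuid.uuid4()),
        component=component,
        description=description,
        user_facing_message=user_facing_message,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    with log_path.open("a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(record)) + "\n")
    return record


if __name__ == "__main__":
    # Record a hypothetical misclassification and show the message a user would see.
    entry = record_error(
        component="image-classifier-v2",
        description="Low-confidence prediction was returned as high confidence.",
        user_facing_message="This result may be inaccurate; a review has been logged.",
    )
    print(entry.user_facing_message, f"(audit id: {entry.error_id})")
```

Keeping the user-facing message alongside the internal description in the same record is one way to let auditors check not only what failed, but also whether the failure was communicated to the people affected.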