7. Data Fairness and Quality

Use high-quality and unbiased data to maximize HCAI outcomes for human users.

This document is still in the early stages of drafting and is incomplete. To contribute, please visit the Othello repository and send feedback to our authoring team.

AI models typically learn patterns and correlations from training data in order to produce outputs that closely match the intended results. When that data is of poor quality, or favors certain input characteristics over others, the resulting outputs can cause unintended negative outcomes for individuals. HCAI systems should use data that is accurate, complete, and representative of all users. Designers play a key role in identifying problem spaces that are prone to biased outcomes and in prioritizing the fairness of the data used in AI solutions.
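
To make auditing data for completeness and representativeness concrete, the sketch below uses Python with pandas to report missing-value shares, per-group record counts, and per-group positive-label rates on a small illustrative dataset. The column names "group" and "approved" and the sample values are assumptions for illustration only; they are not part of Othello.

# Minimal pre-training data audit sketch (illustrative, not an Othello API).
import pandas as pd

# Hypothetical tabular training data; real data would be loaded instead.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "C", None],
    "income":   [52_000, 61_000, 48_000, None, 75_000, 80_000, 39_000, 45_000],
    "approved": [1, 1, 0, 1, 1, 1, 0, 0],
})

# Completeness: share of missing values in each column.
print("Missing value share per column:")
print(data.isna().mean(), end="\n\n")

# Representativeness: how many records each group contributes.
print("Records per group:")
print(data["group"].value_counts(dropna=False), end="\n\n")

# Simple fairness signal: positive-label rate per group. Large gaps suggest
# the data favors some groups over others and deserves closer review.
print("Positive-label rate per group:")
print(data.groupby("group", dropna=False)["approved"].mean())

Gaps in the per-group counts or label rates do not prove unfairness on their own, but they are an inexpensive early signal that the dataset may not represent all users equally.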