Bias creeps into algorithms in many ways: through the choices developers make, through training data that encodes historical inequities, through the biases of the people who label or otherwise shape that data, and even through the behavior of people who interact with an algorithm's output. An algorithm can also detect patterns that humans never noticed and apply those biased patterns in its analysis. Bias hurts everyone: the people who are discriminated against, and society as a whole when equal participation is limited. How do we know when an algorithm is biased, and what can we do to minimize the impact of that bias on people and society? What is fairness, and what can we do to build trust that an AI is fair?