In this paper, we focus on mitigating the harm caused by a biased machine learning system that offers better outcomes (e.g., loans, job interviews) to some groups than to others. We show that output bias can be controlled naturally in probabilistic models by introducing a latent target output.
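To make the idea of a latent target output concrete, the following is a minimal sketch, not the paper's actual model: it assumes a toy setting in which the observed label y is a biased observation of a latent fair target t, with a hypothetical group-dependent corruption rate, and shows how inference over t recovers a debiased target.

```python
# Toy sketch of a latent-target model (assumed bias mechanism, for illustration only).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)          # protected attribute a in {0, 1}
t = rng.binomial(1, 0.5, size=n)            # latent fair target, independent of a

# Assumed bias mechanism: true positives in group 0 are often recorded as negatives.
flip_pos = np.where(group == 0, 0.3, 0.05)  # P(y = 0 | t = 1, a)
y = np.where(t == 1, rng.binomial(1, 1 - flip_pos), 0)

# Posterior over the latent target given an observed negative label and the group,
# via Bayes' rule with prior P(t = 1) = 0.5.
p_t1 = 0.5
p_y0_given_t1 = flip_pos
p_y0_given_t0 = 1.0
post_t1_given_y0 = (p_y0_given_t1 * p_t1) / (
    p_y0_given_t1 * p_t1 + p_y0_given_t0 * (1 - p_t1)
)

# The inferred latent target corrects for the group-dependent corruption of y.
print("P(t=1 | y=0) by group:",
      post_t1_given_y0[group == 0].mean().round(3),
      post_t1_given_y0[group == 1].mean().round(3))
```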
Our choices about which movies to watch or novels to read can be influenced by suggestions from machine learning (ML)-based recommender systems. However, there are important scenarios in which ML systems fall short. Each of the following scenarios involves training an ML system to deliver a service, but in each case an important constraint must be imposed on how the system operates.