Too Long; Didn't Read
Concerns about biases introduced by AI systems have grown significantly over the last few years. In most cases, the issue stems from the fact that many real-world applications rely on models trained on labeled data, and such models are only as good as the data they are trained on. If the underlying dataset is skewed (for example, over-representing one group or characteristic at the expense of others), the model is likely to pick up those biases and amplify them. In high-stakes settings such as college admissions, the implications can be profound.
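The mechanism above can be sketched with a toy example. The data and the "model" here are entirely hypothetical: a simple threshold classifier fit to training data dominated by one group inherits that group's calibration, so an equally qualified minority group ends up misclassified far more often.

```python
# Minimal sketch with synthetic, hypothetical data: each applicant has a
# true qualification q in (0, 1); the observed score x under-measures
# group B by a fixed shift, and the training set is 95% group A.

def applicants(n, shift):
    """n applicants: observed score x = q - shift, label = (q >= 0.5)."""
    return [((i + 0.5) / n - shift, (i + 0.5) / n >= 0.5) for i in range(n)]

# Training data skewed toward group A (no shift); group B is 5% of it.
train = applicants(950, 0.0) + applicants(50, 0.15)

# "Learn" a decision threshold: midpoint of the two class means.
pos = [x for x, y in train if y]
neg = [x for x, y in train if not y]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(group):
    return sum((x >= threshold) == y for x, y in group) / len(group)

# Evaluated on balanced groups of equal size, group B's accuracy drops:
# the threshold was fit almost entirely to group A's score distribution.
acc_a = accuracy(applicants(1000, 0.0))
acc_b = accuracy(applicants(1000, 0.15))
print(f"group A accuracy: {acc_a:.3f}")
print(f"group B accuracy: {acc_b:.3f}")
```

In this sketch the model is not malicious; it simply optimizes for the population it saw, which is exactly how an under-represented group's error rate quietly grows.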