Towards Understanding and Mitigating Biases
- Jingyan Wang, CMU
- Time: 2019-12-23 14:00
- Host: Dr. Yuqing Kong
- Venue: Room 102, Courtyard No.5, Jingyuan
There are many problems in real life that involve collecting and aggregating evaluations from people, such as conference peer review and peer grading. In this talk, I describe multiple sources of bias that may arise in such problems and propose methods to mitigate them. (1) Human bias arises because the data collected from people are noisy and reflect people's calibration criteria and subjective opinions. In our work, we consider miscalibration, that is, different people using different scales, and we propose randomized algorithms that provably work under arbitrary miscalibration. (2) Estimation bias arises when algorithms and estimators yield different performance on different subgroups of the population. In our work, we analyze the statistical bias (defined as the expected value of the estimate minus the true value) of the maximum-likelihood estimator on pairwise-comparison data, and we propose a simple modification that significantly reduces this bias. (3) Policy bias arises when inappropriate policies induce misaligned incentives and undesirable behaviors. I briefly describe our outreach efforts in addressing the biases caused by gender and by alphabetical ordering of authors in scientific publications.
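To make the notion of statistical bias concrete, here is a minimal Monte Carlo sketch: it estimates the bias (expected estimate minus true value) of the maximum-likelihood estimator of a quality difference in a simple two-item Bradley-Terry model. The model, parameter values, and the clamping rule for degenerate outcomes are illustrative assumptions, not the estimator or setting analyzed in the talk.

```python
import math
import random

def simulate_mle_bias(theta=1.0, n_comparisons=10, n_trials=20000, seed=0):
    """Monte Carlo estimate of E[theta_hat] - theta for the MLE of a
    Bradley-Terry quality difference from pairwise comparisons.
    (Illustrative sketch only; not the estimator from the talk.)"""
    rng = random.Random(seed)
    p = 1.0 / (1.0 + math.exp(-theta))  # P(item 1 beats item 2)
    estimates = []
    for _ in range(n_trials):
        wins = sum(rng.random() < p for _ in range(n_comparisons))
        # Clamp to avoid an infinite logit when every outcome agrees
        # (an arbitrary regularization choice for this sketch).
        wins = min(max(wins, 1), n_comparisons - 1)
        p_hat = wins / n_comparisons
        estimates.append(math.log(p_hat / (1.0 - p_hat)))  # MLE of theta
    return sum(estimates) / n_trials - theta  # empirical bias

bias = simulate_mle_bias()
```

Even in this toy setting the empirical bias is nonzero at small sample sizes, which is the kind of systematic error that a modified estimator would aim to reduce.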
Jingyan Wang is a PhD student at Carnegie Mellon University, advised by Nihar B. Shah. Her research interests lie in understanding and mitigating biases in decision-making problems such as peer review and peer grading, using tools from statistics and machine learning. She is the recipient of the Best Student Paper Award at AAMAS 2019. She received a B.S. in Electrical Engineering and Computer Sciences with a minor in Mathematics from the University of California, Berkeley, in 2015.