1. When an engineer’s implicit bias becomes a computer’s
Can the machine learning algorithms that underpin autonomous vehicles perpetuate, or even worsen, social and structural biases against people of color?
Maybe, according to a new paper (PDF) from the Georgia Institute of Technology, where researchers tested how accurately object detection systems (similar to those used in driverless cars) identify pedestrians with different skin tones.
The researchers paid human annotators to review a collection of 3,500 images of people and mark each photograph “LS” or “DS” to designate the subject’s light or dark skin.
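The core of such a study is a per-group comparison: once each image is labeled “LS” or “DS,” the detector’s hit rate can be computed separately for each group and the two rates compared. A minimal sketch of that comparison, with all function names and data purely illustrative (not taken from the paper):

```python
# Hypothetical sketch of the per-group accuracy comparison the study
# describes: given skin-tone labels ("LS" = light skin, "DS" = dark skin)
# and a detector's hit/miss outcome per image, compute the detection
# rate for each group. The data below is toy data, not the study's.

from collections import defaultdict

def detection_rate_by_group(results):
    """results: list of (skin_label, detected) pairs, where detected
    is True if the detector found the pedestrian in that image."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for label, detected in results:
        totals[label] += 1
        hits[label] += int(detected)
    return {label: hits[label] / totals[label] for label in totals}

# Toy example in which the detector misses dark-skinned pedestrians
# more often than light-skinned ones.
sample = ([("LS", True)] * 9 + [("LS", False)] * 1 +
          [("DS", True)] * 7 + [("DS", False)] * 3)

rates = detection_rate_by_group(sample)
print(rates)  # {'LS': 0.9, 'DS': 0.7}
```

A gap between the two rates, as in this toy output, is the kind of disparity the researchers were testing for.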