Researchers at King’s College London have discovered a major blind spot in driverless car AI. Dr. Jie Zhang of King’s Department of Informatics tested eight pedestrian detection systems used in driverless cars and found that they fail to identify children and darker-skinned pedestrians far more often than adults and lighter-skinned pedestrians.
The researchers ran 8,000 images through the pedestrian detection systems and found that the systems identified adults roughly 20% more accurately than children. Detection accuracy for darker-skinned pedestrians was about 7.5% lower than for lighter-skinned pedestrians.
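To make that kind of comparison concrete, here is a minimal sketch of how a detection-rate gap between demographic groups can be computed. The records and the `detection_rate_by_group` helper are hypothetical illustrations, not the study’s actual pipeline or data.

```python
from collections import defaultdict

# Hypothetical detection results: each record notes a pedestrian's
# demographic group and whether the detector found them.
# (Illustrative numbers only; not the study's actual data.)
results = [
    {"group": "adult", "detected": True},
    {"group": "adult", "detected": True},
    {"group": "child", "detected": False},
    {"group": "child", "detected": True},
    # ... thousands more records in a real evaluation
]

def detection_rate_by_group(records):
    """Fraction of pedestrians detected, broken out per group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["detected"])
    return {group: hits[group] / totals[group] for group in totals}

rates = detection_rate_by_group(results)
# The fairness "gap" is simply the difference in detection rates,
# e.g. adults vs. children, or lighter- vs. darker-skinned pedestrians.
gap = rates["adult"] - rates["child"]
print(rates, f"gap={gap:.1%}")
```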
It’s All About the Data
All AI systems require training data to make decisions, and driverless car systems make numerous decisions: when to slow down, when to speed up, when to turn, and when to brake to avoid hitting a pedestrian.
Pedestrian detection systems rely on image data to distinguish people from other cars, traffic lights, and other road objects.
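As an illustration of what this looks like in practice, here is a minimal sketch using an off-the-shelf, COCO-pretrained detector from torchvision. This is a generic stand-in, not one of the eight systems from the study, and the image path is a placeholder.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf detector pre-trained on COCO; a stand-in for the
# specialized pedestrian detectors evaluated in the study.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("street_scene.jpg").convert("RGB"))

with torch.no_grad():
    prediction = model([image])[0]

# COCO class 1 is "person"; keep reasonably confident person boxes,
# discarding cars, traffic lights, and other road objects.
PERSON = 1
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if label.item() == PERSON and score.item() > 0.5:
        print(f"pedestrian at {box.tolist()} (confidence {score:.2f})")
```

Whether the model actually draws a box around every pedestrian depends entirely on who appeared in its training images.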
Even though we continually give AI more credit than it’s due, AI isn’t smart, and it definitely isn’t as perceptive as a human being. To detect any kind of human, an AI needs ample training data representing all sorts of humans: tall and short, adult and child, darker-skinned and lighter-skinned.
At this time, AI isn’t smart enough to know a human when it sees one; it needs as many human examples as humanly possible.
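One straightforward safeguard is to audit a training set’s annotations before training ever starts. Here is a minimal sketch, assuming each labeled pedestrian carries hypothetical age and skin-tone metadata (real datasets rarely ship with these labels, which is itself part of the problem):

```python
from collections import Counter

# Hypothetical annotation records for a pedestrian training set.
annotations = [
    {"image": "img_0001.jpg", "age": "adult", "skin_tone": "lighter"},
    {"image": "img_0002.jpg", "age": "child", "skin_tone": "darker"},
    {"image": "img_0003.jpg", "age": "adult", "skin_tone": "lighter"},
    # ... one record per labeled pedestrian
]

def audit(records, attribute):
    """Report how each demographic group is represented in the data."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{attribute}={group}: {n} ({n / total:.1%})")

audit(annotations, "age")        # flags e.g. an adult-heavy dataset
audit(annotations, "skin_tone")  # flags under-represented skin tones
```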
These systems are worse at detecting darker-skinned people and children because they weren’t adequately trained to recognize them.
And who exactly is doing the training?
More Representation = More Accuracy
These systems weren’t adequately trained because diversity in the training data was never a real consideration. Most AI development teams are predominantly white and male, and the resulting AIs reflect that lack of diversity.
If we truly want to approach an era of Artificial General Intelligence, we have to do better at making sure AI training data represents society at large.