Machine Learning And Human Bias: An Uneasy Pair

Artificial Intelligence, Culture, Philosophy, and Humanity, Digital Ethics, Robotics, Rudy de Waele

Very insightful article by Jason Baldridge on the topic of “What We Teach Machines”.

“Humans are biased, and the biases we encode into machines are then scaled and automated,” “we need to have an uncomfortable discussion about what we teach machines and how we use the output,” and “here is a huge opportunity to help rather than harm people” are just a few quotes from the article.

Most people have an intuitive understanding of categories concerning race, religion and gender, yet when asked to define them precisely, they quickly find themselves hard-pressed to pin them down. Human beings can’t agree objectively on what race a given person is. As Sen and Wasow (2014) argue, race is a social construct based on a mixture of both mutable and immutable traits including skin color, religion, location and diet. As a result, the definition of who falls into which racial category varies over time (e.g. Italians were once considered to be black in the American South), and a given individual may identify with one race at one time and with another race a decade later. This inability to precisely define a concept such as race represents a risk for personal analytics. Any program designed to predict, manipulate and display racial categories must operationalize them both for internal processing and for human consumption. Machine learning is one of the most effective frameworks for doing so because machine learning programs learn from human-provided examples rather than explicit rules and heuristics.
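That last point — that machine learning systems learn whatever patterns their human-provided examples encode, rather than following explicit rules — can be illustrated with a toy sketch. Everything below is invented for illustration: the feature (a zip code used as a proxy), the labels, and the data are hypothetical, and a real personal-analytics system would be far more complex. The point is only that a model trained on biased human labels faithfully scales and automates that bias.

```python
from collections import Counter, defaultdict

def train(examples):
    """Learn, for each feature value, the majority label humans assigned to it."""
    by_feature = defaultdict(list)
    for feature, label in examples:
        by_feature[feature].append(label)
    return {f: Counter(labels).most_common(1)[0][0]
            for f, labels in by_feature.items()}

def predict(model, feature):
    # Unseen feature values get no learned label.
    return model.get(feature, "unknown")

# Hypothetical human-labeled data: labelers systematically associate
# zip code "10001" with approval and "10002" with rejection.
labeled = [
    ("10001", "approve"), ("10001", "approve"), ("10001", "reject"),
    ("10002", "reject"), ("10002", "reject"), ("10002", "approve"),
]

model = train(labeled)
# The model reproduces the labelers' pattern on every future input:
print(predict(model, "10001"))  # approve
print(predict(model, "10002"))  # reject
```

No rule about zip codes was ever written down; the association was learned entirely from the examples, which is exactly why the choice of what we teach machines matters.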

Read this must-read article on TechCrunch.

Posted by Rudy de Waele aka @mtrends /
