The ethics of care and radical data science: an inhumane reduction of humanity down to what can be counted?


Os Keyes is a PhD student at the University of Washington and an inaugural Ada Lovelace Fellow who studies gender, data, technology, and control. She has a fantastic post this week that made me think: data science needs a more nuanced orientation toward what is measured and under which set of norms. She also introduces Dean Spade's term "administrative violence" to refer to the way administrative systems create narrow categories of gender and force people into them in order to get their basic needs met. This is mandatory reading for anybody who is responsible for policy setting and data ethics.

“The rigid maintenance of hierarchy and norms is really what the state is for. For me, my ethics of care says that we should be working for a radical data science: a data science that is not controlling, eliminationist, assimilatory. A data science premised on enabling autonomous control of data, on enabling plural ways of being. A data science that preserves context and does not punish those who do not participate in the system.”

“Data science is fundamentally premised on taking a reductive view of humanity and using that view to control and standardize the paths our lives can take, and it responds to critique only by expanding the degree to which it surveils us. So perhaps a more accurate definition of data science would be: The inhumane reduction of humanity down to what can be counted. And by the standardizing logic of data science, this tracking is going to be reductive: It will track only the things that matter to those building the systems, in the ways that are acceptable to them, punishing you when you deviate.”

These are all considerations to weigh when setting up AI Ethics Boards, and more specifically when deciding who sits on those boards and who sets the norms for our data systems.

Guest post by The Futures Agency content curator Petervan
