In August 2016, Roberto V. Zicari interviewed John Markoff, technology writer at The New York Times. In 2013 Markoff was awarded a Pulitzer Prize. The interview relates to his book “Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots,” published in August 2015 by HarperCollins Ecco.
“Intelligent system designers do have ethical responsibilities.”
Q1. Do you share the concerns of prominent technology leaders such as Tesla’s chief executive, Elon Musk, who suggested we might need to regulate the development of artificial intelligence?

John Markoff: “I share their concerns, but not their assertions that we may be on the cusp of some kind of singularity or rapid advance to artificial general intelligence. I do think that machine autonomy raises specific ethical and safety concerns, and regulation is an obvious response.”
Q2. How difficult is it to reconcile the different interests of the people who are involved, directly or indirectly, in developing and deploying new technology?

John Markoff: “This is why we have governments and governmental regulation. I think AI, in that respect, is no different from any other technology. It should and can be regulated when human safety is at stake.”