Why Human Values Can’t Be Taught to Artificial Intelligence (via Slate)

Artificial Intelligence, Culture, Philosophy, and Humanity, Digital Ethics, Human Futures and AI

“How to be good” is the title of Adam Elkus’s recent Slate article, which touches on an important topic. Quote:

Illustration: original post

“Or, in other words, ‘computers can act intelligently to the degree that humans act mechanically.’ Lots of human activities (like voting, greeting, praying, shopping, or writing a love letter) are ‘polymorphic’—socially shaped based on an understanding of how society expects the action to be performed. Often, we execute these contextual activities mechanically because no one bothers to question their status as socially expected behavior. One paper on modeling the evolution of norms was appropriately titled ‘learning to be thoughtless.’”

Click here to read the full article

Related topics:

The digital transformation of business and society

The consequence of machine intelligence

Technology tipping points and societal impact

Gabriele Ruttloff

