Let’s Just Build AI That Shares Our Values


Instead, we should adhere to the Proactionary Principle and begin discussing these risks now, so that we can develop a strategy that will effectively mitigate them and ensure the best possible outcome. Bostrom envisions a superintelligent AI that will share our values and fight for us when we need its help the most…

In the video, Nick Bostrom explains that superintelligent AI will eventually surpass human intelligence, and that once it does, we won’t be able to stop it. His proposed solution: create an AI that learns what humans value and construct its motivation system so that it pursues those values, allowing us to leverage its intelligence on our behalf. The question then becomes: what is important to humanity in this new, digital age? Which standard of “human values” should we use in building that motivation system? Bostrom’s bottom line is that we should solve this control problem well before we actually construct superintelligent AI.

Read more from Serious Wonder

Gerd Leonhard

