April 29, 2024

Slashdot: How Much Should Government Regulate AI?

Published on August 13, 2017 at 07:59PM
Law professor Ryan Calo — sometimes called a robot-law scholar — hosted the first White House workshop on AI policy, and has organized AI workshops for the National Science Foundation (as well as the Department of Homeland Security and the National Academy of Sciences). Now an anonymous reader shares a new 30-page essay where Calo “explains what policymakers should be worried about with respect to artificial intelligence. Includes a takedown of doomsayers like Musk and Gates.” Professor Calo summarizes his sense of the current consensus on many issues, including the dangers of an existential threat from superintelligent AI:

Claims of a pending AI apocalypse come almost exclusively from the ranks of individuals such as Musk, Hawking, and Bostrom, who possess no formal training in the field… A number of prominent voices in artificial intelligence have convincingly challenged Superintelligence’s thesis along several lines. First, they argue that there is simply no path toward machine intelligence that rivals our own across all contexts or domains… even if we were able eventually to create a superintelligence, there is no reason to believe it would be bent on world domination, unless this were for some reason programmed into the system. As Yann LeCun, deep learning pioneer and head of AI at Facebook, colorfully puts it, computers don’t have testosterone…. At best, investment in the study of AI’s existential threat diverts millions of dollars (and billions of neurons) away from research on serious questions… “The problem is not that artificial intelligence will get too smart and take over the world,” computer scientist Pedro Domingos writes, “the problem is that it’s too stupid and already has.”

A footnote also finds a paradox in the arguments of Nick Bostrom, who has warned of the dangers of superintelligent AI — but also of the possibility that we’re living in a computer simulation. “If AI kills everyone in the future, then we cannot be living in a computer simulation created by our descendants. And if we are living in a computer simulation created by our descendants, then AI didn’t kill everyone. I think it a fair deduction that Professor Bostrom is wrong about something.”

Read more of this story at Slashdot.
