05-07-2014, 07:09 PM
(This post was last modified: 05-07-2014, 07:10 PM by stevebowers.)
New paper by Peter Eckersley and Anders Sandberg:
Is Brain Emulation Dangerous?
Several scenarios are explored, including the intriguing and disturbing 'The Attacker Always Wins' scenario.
(The .pdf can be downloaded from:)
http://www.degruyter.com/view/j/jagi.201...format=INT
In OA we assume that the necessary computing power arrives much sooner than the technology required for scanning human brains.
We also assume that non-human AIs not based on whole brain emulations arrive much earlier than whole brain emulations, and that partial, incomplete emulations can be made before complete ones. These assumptions do not necessarily avert some of the uncomfortable conclusions that Eckersley and Sandberg arrive at; in fact, they might make the situation worse, since a highly competent non-human AI would probably find it relatively easy to hack or enslave a human upload/emulation.