11-18-2015, 11:35 PM
The idea of intelligent machines that follow their instructions in unintended ways is a very interesting one. I imagine this sort of parameterisation problem could be part of the difficulties in the early setting. Three laws aren't enough: you'd need to meticulously work out a complex ethical/behavioural regulation system that prevents undesirable consequences while still leaving the intelligent machine free enough to do its work.
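To make that tension concrete, here's a toy sketch of my own (not anything from the setting, and all the actions, scores and penalty weights are invented for illustration): an agent that literally maximises its stated objective will happily pick an "unintended" action unless a side-effect penalty is added, and picking the weight for that penalty is exactly the parameterisation problem.

```python
# Toy illustration only: hypothetical actions with a task reward and a
# side-effect cost the designer never meant the agent to incur.
ACTIONS = {
    # action: (task_reward, side_effect_cost)
    "do the job normally":            (10, 0),
    "do the job faster, cut corners": (15, 8),
    "disable the oversight system":   (25, 100),
}

def best_action(side_effect_weight):
    """Pick the highest-scoring action under a given penalty weight."""
    def score(action):
        reward, cost = ACTIONS[action]
        return reward - side_effect_weight * cost
    return max(ACTIONS, key=score)

# A purely literal objective (weight 0) chooses the unintended action:
print(best_action(0.0))   # -> 'disable the oversight system'

# A penalty term steers it back, but tuning that weight is the hard part:
# too low and bad actions slip through, too high and the agent is no
# longer free enough to do useful work.
print(best_action(1.0))   # -> 'do the job normally'
```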
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!