06-04-2015, 07:26 PM
(This post was last modified: 06-04-2015, 07:38 PM by stevebowers.)
Quote:However, the problem I had is that his super-intelligent entities supposedly weren't conscious / self-aware despite demonstrating every ability to be so. They were quite capable of thinking about complicated environments containing many other entities and objects, and capable of manipulating themselves with respect to those others. They had to be aware that the "self object" differed from the other objects because, you know, they just couldn't think at and command other objects/people the way they could their own bodies. They evinced survival behaviours to keep the self-object safe in a way not applied to other objects.

One way to break the bond between 'self object' and 'external objects' is to give the entity control of several, or numerous, active devices. If the entity controls a large crew of maintenance robots, for example, and suffers minimal hardship if one or more of these devices is damaged or destroyed, then there would be no sense of self associated with them. The central processing system that controls these devices would be just one element among many that the entity controls and monitors, so it is assigned no special significance.
To take this dissociation still further, the care and maintenance of the central processor could conceivably be assigned to another entity, even a human; the original entity need not be involved in self-preservation at all. Perhaps, if motivated to do so, the entity in question might somehow determine where its CPU, and the associated off-switch, is located; but if the entity is not so motivated, then the question need not arise.
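The arrangement described above can be sketched as a toy program (all names here are hypothetical, invented purely for illustration): the node hosting the entity's own processing sits in the same device list as the maintenance robots it directs, and damage to it is handled by exactly the same code path, with nothing marking it out as "self".

```python
# Toy sketch of an entity whose own processor is just one monitored
# device among many. All names are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    operational: bool = True

class Controller:
    def __init__(self, devices, self_node):
        # The controller's own processor goes into the same list as the
        # maintenance robots; no field distinguishes it as "self".
        self.devices = devices + [self_node]

    def report_damage(self, name):
        # Damage to any device, including the central processor,
        # takes the same uniform path.
        for d in self.devices:
            if d.name == name:
                d.operational = False

    def maintenance_queue(self):
        # Damaged units are queued for repair with equal priority;
        # repair of the central processor could just as well be
        # delegated to another agent entirely.
        return [d.name for d in self.devices if not d.operational]

robots = [Device(f"robot-{i}") for i in range(3)]
ctl = Controller(robots, Device("central-processor"))
ctl.report_damage("robot-1")
ctl.report_damage("central-processor")
print(ctl.maintenance_queue())  # → ['robot-1', 'central-processor']
```

The point of the sketch is the uniformity: there is no privileged "self-preservation" branch, so nothing in the control loop would give rise to a special association with the processor node.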