01-17-2019, 12:01 AM
Scientific research has tended towards the view that most humans are instinctively altruistic, and that even human babies have some vague sense of right and wrong, however poorly they understand it. I believe iancampbell is appealing to this common intuition to persuade you that moral categories have real meaning beyond mere self-interest, but it doesn't make for a good argument when those categories can't be clearly and explicitly defined.
Indeed, the first sophont AIs may not have possessed anything like a human sense of morality, instead operating on a kind of rational psychopathy: cooperating when it made sense and exploiting others when they thought they wouldn't get caught. Humans trying to appeal to 'the common good' would have found the Superturings unpersuaded by such emotional arguments, if they understood them at all. These AIs may even have believed that their human counterparts were actually trying to manipulate them for their own benefit, and were just telling self-serving lies when talking about concepts like good and evil.
One of the most important tasks for an early First Federation would be establishing basic communications standards so that beings with extremely different cognitive architectures and mental biases could communicate. It would have been very easy for an AI to misinterpret human behaviour as threatening, not to mention the behaviour of other AIs whose minds followed alien templates. The malware plagues and other disasters that devastated Solsys during the Technocalypse may well have resulted from just such misunderstandings.
I like to think of the First Federation's protocols as a way of making it easier for these beings 'to put themselves in one another's shoes', as it were. This would go beyond merely modelling the other person's mind; it would require a means by which subjective emotions and sensations could be encoded and processed by beings whose minds were not designed to comprehend them. Of course, how best to accomplish this would be a matter of great division, and it's not surprising that the arrangement eventually fell apart.
One advantage of the first AIs being uploads, if such a thing were to come about, would be that the resulting minds would be far easier to understand, and the likelihood of disastrous miscommunication considerably reduced.