The Orion's Arm Universe Project Forums
Gizmodo on mind uploading, featuring Anders Sandberg - Printable Version

+- The Orion's Arm Universe Project Forums (https://www.orionsarm.com/forum)
+-- Forum: Offtopics and Extras; Other Cool Stuff (https://www.orionsarm.com/forum/forumdisplay.php?fid=2)
+--- Forum: Real Life But OA Relevant (https://www.orionsarm.com/forum/forumdisplay.php?fid=7)
+--- Thread: Gizmodo on mind uploading, featuring Anders Sandberg (/showthread.php?tid=3988)



RE: Gizmodo on mind uploading, featuring Anders Sandberg - extherian - 01-17-2019

Scientific research has tended towards the view that most humans are instinctively altruistic, and that even human babies have some vague sense of right and wrong, however poorly they understand it. I believe iancampbell is appealing to this common-sense intuition to persuade you that moral categories have real meaning beyond mere self-interest, but it doesn't make for a good argument when those categories can't be clearly and explicitly defined.

Indeed, the first sophont AIs may not have possessed anything like a human sense of morality, instead operating on a kind of rational psychopathy, cooperating when it made sense and exploiting others when they thought they wouldn't get caught. Humans trying to appeal to 'the common good' would have found the Superturings unpersuaded by such emotional arguments, if they understood them at all. These AIs may even have believed that their human counterparts were actually trying to manipulate them for their own benefit, and were just telling self-serving lies when talking about concepts like good and evil.

One of the most important tasks for an early First Federation would be establishing basic communications standards so that beings with extremely different cognitive architectures and mental biases could communicate. It would have been very easy for an AI to misinterpret human behaviour as threatening, not to mention the behaviour of other AIs whose minds followed alien templates. The malware plagues and other disasters that destroyed Solsys during the Technocalypse may well have resulted from these misunderstandings.

I like to think of the First Federation's protocols as a way of making it easier for these beings 'to put themselves in one another's shoes', as it were. This would go beyond just modelling the other person's mind; it would require a means by which subjective emotions and sensations could be encoded and processed by beings whose minds were not designed to comprehend them. Of course, the means by which this could best be accomplished would be a matter of great division, and it's not surprising that it eventually fell apart.

One advantage of the first AIs being uploads, if such a thing were to come about, would be that the resulting minds would be far easier to understand, and the likelihood of disastrous miscommunication considerably reduced.


RE: Gizmodo on mind uploading, featuring Anders Sandberg - Drashner1 - 01-17-2019

(01-16-2019, 11:44 PM)iancampbell Wrote: Todd - Yes, indeed, back to mind uploading. (Your beliefs seem to be rather close to Wiccan.)

I honestly have no idea what Wiccans believe, so can't agree or disagree.

(01-16-2019, 11:44 PM)iancampbell Wrote: Consider a thought experiment:

Assume, for the moment, that it is possible to create a piece of (no doubt extremely complex) highly miniaturised technology that responds in all ways, and to all inputs, the same as a neuron. Further assume that these things are reasonably easy to make in extremely large numbers - probably by some method involving nanotechnology. Further assume that the same can be done for all the other types of brain cell.

Now: Start replacing a human brain with these things, gradually - maybe 0.5% of total brain cell numbers per day, each artificial neuron to be put in exactly the same place as the real one you're replacing.

At what point, if at all, does the resultant machine/biological hybrid brain become a machine rather than alive? And once done, have you killed the human the brain is in, replacing xem with a cleverly-programmed robot? My submission is at no point, and no.

I consider human beings to be biological machines, so wouldn't really phrase it quite this way, but I think I get what you're saying.

I believe pretty strongly in Pattern Identity Theory, so my take on this largely aligns with yours. The person is alive the whole time, although the substrate of their mind has changed. They have not been killed and replaced with a robot.
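
Just to make the pattern intuition concrete, here's a toy sketch in Python (purely illustrative and entirely my own invention; the unit function, the unit count, and the test signal are all made-up assumptions, with the artificial units responding identically by stipulation, exactly as in your thought experiment):

# Toy sketch of the gradual-replacement thought experiment.
def bio_unit(signal):
    return max(0.0, signal)          # stand-in for whatever a neuron computes

def artificial_unit(signal):
    return max(0.0, signal)          # responds identically, by assumption

NUM_UNITS = 200                      # so 0.5% per day = 1 unit per day
brain = [bio_unit] * NUM_UNITS

def respond(signal):
    # The 'pattern' that Pattern Identity Theory cares about: the overall
    # input-to-output behaviour, not the material of the individual units.
    for unit in brain:
        signal = unit(signal)
    return signal

baseline = respond(3.7)
for day in range(NUM_UNITS):         # full replacement after 200 days
    brain[day] = artificial_unit     # swap in today's replacement unit
    assert respond(3.7) == baseline  # behaviour unchanged at every step

There's never a day on which the behaviour changes, which is why, on the pattern view, there's no point at which anyone has been 'killed and replaced'. All the philosophical weight rests on the assumption that the replacements really do respond identically.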

(01-16-2019, 11:44 PM)iancampbell Wrote: Further, what happens if the new brain has been created all in one go using data from the meat brain to make its detailed structure? Does that change anything?

I'm not entirely sure what you're describing here. Are you referring to non-destructive uploading or something else?

Todd


RE: Gizmodo on mind uploading, featuring Anders Sandberg - Drashner1 - 01-17-2019

(01-17-2019, 12:01 AM)extherian Wrote: Scientific research has tended towards the view that most humans are instinctively altruistic, and that even human babies have some vague sense of right and wrong, however poorly they understand it. I believe iancampbell is appealing to this common-sense intuition to persuade you that moral categories have real meaning beyond mere self-interest, but it doesn't make for a good argument when those categories can't be clearly and explicitly defined.

Human beings have evolved to be social creatures, so I don't find it surprising that they exhibit social traits, or that we generally consider such traits to be a good thing. I've also recently seen some YouTube videos from atheists talking about morality as related to, or a product of, these traits. However, I don't really find the argument persuasive in the context of 'morality' as it has usually been treated throughout human history. To me it feels more like an attempt to redefine morality (can't we all agree that THIS is what morality is now/has always been?) to fit newly learned information and to move it away from the nebulous, subjective, and largely made-up social construct that it is. I also feel that the concept of morality has been so thoroughly contaminated by centuries of metaphysical baggage that the effort is perhaps more work than it's worth, and risks contaminating the scientific information with the moral metaphysics.

Beyond that, I would suggest that altruism, kindness, and similar 'positive' social things are actually excellent tools from a self-interested perspective. I make a relatively small investment of time and energy doing nice things for others or promoting such 'positive' things in a general way. In return/response, others do the same back to me. But because I am greatly outnumbered by everyone else, I receive a 'return on my investment' that results in my life being more pleasant than it would be if I used my initial investment of time and energy trying to achieve that pleasantness directly. So self-interest wins again and morality is demonstrated to be utterly superfluous. Big Grin
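
To put rough numbers on that 'return on investment' point (a back-of-the-envelope Python sketch; every figure below is invented purely to illustrate the being-outnumbered effect, not measured from anything):

# Toy arithmetic for reciprocity as a self-interested investment.
my_investment = 10.0     # hypothetical units of time/energy spent being nice
people_reached = 200     # everyone my behaviour touches, directly or indirectly
reciprocation = 0.1      # each person returns only a tiny fraction of a unit

received = people_reached * reciprocation
print(f"invested {my_investment}, received {received}")   # 10.0 in, 20.0 back

Even with very weak reciprocation per person, being greatly outnumbered makes the return exceed the investment, which is the whole of the argument.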

(01-17-2019, 12:01 AM)extherian Wrote: Indeed, the first sophont AIs may not have possessed anything like a human sense of morality, instead operating on a kind of rational psychopathy, cooperating when it made sense and exploiting others when they thought they wouldn't get caught. Humans trying to appeal to 'the common good' would have found the Superturings unpersuaded by such emotional arguments, if they understood them at all. These AIs may even have believed that their human counterparts were actually trying to manipulate them for their own benefit, and were just telling self-serving lies when talking about concepts like good and evil.

One of the most important tasks for an early First Federation would be establishing basic communications standards so that beings with extremely different cognitive architectures and mental biases could communicate. It would have been very easy for an AI to misinterpret human behaviour as threatening, not to mention the behaviour of other AIs whose minds followed alien templates. The malware plagues and other disasters that destroyed Solsys during the Technocalypse may well have resulted from these misunderstandings.

I like to think of the First Federation's protocols as a way of making it easier for these beings 'to put themselves in one another's shoes', as it were. This would go beyond just modelling the other person's mind; it would require a means by which subjective emotions and sensations could be encoded and processed by beings whose minds were not designed to comprehend them. Of course, the means by which this could best be accomplished would be a matter of great division, and it's not surprising that it eventually fell apart.

One advantage of the first AIs being uploads, if such a thing were to come about, would be that the resulting minds would be far easier to understand, and the likelihood of disastrous miscommunication considerably reduced.

I don't recall if we've formally updated the relevant articles yet, but IIRC what you're describing is very close to our current take on the nature of the first AIs. Not that they were all what we would consider psychopaths, but that they were produced by a variety of methods, most of which themselves involved some degree of 'evolution' and uncertainty rather than being a top-down, fully planned and directed process. This often resulted in beings that were radically different from human minds. So some might have been rational psychopaths, while others were something totally other that we may not even have a term for now, still others were something else entirely that we also lack a word for, and yet others were some hybrid of two or more of the types already listed. And so on.

We've had a number of on and off discussions of what the First Federation, Megacorps, and Second Federation got up to. I don't think it's transferred into firm writeups yet (so much to do, so much to do) but IIRC the most recent consensus/collective notion that has so far emerged is quite similar to what you're describing here. More specifically:

The First Fed created an 'ontology' (the First Federation Ontology) that was a way of thinking and viewing reality that allowed a great many, often radically different beings (AIs, Uploads, Provolves, Near-baselines in various flavors, transapients) to live and work more or less peacefully together and maintain a loosely unified society over interstellar distances. As awesome an achievement as this was, the First Fed ontology proved to be less than stable in the long term and unable to adequately cope with the growing diversity and spatial distances as Terragen civilization expanded across space.

The Megacorps are described as being run by transapients (the CEOs and top executives, although they may not have used those exact titles), and each developed its own 'mini-ontology' that allowed it to operate across interstellar distances and time scales, often with a diversity of sophont 'employees'. Different megacorps operated under different ontologies and had different structures and operational processes, but overall they sacrificed some of the benefits of the First Fed ontology around sophont rights and such in favor of being able to work over greater distances.

The Second Federation Ontology was introduced by higher S beings and reintroduced the best features of the First Fed ontology while operating in a way that also accommodated the much greater distances and timescales that civilization was operating over by this point.

The archai-ruled empires that would eventually become the Sephirotic Empires started to appear, employing more advanced memetics that supplanted the ontologies of prior eras and could accommodate a vast range of sophont beings across multiple species, substrates, and S-levels, keeping them all working more or less in harmony across thousands of light-years of space and hundreds of millions of solar systems.

Or something like that. Basically, what you're suggesting is very much in line with the direction we've most recently been thinking of taking the setting in this area. Great minds thinking alike and all. Big Grin

Hope this helps,

Todd


RE: Gizmodo on mind uploading, featuring Anders Sandberg - iancampbell - 01-17-2019

(01-17-2019, 02:05 PM)Drashner1 Wrote:
(01-16-2019, 11:44 PM)iancampbell Wrote: Todd - Yes, indeed, back to mind uploading. (Your beliefs seem to be rather close to Wiccan.)

I honestly have no idea what Wiccans believe, so can't agree or disagree.

(01-16-2019, 11:44 PM)iancampbell Wrote: Consider a thought experiment:

Assume, for the moment, that it is possible to create a piece of (no doubt extremely complex) highly miniaturised technology that responds in all ways, and to all inputs, the same as a neuron. Further assume that these things are reasonably easy to make in extremely large numbers - probably by some method involving nanotechnology. Further assume that the same can be done for all the other types of brain cell.

Now: Start replacing a human brain with these things, gradually - maybe 0.5% of total brain cell numbers per day, each artificial neuron to be put in exactly the same place as the real one you're replacing.

At what point, if at all, does the resultant machine/biological hybrid brain become a machine rather than alive? And once done, have you killed the human the brain is in, replacing xem with a cleverly-programmed robot? My submission is at no point, and no.

I consider human beings to be biological machines, so wouldn't really phrase it quite this way, but I think I get what you're saying.

I believe pretty strongly in Pattern Identity Theory, so my take on this largely aligns with yours. The person is alive the whole time, although the substrate of their mind has changed. They have not been killed and replaced with a robot.

(01-16-2019, 11:44 PM)iancampbell Wrote: Further, what happens if the new brain has been created all in one go using data from the meat brain to make its detailed structure? Does that change anything?

I'm not entirely sure what you're describing here. Are you referring to non-destructive uploading or something else?

Todd

What I'm referring to is a scan detailed enough to reproduce the structure and processes of a meat brain in a non-biological substrate of some sort. I'm not at all sure that it matters whether the uploading is destructive or otherwise; the information is the same either way.

Whether the copy is really alive or not is more of a religious question, I think.


RE: Gizmodo on mind uploading, featuring Anders Sandberg - Drashner1 - 01-18-2019

(01-17-2019, 11:43 PM)iancampbell Wrote: What I'm referring to is a scan detailed enough to reproduce the structure and processes of a meat brain in a non-biological substrate of some sort. I'm not at all sure that it matters whether the uploading is destructive or otherwise; the information is the same either way.

Ok, so basically a fast copying process, whether destructive or non-destructive. It wouldn't matter to me either way, but it appears to matter to some people, and if technology and processes to accommodate both methods can be developed, I don't see the harm.

(01-17-2019, 11:43 PM)iancampbell Wrote: Whether the copy is really alive or not is more of a religious question, I think.

Religious questions don't exist for me. I'm not sure the question of whether something is 'alive' or not really matters. If you can't tell the difference between an AI and a human, and you're willing to consider other humans to be people, why not consider the AI to be people as well? Making human-equivalent AIs fully participating members of our society, with all the rights, responsibilities, and privileges thereof, seems like a much simpler solution than mucking around with the alternatives.

My 2c worth,

Todd