01-23-2017, 07:31 AM
Various thoughts come to mind here. In no particular order...
a) Not to be negative (I think the goal of creating AI is a positive thing and creating an AI with empathy is laudable), but I would point out that we have billions of self-replicating, empathy-equipped General Intelligences already occupying this planet, and they have a busy history of emotional and physical abuse targeted at each other and sometimes at some of the other lifeforms on this planet. Not to mention being greedy, careless, cruel, etc. etc. Of course, they also have a busy history of being kind, helpful, loving, altruistic, charitable, etc. etc.
Point being that empathy is certainly a good thing, but the record already shows that it isn't a guarantee.
b) I would also point out that AI is not the only item on the 'potential existential risk' menu. Genetic engineering could result in the creation of new diseases that could kill us all or damage/destroy the biosphere. Nanotech might eventually result in some variant of grey goo that could kill us all (a smart plague, say). Human-generated climate change could result in conditions that make our lives untenable. Nuclear war could destroy our civ and severely damage the biosphere - resulting in our extinction. Etc.
c) Speaking of those billions of GIs, it could be argued that we are something of an existential threat in our own right, either to ourselves or to the other lifeforms on the planet. And that's without even actively trying. Some might argue that we should renounce most of our technology and live in a more low tech and low impact manner. Of course, that would result in a whole slew of negative consequences as well - including the possibility of our own extinction as a result of disease, asteroid strike, supervolcano, or some other event.
Exploring and developing advanced tech runs the risk of extinction or severely reduced circumstances, but also has the potential for enormous payoffs. Not exploring and developing advanced tech runs the risk of extinction or severely reduced circumstances - and not much else.
As a former member of OA was fond of saying - 'You pays your money and you takes your chances.'
Perhaps a better option is to move forward with cautious optimism - not giving in to fear, but trying to plan for and avoid negative consequences as we go along - aiming to safely explore and develop the potentials that we hope these technologies could open up to us.
Rather than giving up on AI, or trying to create slaved AI, perhaps we should create AI and treat them as 'people' - equal partners in our civilization, with all the rights and responsibilities that go with it. Rather than treating outer space colonies as 'second class' members of our civ, treat them as part of our culture that just happens to be a bit further away.
While this doesn't guarantee that some future AI or colony won't try to do us harm, it perhaps ups the odds that other AIs or colonies will step up to stop them. Or that the situation won't arise in the first place because, in their minds, they are us.
Incidentally, David Brin's book Existence does an interesting job exploring these sorts of ideas from a variety of angles.
On a rather different note - I would point out that there are a number of 'human derived' clades in the setting that are so alien as to make many provolves seem like close relatives in comparison (*cough, cough* the Harren (Oh Gods, the Harren!!) *cough, cough*) and also that Terragens:
a) Typically consider provolved species to be 'one of us', i.e. 'people', and so they aren't creating competition with Terragens per se, but upping the number of viewpoints available to address a potentially dangerous universe.
b) Compared to the protection (or risk) that the transapients offer, any given provolve species is likely to be seen as very small potatoes indeed.
My 2c worth,
Todd