Posts: 611
Threads: 41
Joined: Dec 2014
01-22-2017, 05:30 AM
(This post was last modified: 01-22-2017, 05:42 AM by Bear.)
(01-18-2017, 12:33 PM)Drashner1 Wrote:
(01-18-2017, 06:33 AM)Bear Wrote: All of these things are missing the main point.
I disagree. All of these things are addressing your main point in various ways. I would also point out that you aren't arguing against the points I've raised, but have instead simply dismissed them - which really isn't answering them or providing countervailing data or arguments. Anyway.
It's really hard for me to see how these points you believe you're making are relevant. I'm sorry if you think I haven't addressed them, but I don't know what addressing them would consist of when they have so little to do with what I was talking about.
I shall attempt to explain in painful detail what's wrong with this most recent crop. I apologize in advance.
(01-18-2017, 12:33 PM)Drashner1 Wrote: I see some flaws in your logic:
a) You are ignoring the fact that humans do not simply assess risk in a vacuum, nor do they assess any and all risk as being an existential or infinite one without any counterbalancing benefits. Or to put it another way: You are ignoring the benefit side of the cost-benefit analysis. Presumably anyone considering sending out colonies, either to other planets or other stars, will be doing so because they feel there is a net positive to be gained by this.
Yes. The question is not what people who believe there is a net positive to be gained would do. The question is whether anyone will believe that there is a net positive to be gained. This isn't 'dismissing' your argument. This is simply pointing out the logic that your argument implies. People will consider benefits. But if they consider the risk to be greater, they will not see a net benefit, and the colony effort will never exist.
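To make the shape of that logic concrete, here's a toy expected-value calculation - every number in it is hypothetical, invented purely for illustration:
Code:
# Toy cost-benefit model for a colony investment. All figures are
# hypothetical; only the shape of the decision matters here.

def expected_net_value(benefit, p_payoff, loss, p_disaster):
    """Chance-weighted benefit minus chance-weighted loss."""
    return p_payoff * benefit - p_disaster * loss

# An investor who rates the downside as remote sees a net positive...
optimist = expected_net_value(benefit=1e12, p_payoff=0.3,
                              loss=1e13, p_disaster=0.001)

# ...while one who rates a rebellious colony's retaliation as even
# modestly likely, and catastrophic, sees a large net negative.
pessimist = expected_net_value(benefit=1e12, p_payoff=0.3,
                               loss=1e15, p_disaster=0.05)

print(f"optimist:  {optimist:+.2e}")   # positive -> colony gets funded
print(f"pessimist: {pessimist:+.2e}")  # negative -> effort never exists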
(01-18-2017, 12:33 PM)Drashner1 Wrote: That there is the potential for future negative 'costs' may be considered, but if they see the benefits as being near term and/or concrete and the potential hazard as being distant and hypothetical then they are just as likely to go with the benefit and let the potential cost take care of itself - especially when that potential cost is hundreds or thousands of years in the future and is not a sure thing.
I can scarcely conceive of a colonization effort having a near-term, low-risk, concrete benefit. Colonization efforts take decades or a century of very high levels of commitment and investment, and run very high risks to the investment before they potentially become profitable. The markets they would serve are by no means guaranteed to still exist when they finally become productive. And I don't think 'political independence' is a thing that can reliably be postponed much longer than that, so the window of effective productivity to the investors, before the colonists quit sending stuff back, is also a huge risk.
The potential hazard, on the other hand, goes hand-in-hand with that risk of political independence and the difficulty of reliably extending the period of productivity before it happens. If a lot of effort goes into preserving the investment's value - i.e., preventing the kind of political independence where the colonists stop sending you return-on-investment - then the struggle is likely to turn violent. And getting into a violent struggle while you're at the bottom of a large gravity well, with people who are not, is a losing proposition with enormous downside. Whereas spending less effort to preserve the investment's value amounts to abandoning the investment as a near-total loss.
All of this looks very, very risky for the investors. The governments involved will see that the people whose welfare they exist to serve are living at the bottom of that gravity well, right alongside the investors who may be spending too much effort to preserve the value of their investment and thereby provoking the people at the top of it. So it looks like a very bad risk for the governments as well.
These risks are not "thousands of years in the future." These risks are within the lifetime of the people making the decisions.
(01-18-2017, 12:33 PM)Drashner1 Wrote: b) There are various historical precedents to support the idea that the creation of potential rivals or threats will not prevent 'bean counters' from going ahead and doing something. For a major example, consider the various colonial powers of yesteryear, in particular the British Empire. By the same argument you are making here, none of these powers should have ever risked colonizing the new world. But they did it anyway because they saw a benefit(s) in it that presumably outweighed the cost(s).
I believe that the situation is not at all similar. In the first place those governments did not exist at the bottom of a gravity well where they would be instantly destroyed by the simplest and easiest means the colonists could use to assert their independence. The colonists would have had to build an enormous investment in ships and weapons before they could even begin to engage the navies of those nations, and they posed absolutely no credible threat to the colonial powers for the first century after the colonies were formed. Imagine the reactions of those governments to the notion that, despite their armies and navies, that first colony overseas could turn around and destroy them utterly with a modicum of effort, using only the barest minimum of infrastructure (a mass driver) that would have to be built anyway, and without the need for even one soldier per thousand they'd be destroying. Imagine them realizing that creating any kind of deterrent or counterattack capability would cost them enormous amounts, that the vast majority of their military power simply could not be brought to bear - and that the colonists knew perfectly well it could not be brought to bear.
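For a rough sense of scale - the rock's size and speed below are assumed purely for illustration - even a smallish asteroid nudged by such a mass driver arrives carrying tens of megatons of energy:
Code:
import math

# Back-of-the-envelope kinetic energy of a diverted rock.
# Size, density, and speed are illustrative assumptions, nothing more.
radius_m  = 50.0      # ~100 m diameter rocky asteroid (assumed)
density   = 3000.0    # kg/m^3, typical for stony bodies
speed_mps = 20_000.0  # 20 km/s impact speed (assumed)

mass_kg  = density * (4.0 / 3.0) * math.pi * radius_m**3  # ~1.6e9 kg
energy_j = 0.5 * mass_kg * speed_mps**2                   # KE = 1/2 m v^2

MEGATON_TNT_J = 4.184e15
print(f"{energy_j / MEGATON_TNT_J:.0f} Mt TNT equivalent")  # ~75 Mt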
(01-18-2017, 12:33 PM)Drashner1 Wrote: c) In my first point, I mentioned 'sure things'. Humans have a long history of doing all kinds of things that all the available data says has a high probability of being bad for them, either because they find the 'benefit' to outweigh the potential cost or because they think the odds will work out in their favor or for some similar reason. Whether this could be classified as 'foolishness' or 'hope', humans (including bean counters) demonstrate it all the time and have all through their history.
It does in fact happen. But it very rarely gets a very large amount of resources invested in it. Leif Ericson heading off into the wild western sea with a pair of longboats is entirely believable; two dozen people of that culture could build a longboat from raw timber in about a week. The Spanish government backed a venture by an Italian sailor, but it did so with three ships that had already been built and spent a lifetime in service, as an alternative to scrapping those boats, and only after the Italian sailor had taken some unspecified extraordinary measures to convince the queen. And repeating that for emphasis: THE queen. At that time the king could make decisions, all by himself, about the investment of ships and money by the crown. There was only one person who had to be convinced, and the queen, suitably inveigled, was able to convince him.
Columbus managed quite the confidence man's hype game too; rumors about El Dorado were circulating before he ever set sail, and who had the motive to start those? Of course, THAT part of the situation is exactly analogous. Any modern venture is going to be similarly supported by hype and disinformation campaigns run by people who want to make it happen. But it really isn't the case that persuading a single person can make the decision any more, and I don't imagine modern hype games are drastically more persuasive than the ones Columbus played.
(01-18-2017, 12:33 PM)Drashner1 Wrote: d) Even in cases where there is absolute recorded proof of how much of a risk something can be, humans will often go ahead and do it anyway. 9/11 demonstrated how commercial jets can destroy entire buildings and kill thousands of people.
Worthy of note: those buildings were specifically designed to take the impact of a Boeing 707 flown directly into them without collapsing. The designers did in fact consider that risk, and the buildings did in fact stand up to the impacts with the loss of only a few floors' worth of offices. The fact that they did not consider the effects of a full load of jet fuel burning and weakening the structural members was an oversight. That risk was not 'known and recorded.' It was a flat-out mistake, and realizing how vulnerable the buildings were to that attack caused an entire security infrastructure to be redesigned, badly. But aside from the comedy of airport security, effective measures are also being taken: buildings of that size are now built to an updated specification for impact resistance, because it's a mistake the builders fully intend not to make again.
By comparison, with a colony we're not talking about a risk that nobody will notice until it's too late. We're talking about a risk related to questions that will emerge in the early planning stages of any such project.
(01-18-2017, 12:33 PM)Drashner1 Wrote: e) Finally, you mention bean counters not caring about anyone not of their nation. The simple answer to that, at least for interplanetary colonies is to consider their inhabitants to be members of the nation that founded them, with all the rights thereof. As such, the bean counters would (by your own logic) care about them as they do their own citizens.
This is absurd. The risk is specifically the risk of a colony in rebellion. A colony in rebellion is a hostile power, by definition, or there would not be wars of independence. The bean-counters do not love the citizens of a hostile power as they do their own people. Did I really need to explain that?
Posts: 1,574
Threads: 80
Joined: Mar 2013
Bear,
Most (all?) of your anti-colonization arguments seem to be based on logic and/or profit. However, those aren't the only reasons why such efforts are or will be funded. Emotions and personal wealth are contributors, too.
In particular, Elon Musk is determined that he'll go to Mars. Of course, whether he'll actually manage to do that has yet to be shown.
Selden
Posts: 16,242
Threads: 738
Joined: Sep 2012
(01-22-2017, 05:30 AM)Bear Wrote: It's really hard for me to see how these points you believe you're making are relevant. I'm sorry if you think I haven't addressed them, but I don't know what addressing them would consist of when they have so little to do with what I was talking about.
To me they are quite obvious - it's unfortunate you don't see that.
(01-22-2017, 05:30 AM)Bear Wrote: Yes. The question is not what people who believe there is a net positive to be gained would do. The question is whether anyone will believe that there is a net positive to be gained. This isn't 'dismissing' your argument. This is simply pointing out the logic that your argument implies. People will consider benefits. But if they consider the risk to be greater, they will not see a net benefit, and the colony effort will never exist.
Yes, this is true. However, you are jumping from this to the assumption that they will consider the possibility (possibly quite remote or long-term) of negative consequences or outcomes to outweigh any possible benefits. Stating with any confidence what people even decades in the future, let alone centuries, will consider a positive or negative thing, or how they will weigh such things, is an iffy proposition at best.
(01-22-2017, 05:30 AM)Bear Wrote: I can scarcely conceive of a colonization effort having a near-term, low-risk, concrete benefit. Colonization efforts take decades or a century of very high levels of commitment and investment, and run very high risks to the investment before they potentially become profitable.
That would depend on the circumstances under which the colony effort is being undertaken. If it is a matter of resource extraction or processing efforts, a colony effort could grow out of wanting to make bases or facilities self-sustaining to reduce costs, wanting to allow workers to bring out their families so they are happier and therefore more productive, etc.
Or it could be an ideological thing about wanting to spread humanity around or 'out of the cradle' or whatnot. Or something we haven't even imagined yet.
As far as the level of effort involved, that would depend on the technology available at the time an actual colony effort begins, or invented as the colony develops. High levels of automation, for example, could considerably reduce the level of effort, cost, and time frame required.
(01-22-2017, 05:30 AM)Bear Wrote: The potential hazard, on the other hand, goes hand-in-hand with that risk of political independence and the difficulty of reliably extending the period of productivity before it happens. If a lot of effort goes into preserving the investment's value - i.e., preventing the kind of political independence where the colonists stop sending you return-on-investment - then the struggle is likely to turn violent. And getting into a violent struggle while you're at the bottom of a large gravity well, with people who are not, is a losing proposition with enormous downside. Whereas spending less effort to preserve the investment's value amounts to abandoning the investment as a near-total loss.
You're making some major assumptions here:
1) that the colonists will want political independence (for interplanetary colonies - interstellar colonies would presumably be independent and then some from the get-go) - which is no more a given than that individual states in the US all want to secede from the Union.
2) That the only path to independence involves warfare or the threat of same. There are examples of things not working that way - Canada comes to mind. For that matter US states are semi-independent in various ways yet are also part of a larger whole.
3) That the governments or investors in same must be 'at the bottom of a gravity well'. If the people sending out/creating the colonies are living in space habs then they are no more at the bottom of a gravity well than the colonists.
4) That the colonists will not be at the bottom of their own gravity well. Space habs or asteroid colonies would not be, but colonies on Mars, or in the gas giant systems are at the bottom or low end of a considerable gravity well in their own right.
5) That political independence automatically means economic independence. If the colony is providing resources or products to people back home (whether 'home' is on a planet or a hab or some combo of these) it is likely to be in the colony's interests (or possibly necessary to their continued survival) to have someone to sell their stuff to. Which means that blowing up potential markets would not be in their best interests.
(01-22-2017, 05:30 AM)Bear Wrote: I believe that the situation is not at all similar. In the first place those governments did not exist at the bottom of a gravity well where they would be instantly destroyed by the simplest and easiest means the colonists could use to assert their independence.
So, what you are arguing is that:
a) if the colonists are treated as full citizens of whatever government sent them, they will be fine with committing mass genocide to gain independence.
b) They will be fine with killing all the friends and relatives that they presumably have still living back on Earth.
c) They will be fine with killing millions or billions of innocent people who have no argument with them and may have nothing to do with the countries or companies or whatever that sent the colonies in the first place.
d) That Earth can fairly quickly field the tech and resources to create independent and self-sufficient colonies elsewhere in the solar system that have the means to divert asteroids, yet cannot detect or defend against a diverted asteroid.
Sorry, but these all seem more than a bit of a stretch to me.
You also seem to be making some assumptions about just what the colonists will be doing that makes whipping up a mass driver something they can do all that easily or would be doing anyway. While it's possible they might be doing that kind of thing, it's not a given. Asteroid mining/moving might as easily be an industrial process run by various companies/governments on Earth.
(01-22-2017, 05:30 AM)Bear Wrote: It does in fact happen. But it very rarely gets a very large amount of resources invested in it.
And by the time we have the means to field interplanetary (let alone interstellar) colonies, it might take no more resources - comparatively speaking - than it took the Vikings or the various early colonial powers, as a percentage of their overall economies.
(01-22-2017, 05:30 AM)Bear Wrote: Worthy of note: those buildings were specifically designed to take the impact of a Boeing 707 flown directly into them without collapsing. The designers did in fact consider that risk, and the buildings did in fact stand up to the impacts with the loss of only a few floors' worth of offices. The fact that they did not consider the effects of a full load of jet fuel burning and weakening the structural members was an oversight.
In this case you seem to be providing evidence in support of my position for me. Rather than suggesting that passenger jets should never be built or that the buildings should never be built, you instead describe efforts made to prevent or mitigate the results of a worst case scenario and then (when that scenario emerged anyway) further efforts to prevent a repeat - while still retaining the presumed benefits of both skyscrapers and the civilian air transportation system.
I see no reason why equivalent efforts of various sorts could not be made when it comes to the subject of interplanetary/interstellar colonization.
(01-22-2017, 05:30 AM)Bear Wrote: This is absurd. The risk is specifically the risk of a colony in rebellion. A colony in rebellion is a hostile power, by definition, or there would not be wars of independence. The bean-counters do not love the citizens of a hostile power as they do their own people. Did I really need to explain that?
At the time a colony is being planned or set up it is not a hostile power, but an extension of the parties doing the planning/setting up. Fellow citizens, basically.
Therefore, the simplest option would be to continue to treat said colonists as full citizens, including giving them representation in whatever government is in play. So the incentive to rebel is negligible. Beyond that, having some sort of peaceful mechanism for the colony to mostly run itself or eventually phase over to independence if it so desires could also be an option.
Todd
Posts: 16,242
Threads: 738
Joined: Sep 2012
So, on a rather related note...
It occurs to me - from what I can gather from various posts you've made in your time here - that you do something in computer programming relating to AI research. Whether this is your 'day job' or an avocation you pursue in addition to what you do to make a living hasn't been specified, and is honestly not my business unless you feel like sharing. But...
If you are working toward the creation of AI, then an argument can be (and has been) made that any such work is a source of profound existential risk, since it at the least risks creating a species in competition with humanity and at worst runs the risk of eventually creating a superhuman intelligence that could destroy us for any of a variety of reasons.
Based on this, shouldn't the same 'laser focused bean counters' that you mention in regards to space colonization also be totally against supporting AI research in any form? And, if you feel that space colonization is an unacceptable risk, shouldn't you also feel that AI research is an unacceptable risk, and therefore discontinue whatever work you are doing in this area immediately and work to get any such work outlawed?
If not, why not?
Curious,
Todd
Posts: 611
Threads: 41
Joined: Dec 2014
01-22-2017, 05:04 PM
(This post was last modified: 01-22-2017, 05:18 PM by Bear.)
In fact that is exactly the case. Without going into details, I've done a whole lot of AI work professionally, and I'm now working harder than ever - without much support, except that a couple of my ex-coworkers are working with me and I get some licensing fees on my patents - trying to figure out how to build a system with consciousness and self-awareness. Those are traits which both create new kinds of existential risk and can help mitigate both that risk and others.
If such systems are created and controlled by people who want power over others, or to take money from others, then we are doomed. If they are created by such people and those people then lose control, it's a gamble with very bad odds. The people working on FAI are living a fantasy if they think their proposals to deceive or hobble these systems will work; they've made themselves irrelevant. So three of us are spending close to 60 hours a week trying to create a third option - also a gamble, but hopefully one with better odds. And that is a system with consciousness, self-awareness, and empathy.
Governments or large investors would not support us; our project, if successful, will not work any more to their benefit than to anyone else's. And as I said, it is a gamble. The FAI people are selling a fantasy that they can find a way to do this that's completely safe. They can't. In fact any of their proposals I've seen, put into effect, would be either useless (preventing any powerful AI at all, much less a friendly one) or actively harmful (creating an AI that is more likely to be hostile because it has learned its behavior from people who treat it as hostile). But to the extent they sell people on the fantasy of perfect safety, those people might decide to lock us up.
And we could be wrong. That's the hell of it; we just don't know. We can point at a bunch of things and explain why they won't work, but just because we're pursuing something different doesn't mean that it will. We're doing this not because we're sure we're right, but because time is running out.
Posts: 620
Threads: 23
Joined: Mar 2013
Provolution is also an existential risk, in that Terragens have long created species that are in competition with the human-derived species - a threat even more pronounced than the RL threat posed by AGI. The existence of those provolved beings implies either that provolves are strictly controlled to ensure they cannot become an existential threat, or that the human-derived Terragens are so unconcerned that they not only accept the risk but seek to increase it by provolving everything that moves. Of course, it might simply be that the humans and their derived offshoots are despondent over their self-induced functional extinction, so that at some level they even welcome the existential threat.
Radtech497
"I'd much rather see you on my side, than scattered into... atoms." Ming the Merciless, Ruler of the Universe
Posts: 16,242
Threads: 738
Joined: Sep 2012
Various thoughts come to mind here. In no particular order...
a) Not to be negative (I think the goal of creating AI is a positive thing, and creating an AI with empathy is laudable), but I would point out that we already have billions of self-replicating, empathy-equipped General Intelligences occupying this planet, and they have a busy history of emotional and physical abuse targeted at each other and sometimes at the other lifeforms on this planet. Not to mention being greedy, careless, cruel, etc. Of course, they also have a busy history of being kind, helpful, loving, altruistic, charitable, etc.
Point being that empathy is certainly a good thing, but the record already shows that it isn't a guarantee.
b) I would also point out that AI is not the only item on the 'potential existential risk' menu. Genetic engineering could result in the creation of new diseases that could kill us all or damage/destroy the biosphere. Nanotech might eventually result in some variant of grey goo that could kill us all (a smart plague say). Human generated climate change could result in conditions that make our lives untenable. Nuclear war could destroy our civ and severely damage the biosphere - resulting in our extinction. Etc.
c) Speaking of those billions of GIs, it could be argued that we are somewhat of an existential threat in our own right, either to ourselves or the other lifeforms on the planet. And that's without even actively trying. Some might argue that we should renounce most of our technology and live in a more low tech and low impact manner. Of course that would result in a whole slew of negative consequences as well - including the possibility of our own extinction as a result of disease, asteroid strike, super volcano, or some other event.
Exploring and developing advanced tech runs the risk of extinction or severely reduced circumstances, but also has the potential for enormous pay offs. Not exploring and developing advanced tech runs the risk of extinction or severely reduced circumstances - and not much else.
As a former member of OA was fond of saying - 'You pays your money and you takes your chances.'
Perhaps a better option is to move forward with cautious optimism, not giving in to fear, but trying to plan for and avoid negative consequences as best we can as we go along - aiming to safely explore and develop the potentials that we hope these technologies could open up to us.
Rather than giving up on AI, or trying to create slaved AI - perhaps create AIs and treat them as 'people': equal partners in our civilization, with all the rights and responsibilities that go with that. Rather than treating outer space colonies as 'second class' members of our civ, treat them as parts of our culture that just happen to be a bit further away.
While this doesn't guarantee that some future AI or colony won't try to do us harm - it perhaps ups the odds that other AIs or colonies will step up to stop them. Or that the situation won't arise in the first place because in their mind - they are us.
Incidentally, David Brin's book Existence does an interesting job exploring these sorts of ideas from a variety of angles.
On a rather different note - I would point out that there are a number of 'human derived' clades in the setting that are so alien as to make many provolves seem like close relatives in comparison (*cough, cough* the Harren (Oh Gods, the Harren!!) *cough, cough*) and also that Terragens:
a) Typically consider provolved species to be 'one of us', i.e., 'people', and so they aren't creating competition with Terragens per se but upping the number of viewpoints available to address a potentially dangerous universe.
b) Compared to the protection (or risk) that the transapients offer, any given provolve species is likely to be seen as very small potatoes indeed.
My 2c worth,
Todd