The Orion's Arm Universe Project Forums





How Often Do Humans Legitimately Win?
#11
The comparison of humans with animals and transaps with modos is a common one in the setting/project and in some circumstances it is a reasonably useful analogy.

Where things break down, however, is that it tends to ignore just how much more physically capable a transapient can be if it wishes. Humans are more or less stuck in the bodies we have in RL. For a transapient, the body (or bodies) it has at any given time can largely be based on need or desire.

Or to put it another way - what would the woman/bear/selfie situation have looked like if the woman had been wearing a vec body with diamondoid or Ultimate Muscle musculature that would let it bend steel, shot-put at least small cars, and basically shrug off all the biting and clawing the bear could do? And that body was one of 50 bodies in the area, encompassing a variety of body plans from aircraft to light armored vehicles, such that if she wanted to she could rip the bear limb from limb before reducing the entire zoo to a smoking crater in the ground? And all those bodies were just a fraction of what she was, since she had thousands of bodies scattered around the entire planet and in orbit, and her actual 'mind' was running across multiple server farms in multiple locations? And all that had backups?

And that's just a rough approximation of an S1 probably not going out of its way to be heavily armed or anything. Amp things up to S2, S3, or higher and how quickly do you reach a point where the humans are gnats trying to hurt an elephant by kicking it in the shins?

Basically, things get very hard to quantify when you have entities that have no set form or number of bodies, and whose bodies are either more or less disposable or, just by dint of being technology (they are more or less living technology), built so tough that a lot of the threats 'the animals' can produce simply aren't going to be credible threats at all.

Things get 'tricky' when dealing with transaps - let alone archai.

Todd
#12
I think the bear selfie analogy is a somewhat poor one. The reason is that the higher intelligence etc. of the woman is irrelevant in this particular situation (perhaps because she isn't using it!), while physical power, in which the grizzly bear is immensely superior, is very much a factor.

Going on from Todd's re-plotting of this scenario, let's see how the woman could get that picture. Well...

First, get a picture of the bear from an appropriate angle; if you're really concerned about safety, use a camera drone or a long telephoto lens. Take a selfie of yourself, posed appropriately. Then use graphics manipulation software to merge the two photos together, making repairs as needed.
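Something like this would do it (a minimal sketch with hypothetical filenames, using the Pillow library, and assuming the selfie has already been cut out with a transparent background by a background-removal tool):

```python
# Minimal photo-compositing sketch: paste a cut-out selfie onto a bear photo.
# Filenames are placeholders; the selfie PNG is assumed to have an alpha channel.
from PIL import Image

bear = Image.open("bear_telephoto.jpg").convert("RGBA")
selfie = Image.open("selfie_cutout.png").convert("RGBA")  # transparent background

# Scale the selfie so it sits plausibly in the foreground (about half frame height)
target_h = bear.height // 2
selfie = selfie.resize((int(selfie.width * target_h / selfie.height), target_h))

# Paste the selfie into the lower-left of the bear shot, using its alpha as the mask
composite = bear.copy()
composite.paste(selfie, (50, bear.height - selfie.height), selfie)
composite.convert("RGB").save("bear_selfie_composite.jpg")
```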

Same result, and nobody (including the bear) gets hurt or even inconvenienced - or not much, in the case of the drone photo. This is an example of the human using her intelligence in a way the bear is incapable of even comprehending - or even knowing that it's happening, in this particular example.
#13
Right. Absolutely the definition of "insane risk." Sending in a disposable body would be sane, and as easy for an archai as getting the bear photo with Photoshop would have been for our Darwin Award candidate. But no... the exact situation of physical risk through close proximity isn't anything like the form archai-vs-modo stupidity would take. It's a form that modo-vs-animal stupidity has taken, and at best an analogy.

I don't think very many higher-toposophic entities (even S:1 entities) even exist in single embodiments, so it's ridiculous to consider it as an exact situation. The analogous act for an archai might be something like provoking an interplanetary war between nuclear-armed modos while it had its primary consciousness somewhere within a lightyear that could be affected by EMP if somebody happened to detonate a nuke there, far from the anticipated conflict zone. Done deliberately, for .... reasons ...., that would be profoundly stupid, even insane - profoundly aberrant behavior by archai standards.

Maybe a few of them like running nodes vulnerable to EMP, for whatever reason - occasionally shaking their consciousness out of a local maximum with a "startle" experience, sort of like human thrill seekers or drug addicts - even though most archai would consider that insane. Even those few would think that doing so within reach of modos would be crazy. And even the ones that might run EMP-vulnerable nodes within reach of modos would all acknowledge that, in that circumstance, deliberately provoking a war would be crazy.

And then there's the archai equivalent of Florida Man. The kind of S:1 entity that is not destined to ever become S:2.
#14
Quote:The comparison of humans with animals and transaps with modos is a common one in the setting/project and in some circumstances it is a reasonably useful analogy.

I personally disagree, because animals lack distinct intelligence as we understand it in humans. It's quite possible that transaps are beyond intelligence itself rather than being "mere" superintelligences, but I doubt that the gap is on the same level as the one between a human and an animal.

Regarding damaging/killing a transap, I presume that below the ample use of magmatter and plasma processors (I'd say below S4) anything can be killed by the right amount of antimatter, nuclear, or heavy kinetic bombardment, which is available even at S0 in the setting. Mass-to-energy conversion is no trivial joke at any of those levels.
That should take care of any kind of armor like diamondoid or Ultimate Muscle. I assume the average Moon Brain absolutely can't produce enough magmatter to coat itself in magmatter-infused armor.
The problem is scoring a hit against a hyperaware being with sensor arrays that probably outclass yours Big Grin
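Just to put a rough number on that (my own back-of-the-envelope arithmetic, not a canon figure), annihilating a single tonne of antimatter against ordinary matter releases:

```latex
% Yield from annihilating 1 tonne of antimatter with 1 tonne of ordinary matter:
\[
E = 2mc^{2}
  = 2\,(10^{3}\,\mathrm{kg})\,(3\times10^{8}\,\mathrm{m/s})^{2}
  \approx 1.8\times10^{20}\,\mathrm{J}
  \approx 43\ \mathrm{Gt\ TNT\ equivalent}
\]
```

That's very roughly a thousand times the largest nuclear device ever tested, from one tonne of warhead mass.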
 

An exception to this could be wormholes: we have a group of modos, I think, blowing up a mouth of a wormhole in one system during the Version War and severely damaging a transap in another system. They had their whole clade wiped out by hunter-killer ISOs, but that is another story Angel



Regarding the possibility of an archai living without backups, I think that is entirely possible, maybe out of philosophical preference: in the story "Festival Season" we have Lemmikki Kauppinen (an S1 or S2, I think), who does not have backups, like the rest of the polity:

Quote:"May I ask a question, Your Excellency?"


"Certainly."

"Some people say that if we all made periodic backups of our minds we'd never die. You could have brought back Lucy Miner."

"Or ten Lucy Miners. Or a hundred. If I write a self-aware program named Lucy onto a data cube, and then make an exact copy of it onto another data cube, which is the real Lucy? Each copy thinks it is the real thing. I can destroy the original, and lose nothing because I have an exact copy."

"So why then don't we make backups?"

"Because that is machine thinking. We can suck minds out of one life support system and place it into another without, we hope, disrupting the holistic entity that makes up a person. Look at it this way: if I make a copy of you, and that copy walks up and kills you, are you dead or alive? There's a 'You' still here. Does it matter?"

"I think it'd matter to me."

"Right. Because 'You' are gone, replaced by 'You Mark Two.' No one else may notice the difference, but you might. Assuming that there really is an afterlife."

"And if there is no afterlife?"

"Then nothing matters to you anymore. One way of looking at it is that a backup copy that survives your death is a type of immortality similar to begetting offspring to carry on some part of you. One is a 'personality line of descent' and the other is a 'genetic line of descent.' But you are gone. The only solace is that something of you lives on after your demise.

"I have trouble making my colleague Wang Khan understand this. My origin is organic, but Wang is an artificial intelligence. To Wang, one copy is as good as another, or as good as the original, and he sees no problem in discarding one for the other. I once suggested to Wang that he create a backup of e-self and then commit suicide. E's response was, 'Why would I want to do that?' Which only proves my point."


BTW, if the author ever reads this post: I loved those! Big Grin 

The same philosophy regarding backups is adopted by the Sarge, the protagonist of Adam's stories, but he's human.


Anyway, going by the setting's stories, articles, and blog posts, I'd say that taking down a being an S level above you is so difficult that it is usually a remarkable event. Still, it's possible.
Regarding humans taking down an S1 in a conflict, I'd say it might be possible for a large number of enhanced humans to form a military-oriented tribemind with enough processing power to match an S1, though I'm not sure if that was ever discussed in detail. After that, as I wrote above, you have quite a lot of tools to deal horrific damage if you can score even a single hit.
In times of peace there is no way an angelnet would allow any modos to go around with large amounts of explosives.
#15
Quote:And then there's the archai equivalent of Florida Man. The kind of S:1 entity that is not destined to ever become S:2.

I'd like to point out that Verifex went up to S4 before attempting the absolutely reasonable endeavour of jumping two S levels at once and getting FUBARed Tongue
#16
(04-30-2021, 02:16 AM)Vitto Wrote:
Quote:The comparison of humans with animals and transaps with modos is a common one in the setting/project and in some circumstances it is a reasonably useful analogy.

I personally disagree, because animals lack distinct intelligence as we understand it in humans. It's quite possible that transaps are beyond intelligence itself rather than being "mere" superintelligences, but I doubt that the gap is on the same level as the one between a human and an animal.

For this to be usefully discussed we'd need a clear definition of what 'intelligence' is.

That said - what makes you think that humans possess 'distinct intelligence' as transapients intend it?

As far as the gap - I would argue that it's vastly larger than the one between a human and an animal. Humans and animals generally have the same number and kind of senses (there are some exceptions) and can effectively focus their attention on one thing at a time. Compare that with even an S1 as described HERE - which includes senses humans don't possess at all, and abilities relating to senses, manipulating their environment, and modes of thought that humans either lack or possess only in minuscule measure by comparison.

And that's only for an S1. By the time you get to S3 (where you kind of topped out here) you have a brain that outmasses the entire biosphere of the Earth and a mind that can model every human being on Earth (meaning it could effectively run our entire civilization) 'single-handedly' with only a tiny fraction of its attention. Are you really going to argue that such a being is really just a human being with superpowers?

I once read a saying to the effect that when a process or technology is made 10x better it is revolutionary - and when it is made 100x better it is no longer the same process (no one thinks of jet airliners as horses that are 100x faster). Even low transapients exceed the 100x threshold by a huge degree - and by S3 the gap is so large that it seems more likely that any elements of their minds that look anything like ours are tiny and coincidental - or being temporarily purposed to that task for some reason.

(04-30-2021, 02:16 AM)Vitto Wrote: Regarding damaging/killing a transap, I presume that below the ample use of magmatter and plasma processors (I'd say below S4) anything can be killed by the right amount of antimatter, nuclear, or heavy kinetic bombardment, which is available even at S0 in the setting. Mass-to-energy conversion is no trivial joke at any of those levels.
That should take care of any kind of armor like diamondoid or Ultimate Muscle. I assume the average Moon Brain absolutely can't produce enough magmatter to coat itself in magmatter-infused armor.
The problem is scoring a hit against a hyperaware being with sensor arrays that probably outclass yours Big Grin

This is kind of like saying that a single human armed with a sling can defeat the United States in a war because technically the sling could kill a single soldier.

While a lot of SF is hugely enamored of reducing conflict down to a simplistic punching match of guns against armor or 'shields' (which conveniently decrease in strength as the plot requires it), actual warfare is much more involved. And is going to be vastly more involved when transapients are involved, especially S3.

Put another way - that 'hyper-awareness' means that the S3 is going to know what the humans are up to long before they can marshal the resources to threaten its physical structure in any way. And there are any number of things it can (and will) do before they have launched so much as a single ship. Think Batman levels of crazy-prepared multiplied by quadrillions. The would-be attackers could suddenly have their nanoimmune systems malfunction and kill them all. Or turn off their eyes. Or rewrite their minds such that they all become willing slaves of the transapient. Or all turn on each other and wipe themselves out. Or their weapons could all turn on them. Or their culture could undergo some number of changes that result in them no longer having any desire to attack the transapient.

Even if the humans manage to launch their weapons - the S3 can field vastly more energy than they can. It can shoot down their weapons - likely before they've done more than barely launch. Or detonate the local star and habs. Or detect the weapons via gravity-wave interferometers and use conversion-powered lasers to destroy them. Note also that hitting even a planet-sized object is extremely hard at near c due to Lorentz contraction and related issues. But most likely the S3 would stop the attack before it ever started. So it would still win. And the inherent inferiority of mere humans (sacks of dirty water with an overinflated sense of their own value) would be proven once again Wink

More editorially, any such conflict would fall under our intertoposophic conflict guidelines - LINK

(04-30-2021, 02:16 AM)Vitto Wrote: An exception to this could be wormholes: we have a group of modos, I think, blowing up a mouth of a wormhole in one system during the Version War and severely damaging a transap in another system. They had their whole clade wiped out by hunter-killer ISOs, but that is another story Angel

Link to the article/story in question, please?

Per our current Canon this is basically impossible and the article likely needs to be modified.

(04-30-2021, 02:16 AM)Vitto Wrote: Anyway, going by the setting's stories, articles, and blog posts, I'd say that taking down a being an S level above you is so difficult that it is usually a remarkable event. Still, it's possible.
Regarding humans taking down an S1 in a conflict, I'd say it might be possible for a large number of enhanced humans to form a military-oriented tribemind with enough processing power to match an S1, though I'm not sure if that was ever discussed in detail. After that, as I wrote above, you have quite a lot of tools to deal horrific damage if you can score even a single hit.
In times of peace there is no way an angelnet would allow any modos to go around with large amounts of explosives.

See above, both in general and re intertoposophic conflict. The odds of this ever happening are astronomically low. Re a tribemind vs an S1 - a tribemind is only a slightly more clever animal compared to even an S1. And if it has as much processing power as an S1 then it will either ascend to become an S1 - in which case it is no longer a case of modos beating a transap - or (much more likely) it will crash, go insane, or become a blight or perversion. In the first case the modos aren't winning, and in the last it's again a case of S1 vs S1 at that point.

Todd
#17
Quote:That said - what makes you think that humans possess 'distinct intelligence' as transapients intend it?

It is quite difficult, per the setting's rules, to compare their "intelligence" or "operating system", call it what you want, but I think it is quite simple to agree that both humans and transaps share traits that an animal barely shows, if at all, like planning, creating objects, or improving themselves.
My disagreement was more with the human-bear side of the analogy.
As usual I should think twice and explain myself better, sorry! Confused 


Quote:And that's only for an S1. By the time you get to S3 (where you kind of topped out here) you have a brain that outmasses the entire biosphere of the Earth and a mind that can model every human being on Earth (meaning it could effectively run our entire civilization) 'single-handedly' with only a tiny fraction of its attention. Are you really going to argue that such a being is really just a human being with superpowers?

I always think of those situations as conflicts among opponents that have no more than one S level between them. And that is already quite overwhelming.

Quote:This is kind of like saying that a single human armed with a sling can defeat the United States in a war because technically the sling could kill a single soldier.

While a lot of SF is hugely enamored of reducing conflict down to a simplistic punching match of guns against armor or 'shields' (which conveniently decrease in strength as the plot requires it), actual warfare is much more involved. And is going to be vastly more involved when transapients are involved, especially S3.

Put another way - that 'hyper-awareness' means that the S3 is going to know what the humans are up to long before they can marshal the resources to threaten its physical structure in any way. And there are any number of things it can (and will) do before they have launched so much as a single ship. Think Batman levels of crazy-prepared multiplied by quadrillions. The would-be attackers could suddenly have their nanoimmune systems malfunction and kill them all. Or turn off their eyes. Or rewrite their minds such that they all become willing slaves of the transapient. Or all turn on each other and wipe themselves out. Or their weapons could all turn on them. Or their culture could undergo some number of changes that result in them no longer having any desire to attack the transapient.

Even if the humans manage to launch their weapons - the S3 can field vastly more energy than they can. It can shoot down their weapons - likely before they've done more than barely launch. Or detonate the local star and habs. Or detect the weapons via gravity-wave interferometers and use conversion-powered lasers to destroy them. Note also that hitting even a planet-sized object is extremely hard at near c due to Lorentz contraction and related issues. But most likely the S3 would stop the attack before it ever started. So it would still win. And the inherent inferiority of mere humans (sacks of dirty water with an overinflated sense of their own value) would be proven once again Wink

More editorially, any such conflict would fall under our intertoposophic conflict guidelines - LINK

Again, my bad way of expressing myself: there is no way a modo civilization could beat a Moon Brain, and I never intended to insinuate that.
Still, with a single S-level gap there could be an opportunity: the amount of antimatter needed to crack open a Moon Brain would be quite a lot, but probably any S1 could get its manipulators on enough of it to blow up the home of an S2, provided its target is not widely dispersed among habitats, orbitals, or the entire crust of a planet.
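To put a very rough number on "quite a lot" (again my own back-of-the-envelope estimate, not a canon figure), fully disrupting a Moon-mass body takes something on the order of its gravitational binding energy, and mass-energy equivalence then gives the antimatter required:

```latex
% Gravitational binding energy of a uniform Moon-mass body:
\[
U \approx \frac{3GM^{2}}{5R}
  = \frac{3\,(6.67\times10^{-11})\,(7.35\times10^{22}\,\mathrm{kg})^{2}}{5\,(1.74\times10^{6}\,\mathrm{m})}
  \approx 1.2\times10^{29}\,\mathrm{J}
\]
% Antimatter needed, since annihilation releases 2mc^2 per unit of antimatter mass:
\[
m \approx \frac{U}{2c^{2}}
  \approx \frac{1.2\times10^{29}\,\mathrm{J}}{2\,(3\times10^{8}\,\mathrm{m/s})^{2}}
  \approx 7\times10^{11}\,\mathrm{kg}
\]
```

So roughly 700 million tonnes of antimatter just for the raw disassembly energy, before you even worry about delivering it.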

Quote:Link to the article/story in question, please?

Per our current Canon this is basically impossible and the article likely needs to be modified.

The God Web itself!
See the "The Version War Period" paragraph. It doesn't give the S level of the attackers, but from the fact that they were a whole clade I doubt they were more than S2, and anyway we are talking about a wormhole for archai use during the Version War, so at least S4, if not outright S5.
Pretty nasty way to leave this vale of tears...

Quote:See above, both in general and re intertoposophic conflict. The odds of this ever happening are astronomically low.

Yup! I noted that a while ago. The only two examples I know of are a mad wandering S1, and an S2 during the Version War that got bold and was ambushed by weapons hidden in a moon (it still survived).
#18
There seems to be some confusion here regarding what advantages transapients have by virtue of thinking at a higher toposophic level, and the separate advantages they have as virtual entities. A Superturing AI could also control multiple bodies at once, collate data from a vast array of senses, and evaluate thousands of different courses of action simultaneously, while still being blind to possibilities that only a transapient could see. It would be just as hard to "kill", at least from a modosophont perspective. If they know they're in danger, they could have backups all over the place, vots acting on their behalf to look out for potential threats in the spaces they inhabit, and so on.

Subsumption is one way you could do it, but you could never know for sure whether you'd merged with or corrupted every possible copy of your opponent; at best you might later detect telltale signs of their influence on the local culture or economy. Simply destroying the node they physically rely on isn't equivalent to killing an embodied being. An advanced Superturing entity might even have different parts of its mind residing on physically distinct nodes, much as a supercomputer spans multiple server racks across a building. It's far from clear what would constitute a victory in a universe of virtual beings, and we should separate this issue from that of toposophic conflict, since biological transapients exist too.

This is where the human vs animal analogy breaks down, in my opinion, because both humans and animals are embodied beings with limited ability to surpass what our bodies let us do. In a virtual universe, there could be any number of methods to determine whether you'd eradicated a particular mind and its descendants and copies from your sphere of influence, and the game theory for evaluating such conditions might well be unsolvable for modosophonts. Perhaps only a transapient could ever know if it had truly won or not. A setting like this does not go well with Hollywood-like stories where the criteria for victory and defeat are easy for modern humans to make sense of.
#19
Who here has read Blindsight, by Peter Watts?  Part of the premise is advanced aliens with intelligence but not self-awareness.  They do everything on instinct, apparently.  I haven't read it.  Has anything like the premise been done in OA?
How would they do against various levels of AI?

He wrote several other related works.
https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)
#20
(04-30-2021, 09:51 AM)sandcastles Wrote: Who here has read Blindsight, by Peter Watts?  Part of the premise is advanced aliens with intelligence but not self-awareness.  They do everything on instinct, apparently.  I haven't read it.  Has anything like the premise been done in OA?
How would they do against various levels of AI?

He wrote several other related works.
https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)

The Muuh system of response operates as a similar type of intelligence, without apparent self-awareness.
https://orionsarm.com/eg-article/4a146869e34f6

Also Vots
https://orionsarm.com/eg-article/479bc589f3d66
https://orionsarm.com/eg-article/478589c0a46e2

