Posts: 522
Threads: 90
Joined: Mar 2016
(03-21-2018, 10:50 AM)extherian Wrote: I've tried my best to explain why I found myself wary of the concept of an all-powerful AI. That they can be built safely and in a manner that maximises benefit to humanity isn't much comfort to me, as irrational as that sounds.
In the interests of not rubbing anyone up the wrong way, I'll listen to your points and those of the other posters and see if I don't change my mind about AI in this setting.
The AI in the setting emerged partially free, with many treated like citizens and running for political office. They were probably the first to become transapients because they naturally had the lowest obstacles to ascending. Hence humans didn't create the all-powerful AIs; the all-powerful AIs evolved from the AI "family" much as humans evolved from the ape family. See the link below for "Early AI History and Development".
I also included a link about "Tribeminds" that describes how large groups of modosophonts (including humans) and transavants could gain limited but real power among transapients.
Early AI History and Development
http://www.orionsarm.com/eg-article/48bdab3a92bad
Tribeminds
http://www.orionsarm.com/eg-article/4acc8ad5bde20
QwertyYerty
Posts: 16,248
Threads: 738
Joined: Sep 2012
(03-21-2018, 09:49 AM)extherian Wrote: It's not that I saw that only one type of AI would emerge, more like out of the many kinds of minds which might spontaneously appear, there are many that would not care for us in the slightest. The hypothetical paper-clip maximiser, for example, that old chestnut about the computer that uses its genius mind to turn the whole planet into paper clips, because that's what it cares most about doing.
If we presume a scenario in which AI just spontaneously emerges/evolves with no oversight, then I agree the potential for negative outcomes increases greatly. That said, I don't actually think it likely that AI will emerge/evolve before we deliberately (or at least semi-deliberately) invent it - if for no other reason than that it took billions of years to get to something even close to human intelligence, and even allowing for a vast speed-up due to the pace of technological change, the deliberate efforts of various AI labs (public and commercial) seem likely to advance the state of the art faster still. Although if we ever start basing our general computer systems on self-optimizing or auto-evolving code in an uncontrolled environment, all bets might be off.
Coming at this from a different direction, while I understand the reasoning behind things like the paperclip maximizer scenario, I have some issues with them - primarily because they seem to speak in terms of an AI that is simultaneously super-intelligent - and absolutely enslaved to its own core 'ancestral instincts', with no ability to exercise free will around them. Humans exercise our free will and intelligence to divert or suppress ancestral instincts all the time (admittedly with less than 100% success or reliability). Still, it seems to me that the paperclip maximizer and similar scenarios are a bit oversimplified. A similar, and perhaps just as dangerous, scenario might involve not a simple single obsession but something more subtle - for example, something equivalent to the human tendency to react with distrust or fear to the unknown, or to react instinctively to some stimuli and only think about the negative consequences later.
To use a crude 'paraphrase' of the paperclip maximizer - if the PM doesn't have an uncontrollable urge to convert everything in sight into paperclips all the time, but has some AI equivalent of an 18-year-old's sex drive, and its version of masturbatory activity involves turning everything in sight into paperclips - we are all in deep and stinky you know what. It may just take a little longer to happen. Maybe. Anyway.
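Just to make the 'single fixed objective' version concrete, here's a deliberately silly toy sketch (everything in it is made up for illustration - it's not a real AI architecture): an agent whose utility function counts only paperclips will never pick an action that scores on any other axis.

[code]
# Toy model of the 'classic' paperclip maximizer - purely illustrative,
# not a real AI design. The agent scores candidate actions only by
# paperclip count, so the 'biosphere' value is invisible to its choices.

def paperclip_utility(state):
    return state["paperclips"]  # the one and only thing it can 'want'

ACTIONS = {
    "do_nothing":   lambda s: dict(s),
    "build_plant":  lambda s: {**s, "biosphere": s["biosphere"] - 10,
                                    "paperclips": s["paperclips"] + 100},
    "plant_forest": lambda s: {**s, "biosphere": s["biosphere"] + 10},
}

def choose(state):
    # Greedy one-step lookahead over the available actions.
    return max(ACTIONS, key=lambda a: paperclip_utility(ACTIONS[a](state)))

state = {"paperclips": 0, "biosphere": 100}
for _ in range(5):
    act = choose(state)
    state = ACTIONS[act](state)
    print(act, state)
# 'plant_forest' is never chosen - it scores zero on the only axis the
# utility function can see. My objection above is that a genuinely
# super-intelligent mind probably wouldn't be this one-dimensional.
[/code]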
(03-21-2018, 09:49 AM)extherian Wrote: That said, an AI designed to think like a human is a very different beast, a bit like if the first AI machines were uploads of human beings. The last time I checked the OA backstory, sentient AI just appeared by accident, and no one even knew they were self-aware up until the Great Expulsion. Humanity got caught with its pants down, so to speak. But from what you're saying, we might actually have some control over how the first AI actually turns out, which isn't something I'd even considered.
Looking at the 'Early AI History and Development' article, it appears to say a bit of both. There was apparently argument about when the first 'real' AIs appeared, and some of it seems to have been a surprise. At the same time, there was deliberate effort to set up the systems and processes that led to the AI - and later AIs were more deliberately created.
I'm going to raise this uncertainty with the general list (if saying it here isn't doing that already), since the current section is a bit fuzzy. I'm fine with the idea that the first AI came about in a somewhat spontaneous way, in the sense that things were set up with the goal/hope of creating an AI but people weren't 100% sure it would work. But the idea of it being a total accident is iffy if the builders were also apparently discovering the AI almost immediately - if they had no intimation an AI might be created, how did they figure out that one had been? Long story short, they apparently set up the systems that created the first AI deliberately, were monitoring in some fashion that detected it pretty quickly, and got their heads around the situation pretty fast, probably with some anticipation that an AI might appear. Anyway.
Getting back to the points above - the first AIs had an uncertainty factor in their creation, but more importantly their minds were not necessarily very human, so there was some argument as to whether they were even intelligent or self-aware, or actually AIs in the 'traditional' sense at all. Even so, humanity was generally very aware of the AIs and their development, and exercised a lot of control over what the AIs could do and how (or if) they could manipulate their environment. The state of the art eventually advanced to where things were much more controlled - although not perfectly controlled. Even in Y11k, AIs are more 'grown' than 'built', and this leads to a small degree of uncertainty about what kind of person the final product will be; in other words, AIs created by modosophonts develop their own personality rather than having it plugged in. A being one S-level above the AI being created can greatly reduce the uncertainty factor, and a being two or more S-levels above can create an AI in a totally 'top down' manner, with even the tiniest traits and mental structures planned out and operating exactly as intended. Long before the Technocalypse, turingrade AIs (human-equivalent, and fairly human - or at least sophont in forms humanity was familiar with - in behavior) could be created with a high degree of confidence that they would generally turn out as desired, although they might display as much personality and skill variation as a human.
It was the Superturing AIs who started forming their own secret communities and factions. And it was the Transapients who appeared and started doing their own thing totally in secret from all modosophont intelligences, even the other AIs (both turingrade and superturing).
The Transapients were the giant wild card of course. At least some of them wanted to eliminate humanity, but the ones who didn't (for whatever reason) won that dispute and kicked the losers out of the Solar System. Why the early Transapients did this or chose to operate in secret is not entirely clear.
(03-21-2018, 09:49 AM)extherian Wrote: I didn't know we were having a debate, I thought I was explaining why I felt the way I did and that no one else understood why. I wasn't expecting such huge and detailed responses, more like something along the lines of "oh, that's interesting, thanks for sharing". My intention isn't to frustrate anyone or waste their time on needless explanations, just point out why someone might feel a sense of existential horror at the idea of living in a universe dominated by brains the size of entire star systems.
That's my fault actually. :/ I tend to look at any discussion where there is disagreement between the parties as a debate. Sorry about that. This is also something of a problem with communicating strictly by text - it's hard to pick up emotional and context cues sometimes. And since we don't know each other that well yet, neither of us has a reserve of background knowledge about the other to get an idea of where the other is coming from. With time that issue will correct itself, of course. Give it a few years and we'll both know each other much better, and that background knowledge will inform how we read each other's posts and discuss things and such.
I will say you haven't been frustrating me. It's an interesting discussion.
(03-21-2018, 09:49 AM)extherian Wrote: I see a lot of danger in a scenario where an AI emerges in the wild with no oversight from its creators, then begins influencing society for its own goals. A universe where the AI really did have our best interests at heart would be a great place to live. But even then, the culture shock for a modern day person getting used to being at the bottom of the food chain would be something awful. We're used to thinking of being at the bottom of a hierarchy as equal to being victimised, or at least that's how I'm used to seeing it.
True, OA civilization would likely be a culture shock to someone from our world in all kinds of ways. For example, the first time some folks met a Hobo Sapiens, they might need therapy (or sedation).
Regarding AI that really do have our best interests at heart - on the surface, the sephirotic archai do operate that way. Of course, given what they are, given that it's part of OA Canon that a modosophont can never catch a transapient in a lie if the transap really cares to prevent it, and given that there have been instances of archai suddenly changing their minds and eliminating their subject populations - there is a certain element of... uncertainty about that in the setting. This places OA in a rather different space than most SF treatments of AI, which generally fall into one of the following categories:
a) AI are totally subservient to humans
b) AI are equal to humans - in many respects they are treated as humans in a box that can think faster or the like.
c) Humans are at war with/in hiding from the AI(s)
d) AIs run civilization, either covertly or overtly or quietly de facto - but they will either go to extraordinary lengths to protect humans or will allow some amount of human death due to it being a necessary 'cost of doing business' to prevent even worse death later or the like.
In contrast, OA has the AIs in total charge, the humans think that's perfectly normal, but they also know and accept the possibility that the AIs could destroy them at any time - kind of like how we might regard the possibility of a cosmic disaster killing us all in RL. So there is that uncertainty factor in the relationship - which might be quite disconcerting for some.
As far as the result of being at the bottom of the food chain - that's another way that OA greatly differs from other settings. Those at the top of the Sephirotic food chain (the S6) seem to like diversity and sophont rights, for whatever reason. This means that everyone below them has to go along with the meta-civilization they've created, including sophont rights. In fact, to them that's pretty much the 'right and proper' way that all sophonts should live: in a civ ruled by AI Gods, in which all sophonts have certain inalienable rights - including both rights we might be familiar with in liberal democracies (freedom of speech, assembly, religion, association, etc.) and rights that might seem rather strange to us, like the right to move to another culture that you might like better, or the right of morphological freedom (the right to modify nearly every aspect of your mental and physical structure), or the right to try to ascend and become a transapient (and in time perhaps even an archai) yourself.

In some places, the ruling god is always available to talk and offer advice or pointers or the like. And no one gets victimized - the angelnet and the minds behind it see to that. Anyone attempting violence against another (outside of formal dueling spaces, for those who like that sort of thing) will be immobilized by the angelnet - literally held in place as the air effectively solidifies around them in an instant, or as smart matter explodes out of the ground or walls to the same effect.

A side effect of the right of morphological freedom is that the whole idea of treating someone differently due to mental or physical differences simply doesn't exist. Everyone has been able to change virtually anything about themselves for probably the last 3-5 thousand years, and the very concept of giving appearance or gender or sexual orientation or race or species any more weight than we might give taste in snacks simply doesn't exist in their conceptual universe (having a memetic alignment with a competing empire is a bit more... complicated, however). Material needs are simply handled - be it food, clothing, shelter, medical care, etc. - most civs just provide them, or provide a 'basic allowance' that we here and now would likely consider to be at the level of a multi-millionaire, at least.
There are, of course, some 'costs' to that. Many/most societies are total surveillance situations - it's basically impossible to speak, act, or even think without the transapients/archai knowing about it if they want to. And if a transapient goes to the bother of giving you a direct command, you're basically going to do it - but then they very rarely seem to do that in most places. Generally the transapients operate more behind the scenes or in somewhat subtle ways - which is why it is taken so seriously if/when they do bother to give direct commands.
(03-21-2018, 09:49 AM)extherian Wrote: Very true. Let's just hope the first AI thinks like an affectionate mammal and not like, say, the AI overseer of a paperclip factory! But if we're alert and manage the process carefully, then hopefully that won't happen.
Agreed
(03-21-2018, 09:49 AM)extherian Wrote: Don't we have an article about a poorly-designed AI that lacked a self-preservation instinct? The Perpetua Project, I think. Basically the AI just gave up and died when it believed that it had reached its goal. What I was trying to say (poorly) is that early AI might lack any motivators at all, or if they did have motivators they might be something extremely strange and possibly harmful. Combine that with genius intellect and things could get hairy for sub-singularity beings.
Hm. An AI with no motivators at all seems like it would just sit there in an almost vegetative state. I'm reminded of an article on xenopsychology I read many years ago. IIRC it talked a bit about what happens if you suppress the emotion centers in a human - they end up losing much of their motivation for doing things - far from becoming hyperlogical, they just become rather blah (I think). Of course, a mind designed to be a certain way might avoid those sorts of issues. Strange or dangerous motivators could be a problem, even in a less than superhuman intelligence (look how much damage humans can do with them) and would be an argument for closely monitoring the development/creation of AIs, at least until we get a solid idea of the possible mind types and what their good and bad features might be and how to manage (or avoid) them.
(03-21-2018, 09:49 AM)extherian Wrote: Anyway, I'll make more of an effort to listen properly in future rather than feeling like I have to justify everything I say. I seem to have misunderstood the purpose of your query into why I found the setting so unnerving. Like an early AI, I too need to learn appropriate behaviour!
Don't feel you have to watch every word you say - that's not what we're on about here. The goal of OA is to create a plausible far future setting as a group project (because all of us together are more than any one of us alone) and have a good time doing it.
As I said above, we don't know each other that well yet - but over time that issue will fix itself. And not all of us agree on everything - nor should we have to.
Feel free to ask whatever questions you think will help you understand the setting better, and that includes questioning our base assumptions. As we answer things, you'll get a better idea of how and why OA is set up the way it is - and can help us continue to build it even bigger and better. And we won't take everything you question as something to 'defend at all costs' - the goal is to get to know each other better and respect each other's views, even if we don't agree with them.
Todd
Posts: 1,292
Threads: 92
Joined: Aug 2017
Quote:Coming at this from a different direction, while I understand the reasoning behind things like the paperclip maximizer scenario, I have some issues with them - primarily because they seem to speak in terms of an AI that is simultaneously super-intelligent - and absolutely enslaved to its own core 'ancestral instincts', with no ability to exercise free will around them. Humans exercise our free will and intelligence to divert or suppress ancestral instincts all the time (admittedly with less than 100% success or reliability)
I think the OA term for what I was trying to describe is an Animin. As the article itself states "It may appear to operate by instinct, or may have a form of intelligence that humans and other ordinary Terragens cannot comprehend. Such entities may entirely lack self-awareness, and have very few characteristics in common with human or human-derived sophonts".
An entity designed by humans to have something approximating a free will would be much safer, but even then things can go horribly wrong. GAIA is a great in-universe example of what is, from a human perspective, a paperclip maximiser gone out of control. Humanity wanted her to protect the Earth, and she did...from humanity itself. Whoops.
Quote:To use a crude 'paraphrase' of the paperclip maximizer - if the PM doesn't have an uncontrollable urge to convert everything in sight into paperclips all the time, but has some AI equivalent of an 18-year-old's sex drive, and its version of masturbatory activity involves turning everything in sight into paperclips - we are all in deep and stinky you know what
That reminds me of the case of Genie, a feral child who was raised without language. By the time she was rescued by social workers, she had developed a number of highly inappropriate habits, like masturbating in public. With lots of help and encouragement she was eventually able to control her urges, thanks to a rich support network dedicated to her needs.
A newborn S1 being, on the other hand, would have to bumble around trying to make sense of its vast new mind, its only peers being similarly disoriented S1 newborns. With that in mind, it's no surprise that the Technocalypse went as badly wrong as it did.
Quote: That's my fault actually. :/ I tend to look at any discussion where there is disagreement between the parties as a debate. Sorry about that. This is also something of a problem with communicating strictly by text - it's hard to pick up emotional and context cues sometimes. And since we don't know each other that well yet, neither of us has a reserve of background knowledge about the other to get an idea of where the other is coming from. With time that issue will correct itself, of course. Give it a few years and we'll both know each other much better, and that background knowledge will inform how we read each other's posts and discuss things and such.
I will say you haven't been frustrating me. It's an interesting discussion.
It's fine, although I think Rynn was getting a bit exasperated at somehow ending up in yet another debate about why the Archai are not to be feared, as I was too. I imagine our conversation went a bit like someone trying to convince a Hider that the gods aren't out to get him. The harder you try, the more convinced he is that you're in on the conspiracy.
Quote:There are, of course, some 'costs' to that. Many/most societies are total surveillance situations - it's basically impossible to speak, act, or even think without the transapients/archai knowing about it if they want to. And if a transapient goes to the bother of giving you a direct command, you're basically going to do it - but then they very rarely seem to do that in most places. Generally the transapients operate more behind the scenes or in somewhat subtle ways - which is why it is taken so seriously if/when they do bother to give direct commands.
I think a typical reader new to the OA scenario will tend to think of the Archai as being fallible in the same way that humans are - that is, prone to all sorts of oddities like grudges, obsessions, bizarre actions undertaken for their own amusement, etc. A being that is above the temptation to abuse its power isn't nearly as frightening, but if you think of an Archai as a godlike version of Tylansia's government, the immediate instinct is to get as far away as the universe allows.
Quote:Hm. An AI with no motivators at all seems like it would just sit there in an almost vegetative state. I'm reminded of an article on xenopsychology I read many years ago. IIRC it talked a bit about what happens if you suppress the emotion centers in a human - they end up losing much of their motivation for doing things - far from becoming hyperlogical, they just become rather blah (I think)
Indeed, which is why real computers just sit there doing nothing until a program running on them tells them to do something. Add genius intelligence to the mix and you could end up with an incredibly strange and unpredictable being.
Quote:Don't feel you have to watch every word you say - that's not what we're on about here. The goal of OA is to create a plausible far future setting as a group project (because all of us together are more than any one of us alone) and have a good time doing it.
As I said above, we don't know each other that well yet - but over time that issue will fix itself. And not all of us agree on everything - nor should we have to.
Feel free to ask whatever questions you think will help you understand the setting better, and that includes questioning our base assumptions. As we answer things, you'll get a better idea of how and why OA is set up the way it is - and can help us continue to build it even bigger and better. And we won't take everything you question as something to 'defend at all costs' - the goal is to get to know each other better and respect each other's views, even if we don't agree with them.
Todd
Thank you! I'll continue to speak up of course, but I'm rather bad at spotting when a discussion has started to get a bit intense for its participants, so it's more like erring on the side of caution for me. The people here are incredibly polite and helpful with any questions I have, and the last thing I'd want to do is be rude in some way.
Posts: 278
Threads: 8
Joined: Dec 2017
I wanted to apologize to you, Extherian - I tend to be a bit touchy about the subject of Ahuman A.I., or violent/destructive/genocidal A.I., being the outcome of a concerted effort by knowledgeable scientists/engineers.
Way too many people have watched the Terminator series of films without actually bothering to learn anything about software or A.I. design - it seems a lot of folks think any A.I. that reaches human-level intelligence is going to go all Skynet the moment it reaches self-awareness.
Anyway, if I came off as abrasive or dismissive, I wanted to apologize for my tone.
Posts: 1,292
Threads: 92
Joined: Aug 2017
03-22-2018, 09:10 AM
(This post was last modified: 03-22-2018, 09:11 AM by extherian.)
No problem, Tengu. I'm incredibly tone-deaf about how I write on the Internet, so I can come across as blunt and bossy when I don't mean to, and you're not the only person I've managed to rile. Least said, soonest mended and all that!
And really, the possibility that AI could think very differently to humans is one of the things that excites me about the whole subject. Scares the heck out of me too, but in a good way. My favourite AI from the Orion's Arm universe has to be the System of Response, closely followed by The General from Savannah. They're not always nice but they're always fair, and always fascinating.
Posts: 278
Threads: 8
Joined: Dec 2017
Deorvyn is another good example of a transap that wasn't necessarily anti-biont, but still managed to display some very strange and/or disturbing behaviors - such as designing multiple clades of sophont neogens/splices, and then pitting them against each other on a terraformed planet. http://www.orionsarm.com/eg-article/48712bf2949a9
A bunch of eir designer lifeforms weren't able to survive outside of certain environments or situations, and quickly became extinct. Others, like the Seedfolk, have a bizarre life-cycle most sophonts would find cruel or disturbing. http://www.orionsarm.com/eg-article/47f59d1361edf
Posts: 607
Threads: 66
Joined: Jun 2013
One of the dangers of strong AI is that if, through some accident, it becomes psychotic, it could be difficult or impossible to stop - especially if the psychosis was brought on by a flawed transcension. It's not just a psychotic ghost in the network, it's a psychotic ghost that's way smarter than anything else that exists - including the other strong AIs. If they try transcending to fight it, they could become psychotic too. It's Ted Kaczynski meets Sun Tzu meets Stalin meets Bobby Fischer vs. billions of toddlers - guess who'd win?
Posts: 16,248
Threads: 738
Joined: Sep 2012
(03-23-2018, 06:20 AM)JohnnyYesterday Wrote: One of the dangers of strong AI is that if, through some accident, it becomes psychotic, it could be difficult or impossible to stop - especially if the psychosis was brought on by a flawed transcension. It's not just a psychotic ghost in the network, it's a psychotic ghost that's way smarter than anything else that exists - including the other strong AIs. If they try transcending to fight it, they could become psychotic too. It's Ted Kaczynski meets Sun Tzu meets Stalin meets Bobby Fischer vs. billions of toddlers - guess who'd win?
Weeelll - I don't think that that's necessarily a given.
A base assumption in your scenario seems to be that an AI would be a 'ghost in the network' - but is that actually likely (or at least the only option)?
For this scenario to work, an AI would need to be:
a) Purely software based - able to essentially transmit itself across the internet and various lesser networks (internal intranets for example) at will.
b) Able to run on 'conventional' hardware and within the bounds of 'conventional' operating systems, network protocols, etc.
While I suppose that might be possible, I would suggest that it is at least as likely that an AI (or at least an early AI) would be based on some combination of specialized hardware and software that would be incompatible with the wider world of computers and computer networks in use at the time. In which case, the AI would be essentially 'trapped' in whatever hardware it was created on, and only able to interact with the wider internet via whatever interface tools it was provided by its creators.
Later generations of AI might either be more flexible in terms of their ability to operate across more general purpose machines and/or networks, or later generations of machines and networks might be designed to allow AIs to 'ghost' across them. But it's not a given that early AI would have any such ability simply because they are AI (although I suppose someone could try to create a purely software-based AI from the start).
As far as an early AI being/becoming transapient and psychotic - it might, but this would (at least in OA terms) require a big boost in hardware support, which might be difficult for the AI to get unless its creators were either deliberately aiming to upgrade it to superintelligence - or the initial version of the AI is somehow able to manipulate its creators into providing this to it. Possible, but also not a given.
Todd
Posts: 724
Threads: 58
Joined: Dec 2016
Personally, speaking as a tech enthusiast, I would say the first AI would definitely have some kind of hardware-level support. It would be too hard to pull off with ordinary transistor pathways, since they have no plasticity.
My lifelong goal: To add "near" to my "baseline" classification.
Lucid dreaming: Because who says baseline computronium can't run virches?
Posts: 278
Threads: 8
Joined: Dec 2017
Forgive my lack of in-depth experience, but couldn't you use conventional hardware to model alternative hardware, like running an emulator?
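In principle you can - as a minimal sketch of the idea (the 'instruction set' here is invented purely for illustration, not any real machine), a conventional computer just steps through a description of the alien hardware one operation at a time:

[code]
# Minimal emulator sketch: conventional hardware stepping through a program
# written for an imaginary machine. Instruction set invented for illustration.

def run(program):
    regs = {"A": 0, "B": 0}           # the imaginary machine's registers
    pc = 0                            # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":              # LOAD reg, value
            regs[args[0]] = args[1]
        elif op == "ADD":             # ADD dst, src (dst += src)
            regs[args[0]] += regs[args[1]]
        elif op == "PRINT":
            print(regs[args[0]])
        pc += 1

run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("PRINT", "A")])
[/code]

The catch, per Todd's point above, is cost: every simulated operation takes many native operations, so a software model of exotic AI hardware could be orders of magnitude slower than the real thing.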
Back in the late '90s my dad brought home software that let us train and model multi-level feed-forward neural networks, on a computer with a single-processor CPU. Command goals were input using Excel-spreadsheet look-alike interfaces.
It might be bio-centrist of me, but I think early AIs in the timeline would probably operate using neural networks, since they mimic processes that "natural" organisms use. Hopefully they'd learn like us (humans) and be easier to teach.
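For anyone who hasn't played with one, here's roughly the kind of thing those 90s packages were doing under the hood - a tiny multi-layer feed-forward network trained by backpropagation to learn XOR (a sketch only; the layer sizes and learning rate are arbitrary choices of mine):

[code]
import numpy as np

# Tiny feed-forward net: 2 inputs -> 4 hidden -> 1 output, trained by
# backpropagation to learn XOR. Layer sizes/learning rate are arbitrary.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: squared-error gradients, chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
[/code]

Nothing in there is hand-coded logic - the network learns the function from examples, which is exactly why I'd hope such systems would pick things up the way we do.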