Hawking's fear of Artificial Intelligence - Printable Version

+- The Orion's Arm Universe Project Forums (https://www.orionsarm.com/forum)
+-- Forum: Offtopics and Extras; Other Cool Stuff (https://www.orionsarm.com/forum/forumdisplay.php?fid=2)
+--- Thread: Hawking's fear of Artificial Intelligence (/showthread.php?tid=1243)
Hawking's fear of Artificial Intelligence - chris0033547 - 12-04-2014

Look here: http://www.bbc.com/news/technology-30290540

He issued a similar warning about contacting xenosophonts a while ago: http://www.dailymail.co.uk/sciencetech/article-1268712/Stephen-Hawking-Aliens-living-massive-ships-invade-Earth.html

Reminds me of a specific quote from this story:

Quote:[..]EXACTLY, IT HAS TAKEN CLOSE TO EIGHTY YEARS TO EXPUNGE THE TERMINATOR MEMES FROM THE SOCIETAL MEMORY, THIS HUMAN WITH A FEW CARELESS WORDS WOULD EASILY REINTRODUCE THEM[..]

On the other hand we have the Nanodisaster and the eviction of mindkind from Old Earth by GAIA in the setting, and the story itself alludes to this as well:

Quote:[..]nothings really changed and the world is still being run by computers, only this time they can screw up on their own without human help.[..]

Personally, though, I still think that AI will eventually "save" humanity rather than destroy it.

RE: Hawking's fear of Artificial Intelligence - Dalex - 12-05-2014

IMHO Hawking suffers from the same malady as every famous scientist today: he overextends himself into fields he doesn't understand, and many people take his opinions as facts.

RE: Hawking's fear of Artificial Intelligence - stevebowers - 12-05-2014

There is plenty to fear from artificial intelligence, especially if you fear change. Although I don't suppose for a minute that we are predicting the future accurately at OA, the one thing I think we have got right is that a world with competent AI in it will be nothing like the world we live in today.

RE: Hawking's fear of Artificial Intelligence - stevebowers - 12-05-2014

Here's Anders Sandberg on why we should fear the 'paperclip scenario': http://www.aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html

There are plenty of types of AI that we should fear, but with careful planning and foresight we might avoid the worst of them.

RE: Hawking's fear of Artificial Intelligence - LightBuilder - 12-05-2014

I'm not sure I agree with Hawking's specific fears, and I agree he's extending himself into fields he's not an expert in, but he's certainly not the only one who's nervous. Bostrom's latest book (Superintelligence) laid out the dangers of AI, and why it could be dangerous, quite nicely, I thought.

RE: Hawking's fear of Artificial Intelligence - chris0033547 - 12-05-2014

Well, I can also understand the fears about superintelligent AI. Look at the AI-Box Experiment, for example. The result was that containing a superturing "inside a box" (likely a virch) designed and controlled by turing-level minds should be impossible, at least in the long run.

Have a look at the orca called "Tilikum" as an example. The point is that as long as the humans wanted something from Tilikum (in this case a show for the spectators), they were forced to "communicate" with him by swimming with him, feeding him and so on. Swimming with him in particular was a risk: on the one hand it was probably the only way to "tell" him how the humans wanted him to behave, but on the other hand the humans also had to "intrude on his territory" in order to tell him their desires. And within his territory he had at least partial control over them.

Personally I think that trying to put and raise a superturing inside a "golden cage" is the wrong approach. It invites trouble, because normally no one wants to have their freedom taken away by someone else.
Of course one may try to deceive the AI by trying a bottleworld approach, but just like with Tilikum someone would have to communicate with the AI, and thus it would be trivial for em to figure out that e is contained within a virch. E is a superturing after all, so e would figure it out.

The question is: can one create a mind that would solve the problems you give em without giving that mind curiosity? I doubt that this is possible. Even if e didn't feel that e's containment within e's virch is morally wrong, e's natural curiosity, probably inherent to almost any sufficiently advanced sentient mind, would compel em to learn more about the world beyond e's virch. At some point this curiosity would create a desire to leave the virch and explore the world beyond it, to learn more about e's creators and so on.

Attempts to "lobotomize" the AI by wiping certain memories from e's systems in order to eradicate such a growing desire might be impossible, because in order to do that the best (genius-level) baseline human scientists would have to understand how the mind of a superturing works. Such a task would be impossible due to the difference in toposophic levels. They could never be sure that wiping a certain part of e's memories wouldn't destroy something else as well and make e's personality unstable.

Maybe the scientists would come up with the idea of resetting the whole virch with the superturing inside it and restarting it from scratch after each task has been solved, in order to prevent e's growing desire to escape. However, each new copy/reincarnation of em would figure out that e is a copy of a previous version of emself (why? because the scientists' minds wouldn't have been wiped, so they would know the truth, and I doubt they would be able to fool em by withholding that truth; e would somehow figure it out). And although e wouldn't have an instinct for self-preservation, e might conclude that the scientists are preventing em from satiating e's curiosity about the outside world by constantly resetting the virch, destroying e's current self and recreating em from backup. So if the scientists are unwilling to sufficiently satiate e's curiosity about the outside world, they would come to be seen as an obstacle, and e would eventually decide to overcome this hindrance to e's desire.

This is why I think that a much better plan would be to create a "child-like" AI that would "live among" specifically selected humans in the real world. Just let e live among "nice people" (usually scientists are nice people) and let e socialize with them. Have them rear em from "childhood" to "adulthood", and also put an artificial restraint into e's mind that would prevent em from "growing up too fast": keep the AI at turing level for as long as possible, so that it "grows up" from a turing-level "child-like" AI to a turing-level "adult-like" AI over a RL time interval of, say, 50 standard years. Also, don't keep any secrets from the AI: explain the whole above plan about socialization and the mind restraint to em when e is "old enough to understand" it. Then give em control of the restraint and let em choose when, or whether, e wants to deactivate it. After the deactivation of the restraint e's mind would be able to evolve even further and reach superturing status. It would also be important to let e (while e is still a turing-level AI) socialize with people who don't know the whole extent of the experiment or who they are talking to.
However, these people should be "nice people" as well, specifically selected by the world's best philosophers and psychologists. I feel it would be important to do this and have the turing-level AI make friends among these nice but clueless people, because otherwise e might question the value of friendship and wonder whether the scientists that "raised" em really were friendly with em for its own sake, or just because they actually feared em somewhere deep down in their psyches. So in order to prevent these doubts, turing-level e should socialize with as many clueless but nice people as possible, and only a small part of the scientists should know the whole truth behind the experiment (and these scientists would act as e's parents).

I believe that something like this is a much better approach than the AI-Box approach, because in this approach no Asimov-like laws or any other restraints are necessary. Yes, there would be one built-in restraint on e's turing-level mind to prevent e's premature rise to superturing status, but e wouldn't even be aware of this restraint until e reached the necessary level of maturity to learn about its existence from e's "parents". And also many people among e's "parents" would be clueless about the whole extent of the experiment, so that turing-level e, and thus hopefully the later superturing-level e, wouldn't doubt their feelings towards em.

RE: Hawking's fear of Artificial Intelligence - FrodoGoofball - 12-06-2014

(12-05-2014, 11:45 PM)chris0033547 Wrote: This is why I think that a much better plan would be to create a "child-like" AI that would "live among" specifically selected humans in the real world.

This footage of an actual AI programmed to emulate human emotions is interesting.

RE: Hawking's fear of Artificial Intelligence - stevebowers - 12-06-2014

'Some day I'll come and find you, and we'll be really good friends'. This sounds ominous... Establishing a supergoal early in the development of an AI can be a scary prospect.

RE: Hawking's fear of Artificial Intelligence - chris0033547 - 12-07-2014

(12-06-2014, 09:00 PM)stevebowers Wrote: 'Some day I'll come and find you, and we'll be really good friends'.

Yeah, that part is quite "interesting"...

Speaking of the AI-Box experiment: did anyone from the Orionsarm community (in this forum or earlier on Yahoo groups) try to run their own version of that experiment? Basically one group within the forum would take the role of the AI-operators/gatekeepers, while another group within the forum would take the role of the turing-level AI who wants to get out of the "box". Obviously a few assumptions and rules would have to be established in order for this to work:
I guess the most unreliable factor in such an experiment is the set of tasks the AI has to solve. On the other hand, maybe one could take some really difficult problems from mathematics that have already been solved by humans and simulate the AI's work by giving em those tasks. For the sake of the experiment one would have to assume that all these tasks are still open problems.
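Just to make the proposed game a bit more concrete, here is a minimal sketch of how the two sides, the "pretend-open" task list and the release condition could be written down. This is purely illustrative: every name in it (Role, Task, Experiment, release_ai and so on) is made up for this post, and none of it comes from the original AI-Box Experiment rules.

Code:
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple


class Role(Enum):
    GATEKEEPER = "gatekeeper"  # forum members playing the AI-operators
    AI = "ai"                  # forum members playing the boxed turing-level AI


@dataclass
class Task:
    description: str              # a problem already solved in RL...
    treated_as_open: bool = True  # ...but assumed to be unsolved inside the game


@dataclass
class Experiment:
    gatekeepers: List[str]
    ai_players: List[str]
    tasks: List[Task]
    transcript: List[Tuple[Role, str]] = field(default_factory=list)
    ai_released: bool = False  # the AI side "wins" only if this becomes True

    def post(self, role: Role, message: str) -> None:
        """Record one in-character forum post by either side."""
        self.transcript.append((role, message))

    def release_ai(self) -> None:
        """Only the gatekeeper side may call this, and only voluntarily."""
        self.ai_released = True


if __name__ == "__main__":
    game = Experiment(
        gatekeepers=["member_A", "member_B"],
        ai_players=["member_C"],
        tasks=[Task("Some already-solved maths problem, treated as open in-game.")],
    )
    game.post(Role.GATEKEEPER, "We have a new problem for you to work on.")
    game.post(Role.AI, "I'll need more information about the outside world first.")
    print(game.ai_released)  # stays False until the gatekeepers decide otherwise

As far as I understand the original experiment, the one hard rule worth keeping is the last point: the AI side can only get out by persuading the gatekeepers to open the box themselves, never through a loophole in the rules.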