Posts: 272
Threads: 28
Joined: Mar 2013
Dear all,
After reading the following article in the New York Times:
http://www.nytimes.com/2014/10/19/fashio...s-bff.html
I just have to wonder whether the classical Turing test, used to determine whether an AI has human-level intelligence, is really an "accurate" measurement of an AI's human-like capabilities. I mean, would an autistic person like Gus be able to pass the classical Turing test? If the answer is "maybe not", then maybe this test doesn't prove anything about human intelligence, or intelligence in general, after all. How do you feel about all this? How accurate can an "intelligence test" like the Turing test be if it focuses so heavily on human conversational skills? The Orion's Arm universe even has solipsist AIs in the setting, some of which have transapient capabilities. And yet if a baseline human tried to perform the Turing test on one of them, e could simply ignore the human (in the best case). So from the human's point of view it would be just like talking to an inanimate object, while that "inanimate object" performed calculations and deliberations (like pondering the fate of the universe) completely beyond the capabilities of the human conducting the test. And all the human would think is: well, e doesn't seem to be very intelligent, since e can't talk (like me).
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
Posts: 7,362
Threads: 297
Joined: Jan 2013
It's pretty clear that the original incarnation of the Turing test is flawed. That shouldn't be much of a surprise, given that it was proposed over sixty years ago and computer science has advanced in ways that beggar belief. The assumption behind the Turing test is that if a machine can pretend to be a human then it is of equivalent intelligence to a human, the idea being to separate the complex notion of consciousness from the more practical concept of capability. The problem is that development over the past several decades has given us a wealth of technologies that let us break down tasks previously thought to require human-level intelligence and solve them with relatively simple machines.
So you're right: we can make computers good enough to fool some people, and there are humans who can't pass it. In that sense the Turing test is useless. But we could play around with it for the setting. Our current article on the Turing test is very short and very old. We could perhaps write it up so that the Turing test was revised over time to include multiple (and eventually a huge number of) everyday tasks that a single software package was subjected to, the idea being to apply the label "Passed the Turing test (edition 12)" to show that the software in question was human-equivalent in capability.
Separate from this is the idea of sophonce. Just because a machine is human-equivalent in every field doesn't mean that it has an ego or self-identity. A test of sophonce would be far more useful than a Turing test in dealing with the question of whether a machine is a conscious entity. Just what that test would be I can't guess; I imagine it would involve taking a detailed look at the entity in question's mind, with a sound theory of consciousness to know what to look for.
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 271
Threads: 12
Joined: Apr 2013
10-22-2014, 12:04 AM
(This post was last modified: 10-22-2014, 01:44 AM by stevebowers.)
(10-21-2014, 08:19 PM)chris0033547 Wrote: Dear all,
After reading the following article in the New York Times:
http://www.nytimes.com/2014/10/19/fashio...s-bff.html
I just have to wonder whether the classical Turing test, used to determine whether an AI has human-level intelligence, is really an "accurate" measurement of an AI's human-like capabilities. I mean, would an autistic person like Gus be able to pass the classical Turing test? If the answer is "maybe not", then maybe this test doesn't prove anything about human intelligence, or intelligence in general, after all. How do you feel about all this? How accurate can an "intelligence test" like the Turing test be if it focuses so heavily on human conversational skills? The Orion's Arm universe even has solipsist AIs in the setting, some of which have transapient capabilities. And yet if a baseline human tried to perform the Turing test on one of them, e could simply ignore the human (in the best case). So from the human's point of view it would be just like talking to an inanimate object, while that "inanimate object" performed calculations and deliberations (like pondering the fate of the universe) completely beyond the capabilities of the human conducting the test. And all the human would think is: well, e doesn't seem to be very intelligent, since e can't talk (like me).

While we do have people who have failed the Turing test in RL, and in OA we use "turing-capable" and "turing-equivalent" as synonyms, the two probably are not the same thing, and neither is equal to a sophonce test. A sophont AI might not be able to pass, and a non-sophont might be able to. Still, the test, or rather a revised edition of it, might be useful as a measure of how well a given AI or vec would work with humans.
In OA, though, your average sophont might be better at spotting vecs and AIs, or they might not: a nearbaseline might be coded in a similar way to a machine, and a vec might have auto-evolved out of a botsystem or virch equivalent given enough time, so the lines are blurred a little.
Posts: 272
Threads: 28
Joined: Mar 2013
Yes, a sophonce test seems to be more important in an OA setting than the various versions of the Turing test. Determining whether an entity has consciousness can be really difficult. For example, it's unclear whether the "Rat Brain Robot" is a sophont entity:
http://www.youtube.com/watch?v=1QPiF4-iu6g
As long as the 'Rat Brain Robot' doesn't try to socialize with the scientists in the video, the scientists can't be sure whether e is sophont or not. On the other hand, maybe the scientists can't understand e's attempts to socialize with them. (The subjective impression e makes on me is that e seems to be "confused", or maybe e is "searching for something". No one can tell for sure.)
In any case, I think the first reliable sophonce test will be available long before the first destructive mind-uploading technology appears. It wouldn't make sense the other way round.
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison
Posts: 296
Threads: 24
Joined: Nov 2013
Well, I have had pet rats for several years, and this is pretty much how a rat behaves in a safe environment.
Posts: 16,242
Threads: 738
Joined: Sep 2012
(10-22-2014, 12:04 AM)kch49er Wrote: While we do have people who have failed the Turing test in RL, and in OA we use "turing-capable" and "turing-equivalent" as synonyms, the two probably are not the same thing, and neither is equal to a sophonce test. A sophont AI might not be able to pass, and a non-sophont might be able to. Still, the test, or rather a revised edition of it, might be useful as a measure of how well a given AI or vec would work with humans.
In OA, though, your average sophont might be better at spotting vecs and AIs, or they might not: a nearbaseline might be coded in a similar way to a machine, and a vec might have auto-evolved out of a botsystem or virch equivalent given enough time, so the lines are blurred a little.
It gets even more complicated than that, if we really want to delve into it. Mind takes a vast number of forms and types in Y11k: from AIs, to uploads of various ages and from various environments, to bionts adapted to a huge number of environments, again ranging in age across centuries, and all of them able to modify their minds in various ways, even before we get to the issue of singularity levels.
I would suggest that both 'turingrade' and 'sophont' are probably convenient shorthand for a reality that is very complex and multi-dimensional, incorporating a whole range of variables that combine in different ways to produce different types of mind. Among professional mind creators (or even just dedicated hobbyists) there is perhaps a highly technical vocabulary and notation to communicate this. For the layman, there is generally just one or the other of these very simple terms.
Just some thoughts,
Todd
Posts: 271
Threads: 12
Joined: Apr 2013
Well, the baseline rats wouldn't be sophont, or modo, or turing-grade anyway; you'd have to keep growing the brain tissue, or extend it with computronium, to try to get sophonce.
On the other hand, by OA times they could be part of a hyper-intelligent being. Just because one part is sub-sophont doesn't mean the whole thing is, of course.
Agreed, it gets a lot fuzzier and blurrier by Y11k, though based on the setting's history, AIs and provolves are where this first comes in.
Posts: 725
Threads: 32
Joined: Mar 2013
A bit obvious, but worth mentioning: a notional sapient AI that "wanted" to pass the Turing test would actually need to massively degrade its apparent capabilities when answering some of the possible questions. For example, if the question were something like "please factorise this 56-digit number into its two component primes", a human would take weeks (at least!) to answer, whereas a computer-based AI could do the job in a couple of seconds. (I think!)
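To put rough numbers on that asymmetry, here is a minimal sketch in Python. The use of the sympy library and the scaled-down sizes are my own assumptions, chosen so the factorisation actually finishes quickly; a genuinely hard 56-digit semiprime would need dedicated factoring software (a quadratic sieve or number field sieve implementation such as CADO-NFS), but even that takes a machine minutes rather than the weeks a human would need.

# Minimal sketch: multiplying two primes is instant, and even undoing it
# (factoring) is fast for a machine at modest sizes. Assumes sympy is
# installed; the primes are kept to ~10 digits so sympy's general-purpose
# factorint() finishes in about a second.
from sympy import randprime, factorint

p = randprime(10**9, 10**10)   # random ~10-digit prime
q = randprime(10**9, 10**10)
n = p * q                      # ~19- or 20-digit semiprime

print(n)
print(factorint(n))            # e.g. {p: 1, q: 1}, typically in ~a second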
Posts: 16,242
Threads: 738
Joined: Sep 2012
(10-24-2014, 05:19 AM)iancampbell Wrote: A bit obvious, but worth mentioning: a notional sapient AI that "wanted" to pass the Turing test would actually need to massively degrade its apparent capabilities when answering some of the possible questions. For example, if the question were something like "please factorise this 56-digit number into its two component primes", a human would take weeks (at least!) to answer, whereas a computer-based AI could do the job in a couple of seconds. (I think!)
An AI that was designed to do that kind of math quickly, while also being sapient/sophont, would indeed be able to do the math quickly. But there's nothing that says an AI must automatically be really good at math, or be able to expand or transmit its mind across the internet, or do any of the other things that SF is so fond of having AIs do.
Most likely an AI would run on specialized hardware or software, or some combination of both. Unless that hardware/software included the ability to be really good at math, the AI would be no better at it than a human (although it could probably use a calculator, like a human can). And unless that hardware/software was compatible with whatever is supporting the internet at the time, it wouldn't be able to use the net any more capably than a human could.
Given time and advances in AI design (and in the hardware/software running the future internet), you could create both super math whizzes and minds that could jump around the net. But it seems very unlikely that either would be a first-run capability.
My 2c worth,
Todd