Posts: 11,753
Threads: 454
Joined: Apr 2013
01-30-2015, 08:53 AM
(This post was last modified: 01-30-2015, 08:54 AM by stevebowers.)
Quote:Why should humans create AIs with such a strange psychology? In my opinion we should deliberately give them the capabilities (memory, cognitive abilities) and the social skills of a baseline human baby first.
These are noble goals for an AI program, but they are likely to be a lot trickier than we think. The human mind is the most complex thing we know of at present, and researchers are barely scraping the surface of this complexity. Even if we can make realistic human minds, they will need a decade or more of education before they are really useful.
By the time we can emulate human behaviour and human learning patterns in an AI, we will probably have found ways to make a wide range of useful minds that only vaguely resemble humanity. Perhaps these non-human minds will resemble the minds of ultra-complex insects, or of schools of fish, or of meta-conscious search engines; but with proper programming they will still be able to hold conversations with humans.
As far as the Turing Test goes, I don't think it is a very good yardstick for detecting a sentient AI. Human behaviour can probably be closely emulated using a very large database of human behaviours and a set of fairly complex rules for choosing between them. To tell the difference between a true sophont AI and a semi-conscious intelligence, or even a completely unconscious one, it might be necessary to observe such a being for years or decades.
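To make that "database plus selection rules" idea concrete, here is a minimal, purely hypothetical sketch in Python of a retrieval-based responder: it picks a canned human reply by crude keyword overlap. The database entries and the scoring rule are toy examples of my own, not how any particular chatbot actually works, but they show why such a system can hold up in casual conversation without anything like understanding behind it.

Code:
# Toy sketch: a "very large database of human behaviours" (here, a tiny
# list of cue-word/reply pairs) plus a selection rule (keyword overlap).
# All data and names are hypothetical, for illustration only.

BEHAVIOUR_DB = [
    ({"hello", "hi"}, "Hi there! How's your day going?"),
    ({"weather", "rain"}, "Ugh, tell me about it. I forgot my umbrella again."),
    ({"conscious", "aware"}, "Honestly, I wonder about that myself sometimes."),
]

FALLBACK = "Interesting, tell me more about that."


def choose_reply(user_text: str) -> str:
    """Selection rule: return the canned reply whose cue words overlap
    most with the user's words; otherwise fall back to a vague stock phrase."""
    words = set(user_text.lower().split())
    best_reply, best_score = FALLBACK, 0
    for cues, reply in BEHAVIOUR_DB:
        score = len(cues & words)
        if score > best_score:
            best_reply, best_score = reply, score
    return best_reply


if __name__ == "__main__":
    print(choose_reply("Do you think machines can be conscious?"))

A real system would need a vastly larger database and far subtler selection rules, but the principle is the same: convincing surface behaviour, with no obvious way to tell from a short conversation whether anything is going on underneath.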
Posts: 725
Threads: 32
Joined: Mar 2013
I think that part of the point of the Turing test as a gedankenexperiment is to emphasise that telling whether anything other than you (including another human) is "really" conscious is an impossibility.
Typing words on this forum and getting answers is in itself an impromptu Turing test. I've never met anyone else on this forum in the flesh (at least, not knowingly), so from my point of view the entire forum could be a hyper-complex weak AI designed to fool me into thinking there are real people typing their replies. (I hope it is obvious that I don't really think that!)
And even if you are talking to another human, there is still the possibility that the other human is actually an extremely carefully programmed zombie with no actual consciousness. Barring telepathy, there is no way to know.
But none of that matters. "If it walks like a duck and quacks like a duck, then it's a duck." Consciousness is arguably irrelevant to AI discussions.
Posts: 11,753
Threads: 454
Joined: Apr 2013
Curiously enough, one of the earliest AI minds described in OA is supposed to be an on-line correspondent presumably masquerading as a human:
Hal
By 134 AT the technology of artificial intelligence would be quite sophisticated, so I expect that Hal would be quite convincing.
Posts: 268
Threads: 19
Joined: Mar 2013
02-03-2015, 05:35 PM
(This post was last modified: 02-03-2015, 06:02 PM by stevebowers.)
Right. The Turing test is often portrayed as an intelligence test for AIs. It isn't; it is an "imitation game": can a computer fool a human into thinking it is another human?
Evidence separates truth from fiction.