This is a non-story, IMO. This article lays out a good criticism of why: https://www.techdirt.com/articles/201406...tter.shtml
In general I'm also very skeptical of the utility of the Turing test. Taken down to basics, what does it actually demonstrate? In its generic form, a conversation with the goal of appearing human, all the Turing test really measures is the sophistication of a chatbot. It doesn't demonstrate general intelligence, and if anything, computer science (and automation in general) has shown over the past several decades that acting like a human isn't required to be better than a human at a task. Even if you could build software and hardware that equaled humans at every task, that wouldn't prove consciousness or human-like intellect. All it would demonstrate is human-like capability, which I would argue is far more desirable, as it avoids a host of ethical concerns.
Quote: Okay, almost everything about the story is bogus. Let's dig in:
It's not a "supercomputer," it's a chatbot. It's a script made to mimic human conversation. There is no intelligence, artificial or not, involved. It's just a chatbot.
Plenty of other chatbots have similarly claimed to have "passed" the Turing test in the past (often with higher ratings). Here's a story from three years ago about another bot, Cleverbot, "passing" the Turing Test by convincing 59% of judges it was human (much higher than the 33% Eugene Goostman claims).
It "beat" the Turing test here by "gaming" the rules -- by telling people the computer was a 13-year-old boy from Ukraine, so judges would mentally explain away odd responses.
The "rules" of the Turing test always seem to change. Hell, Turing's original test was quite different anyway.
As Chris Dixon points out, you don't get to run a single test with judges that you picked and declare you accomplished something. That's just not how it's done. If someone claimed to have created nuclear fusion or cured cancer, you'd wait for some peer review and repeat tests under other circumstances before buying it, right?
The whole concept of the Turing Test itself is kind of a joke. While it's fun to think about, creating a chatbot that can fool humans is not really the same thing as creating artificial intelligence. Many in the AI world look on the Turing Test as a needless distraction.
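The quote's point that a chatbot is just a script, with no intelligence involved, is easy to see in code. Here's a minimal ELIZA-style sketch (the patterns and canned responses are invented for illustration): it produces plausible-sounding conversational replies using nothing but regex matching and templates.

```python
import re

# A toy ELIZA-style chatbot: a fixed list of (pattern, response) rules.
# There is no understanding here, just pattern matching and canned text,
# which is the point: fluent-seeming replies require no intelligence.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\byou\b", re.I), "We were talking about you, not me."),
]
FALLBACK = "Interesting. Please tell me more."

def reply(message: str) -> str:
    """Return the first matching rule's response, or a generic fallback."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(reply("I am worried about the Turing test"))
# -> Why do you say you are worried about the Turing test?
```

Scale the rule list up, add some spelling quirks and deflections (and a cover story like "13-year-old non-native speaker" to excuse the failures), and you have roughly the class of program under discussion. Nothing about fooling a judge for five minutes requires general intelligence.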