03-30-2014, 04:57 AM
(03-30-2014, 02:52 AM)Drashner1 Wrote: Even if consciousness is a quantum mechanical thing (and not arguing that it is), there's nothing that says we can't develop quantum computers that could replicate the process in that case or that such computers would take substantially longer to be developed than AI based on more 'conventional' computing models. Quantum computing is admittedly in its infancy at this point, but so was regular computing less than a century ago - even on merely historical time scales that's very little time indeed.
Interestingly enough, Hameroff makes the same point later in the interview and even speculates about other materials than microtubules as the substrate. He's not arguing that uploading or AI are impossible, but that currently the AI community is haring off in the wrong direction.
Speaking of haring off in odd directions, Hameroff goes into some speculative flights himself later in the interview. Speculations that are, for me, a pretty far reach even if Hameroff and Penrose are somehow spot-on in their initial hypothesis.
For what it's worth, I think Hameroff and Penrose's central idea is in the realm of 'not impossible', but also, and perhaps more significantly, it is well into the realm of 'not very likely given what we know'. One part I find particularly hard to go along with is the idea that consciousness in particular depends on some kind of quantum effect mediated through microtubules. If microtubules do have a direct role in the cognitive aspects of a neuron's activity (as opposed to an indirect one, in that they are part of what supports a neuron's very complex behaviour), then they'd also have a similarly important role in all of the brain's unconscious activity. Anyway, as Hameroff mentions, the basics of this will eventually be sorted out one way or another on available evidence, probably well within the coming decade. I'll be interested to see what turns up, and very, very surprised if Hameroff's picture of how things work turns out to be right.
The broader point, that the complexity of the human mind/brain is not adequately captured even by a very thorough description of the human connectome (the map of all the connections between all the neurons), is pretty well unassailable. That's something one can predict quite well from a much more conventional take on the complex decision-making behaviour of individual neurons, and a look at the structures that support that complexity. Early estimates of how 'big' a computer would have to be to support a human upload, especially those thrown out by Kurzweil and company, are very, very far off the mark. That's not a doom-and-gloom-it's-all-impossible statement, just a matter-of-fact observation that there's a lot of room for growth in our capabilities and understanding on the way to human-equivalent AI, human uploads and so on, assuming (as I do) that it's not impossible.
Stephen