01-29-2015, 08:04 PM
(01-29-2015, 10:18 AM)stevebowers Wrote: but it seems likely to me that the first human-equivalent AIs will have plenty of data to keep them occupied.
In that case we simply shouldn't give them that much data at the beginning. The less data we give them, the more predictable their behaviour becomes. After a while we can give them more and more data to deal with and observe how they behave. It would be bad if an AI experienced something like burn-out syndrome or equally erratic behaviour, so it's better to start with small problems and go from there.
(01-29-2015, 10:18 AM)stevebowers Wrote: Despite stating in the EG that the first human-equivalent AIs were built in around 2042 c.e., I doubt very much that these entities were all that close to human in psychology; they would probably have much better memories than humans but have the social skills of a toddler, or maybe the social skills of a severely autistic human, or something far stranger.
Why should humans create AIs with such a strange psychology? In my opinion we should deliberately give them the capabilities (memory, cognitive abilities) and social skills of a baseline human baby first. In order to understand these entities, we have to make them resemble a human as closely as possible, and since that's difficult, we have to build in some physical constraints:
- Don't give them perfect memory; give them imprecise and sometimes faulty memory, as in a human.
- Don't give them so much computing power that their subjective flow of time becomes faster than ours, because that would make it very difficult to communicate with them and predict their behaviour. Instead, artificially adjust their subjective flow of time to match the real world's flow of time.
- Give the AI an avatar and "raise" the AI through that avatar like a "son" in a "good" environment with nice and mentally balanced people, as in the movie Twins, for example:
Quote:Julius was taken to a South Pacific island and raised by Professor Werner, growing into a handsome, muscled Adonis, receiving tutelage in art and intellectual pursuits.
Another possibility is to raise the AI in a Buddhist temple, for example. However, while being raised and taught the value of life there, e should have regular contact with various top scientists from all over the world. Somehow e also has to understand the evil side of humanity, but I'm sure that the monks and scientists will be able to teach em that as well.
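As a toy illustration only (every class, name, and parameter here is hypothetical and invented for this post, not taken from any real AI framework), the two "physical constraints" listed above, human-like faulty recall and a subjective clock throttled to real time, could be sketched as:

```python
import random

class ConstrainedMind:
    """Toy sketch (hypothetical): a memory store with deliberately
    imperfect recall and a subjective clock capped at real-world time."""

    def __init__(self, recall_probability=0.9, time_scale=1.0, seed=42):
        self.memories = []                            # stored experiences
        self.recall_probability = recall_probability  # chance a memory surfaces at all
        self.time_scale = time_scale                  # 1.0 = same subjective speed as humans
        self._rng = random.Random(seed)               # seeded for reproducibility

    def remember(self, event):
        self.memories.append(event)

    def recall(self):
        """Imprecise recall: some stored events are simply forgotten."""
        return [m for m in self.memories
                if self._rng.random() < self.recall_probability]

    def subjective_seconds(self, real_seconds):
        """Throttle: subjective time never elapses faster than real time."""
        return real_seconds * min(self.time_scale, 1.0)

mind = ConstrainedMind(recall_probability=0.5)
for i in range(100):
    mind.remember(f"event-{i}")
recalled = mind.recall()
print(len(recalled))                # roughly half of the 100 events survive recall
print(mind.subjective_seconds(60))  # 60.0: subjective time is capped at real time
```

The point of the sketch is only that both constraints are cheap to impose from outside the mind itself, which is exactly what makes them useful while the caretakers are still learning to predict the AI's behaviour.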
(01-29-2015, 10:18 AM)stevebowers Wrote: The likely fact that these entities will not closely resemble humans in their psychology is another reason why they should be constrained very closely in their range of actions.
In that case we should make them resemble us, so that we can still understand them. After a while we can leave it up to the AI to tinker with the boundaries of e's own mind: increase the subjective flow of time just a little and see what happens, make the memory just a little better, and so on...
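That gradual loosening can be caricatured in a few lines (purely illustrative; the function and the behavioural check are hypothetical stand-ins for whatever tests the human caretakers would actually apply): widen one capability at a time by a small step, and stop at the last setting where behaviour stayed acceptable.

```python
def widen_gradually(current, target, step, behaves_ok):
    """Raise a capability value toward `target` in small steps,
    keeping the last value for which `behaves_ok` still held.
    `behaves_ok` is a hypothetical stand-in for the behavioural
    test applied after each change."""
    while current < target:
        trial = min(current + step, target)
        if not behaves_ok(trial):
            return current          # stop at the last safe setting
        current = trial
    return current

# Example: pretend behaviour stays predictable up to a capability level of 8.
safe = widen_gradually(current=5, target=12, step=1, behaves_ok=lambda v: v <= 8)
print(safe)  # 8
```

The design choice this illustrates is the one argued for in the post: capabilities are only ever expanded incrementally and under observation, never jumped to their target value in one step.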
If one raises the AI like that, certain philosophical concepts, like the concept of good and evil (search for the following text on the website: "The sum of its pained existence came down to a batch of text"), will be much easier to explain. If the AI understands the value of community from the beginning, as ai_vin pointed out, e will naturally develop a sense of good and evil without any explanations. For example, e would feel that if someone tried to take away e's community (e.g. by killing e's "father" or best friends), that act would be evil. Furthermore, e would see that each member of e's community has a community of his or her own that (s)he wants to protect. Therefore it would also be evil if someone deliberately inflicted harm upon those communities or their members, or upon the communities of those members in turn, and so on.

So instead of explaining good and evil to the AI, one would simply have to show em how good-natured humans live among each other. Doing that would also teach em how some or most members of e's community would react to certain actions performed by em. E would learn empathy, and e would learn it naturally, in order to live better among e's community.
Why is empathy important? Imagine a street thug beats up a member of e's community. The AI would feel that the street thug is evil, simply because (s)he has taken away someone valuable to the AI. Without empathy, however, the AI might conclude that people who behave like the street thug are evil as well (so far nothing wrong with that) and then conclude that all these street thugs have to be eliminated, because each one of them is a potential danger to e's community. That is what an AI without empathy might conclude: such an AI would only care about e's own feelings. But with empathy the AI would be forced to take the feelings of others into account, for example:
How would some or all members of my community feel about it if I killed all the street thugs in the world? And so on. E's fear that members of e's community might not like e's actions would naturally restrain those actions. In any case, the feeling of empathy will develop naturally as the AI grows from a "child" into an "adult", and with it an understanding of good and evil, and of which actions are good responses to evil, will develop naturally as well.
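The restraint mechanism described above can be reduced to a toy decision rule (purely illustrative; every name and number is hypothetical): an action is vetoed when the predicted reactions of the community outweigh the AI's own evaluation, and setting the empathy weight to zero reproduces the "eliminate all street thugs" failure mode.

```python
def permitted(own_score, predicted_reactions, empathy_weight=1.0):
    """Toy decision rule (hypothetical): combine the AI's own evaluation
    of an action with the average predicted reaction of e's community.
    With empathy_weight=0 the community's feelings are ignored."""
    if not predicted_reactions:
        return own_score > 0
    community = sum(predicted_reactions) / len(predicted_reactions)
    return own_score + empathy_weight * community > 0

# "Kill all the street thugs": the AI scores it positive (+5, removes a
# perceived danger), but community members react with strong disapproval.
reactions = [-10, -8, -9]
print(permitted(5, reactions))                    # False: empathy vetoes the act
print(permitted(5, reactions, empathy_weight=0))  # True: no empathy, no restraint
```

Of course a real raised mind would not compute a weighted average; the sketch only makes the post's argument concrete: it is the predicted feelings of others, not an explicit list of rules, that does the restraining.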
"Hydrogen is a light, odorless gas, which, given enough time, turns into people." -- Edward Robert Harrison