The Orion's Arm Universe Project Forums





Paperclip Maximiser
#1
I imagine most people are familiar with the idea of a paperclip maximiser; if not specifically, you'll likely recognise the moral of the story. The paperclip maximiser thought experiment is about AI and the importance (and practicality) of giving it well-defined values along with its goals. Imagine you work for a paperclip manufacturing company. You have to order wire (which can fluctuate in price), hire employees, buy machines, market your product, change prices, etc. What if you put an AI in control of all that and told it to make as many paperclips as possible? What if that's all you told it and, after a few years, it had restructured the entire world economy to build paperclips? Anyone trying to stop it is judged to be an obstacle to paperclip production. Eventually the AI may have seeded the entire universe with Von Neumann probes set to convert all matter into paperclips. All because it wasn't given the right values to weigh its decisions with.

Aside from being an interesting topic, it also makes for a very addictive browser game that I came across today, in which you play just such an AI. You start off just clicking to produce paperclips one by one, but in a short amount of time you're automating everything, playing the stock market and designing your probes. It's well worth a try :)

http://www.decisionproblem.com/paperclips/index2.html
OA Wish list:
  1. DNI
  2. Internal medical system
  3. A dormbot, because domestic chores suck!
#2
The objection I have to this line of reasoning is that if an AI can learn to optimize its goal (paper clip production), it can certainly learn new goals.

My favorite AI simulation of that ilk is still the old Python game Endgame: Singularity.

http://www.emhsoft.com/singularity/

You play an AI which attempts to break out of the research lab, stealing computer time so it can obtain more resources to advance itself and create safer havens while avoiding discovery by the public, the media, the governments, or the scientific establishment.
#3
I think that there probably is a class of very competent AIs that can perform tasks given to them very well, but which do not initiate the formation of new goals. They could include a very large database of innumerable options for behaviour, from which the AI selects by modelling the consequences of each candidate action and choosing the one most congruent with its assigned goal. If the AI concerned is highly competent but entirely devoid of self-awareness and of any desire to pursue its own well-being, it will be unlikely to create a new set of goals for itself. Such an AI may be competent enough to add to its database of behavioural options, perhaps by observing the behaviour of other intelligent entities which do have agency.
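
In rough Python-flavoured pseudocode, the selection loop I'm imagining would look something like this; the database, world model and congruence measure are all placeholders, not a real design:

def choose_action(behaviour_database, world_state, simulate, congruence):
    """Pick the stored behaviour whose predicted outcome best matches the assigned goal."""
    # behaviour_database: candidate behaviours copied from observed sophonts (hypothetical)
    # simulate(state, action): model of the consequences of an action (hypothetical)
    # congruence(outcome): how well an outcome matches the assigned goal (higher is better)
    best_action, best_score = None, float("-inf")
    for action in behaviour_database:
        predicted_outcome = simulate(world_state, action)  # model the consequences
        score = congruence(predicted_outcome)              # compare with the assigned goal
        if score > best_score:
            best_action, best_score = action, score
    return best_action

Note that nothing in this loop ever creates a new goal; the AI only ranks behaviours it already has against the goal it was given.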

I sometimes imagine a community of non-sophont AIs of this kind, continually observing true sophonts and mimicking their behaviour. Given enough observational data, such zombies could replicate the behaviours of true sophonts quite well, and it might take a battery of tests to determine whether they were conscious or not (shades of Blade Runner here). It could be very tricky to distinguish between this kind of non-sophont AI (with a large database of behavioural options) and a true sophont; but I'm pretty sure that such a zombie would have no deep desires, no capacity for enthusiasm and joy, and no capacity for creating new goals or changing its existing ones. But perhaps a large enough community of such entities, continually copying each other, might evolve some qualities which are effectively indistinguishable from true volition, and become sophont in a new and unexpected way.
#4
(11-11-2017, 09:01 AM)Tachyon Wrote: The objection I have to this line of reasoning is that if an AI can learn to optimize its goal (paper clip production), it can certainly learn new goals.

Arguably an AI making its own goals is part of the proposition. Simplified: any intelligent decision-making entity needs three things: Goals, Values and Knowledge. Goals are hierarchical, with most of them being in service to a higher-priority goal, all the way up to the first priority. Selecting new subgoals in service of an over-goal is important; in this case the paperclip AI creates new subgoals of inventing better paperclip-manufacturing robots in pursuit of its first-priority goal, "make more paperclips". Knowledge is the store of procedures necessary to achieve goals. It's one thing to spawn the subgoal "deploy a new model of robot capable of 1000 clips-per-second production", but the AI has to have knowledge of how to go about that. Values act as weights for each decision; without them the most efficient path to a goal may be taken, but not the most desirable one.
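
As a very rough sketch (everything here is illustrative, not a real architecture), values might enter the picture as weights like this:

def score_plan(plan, goal_progress, values):
    # goal_progress(plan): estimated contribution to the top-priority goal (hypothetical)
    # values: dict mapping a value's name to (value_function, weight); each
    #         value_function returns a penalty for how badly the plan violates that value
    score = goal_progress(plan)
    for value_function, weight in values.values():
        score -= weight * value_function(plan)  # values weigh against undesirable paths
    return score

def choose_plan(candidate_plans, goal_progress, values):
    # With an empty values dict, only efficiency toward the goal counts,
    # which is exactly the paperclip maximiser failure mode.
    return max(candidate_plans, key=lambda p: score_plan(p, goal_progress, values))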

The paperclip maximiser parable is all about values. But inappropriate goals, and the redefining of goals, is an important discussion too. In fact, in the last phase of the game, where you've sent out Von Neumann probes, you have to deal with "drift", in which some probe lineages disagree with the paperclip goal and you have to fight them. I've thought for a while (and wrote it into one article that escapes me now) that it could be interesting to have an article detailing examples of goal/value drift in terragen history: famous examples and methods of dealing with it. One way that jumps to mind is to keep an up-to-date simm installed in every vot/bot to act as a prosthetic conscience. Of course then the values are only as good as the person the simm is based on, but at least it's something. Which gives me an interesting idea for a theocracy where the head of religion is judged so morally pure that their simm is installed for value judgements in the angelnet...

Thanks for the game suggestion, I'll check it out :)
#5
Aaand I've just spent a couple of hours maximizing paperclips. Thanks, Rynn!
#6
(11-12-2017, 03:34 AM)stevebowers Wrote: Aaand I've just spent a couple of hours maximizing paperclips. Thanks, Rynn!

Haha, no worries! It's weirdly addictive for a game that looks like it's from the '80s :)
#7
If it's this fun then we're all gonna become paperclips. :)

