
RE: Artificial viciousness, the AI attack dog we should all fear

in #life · 6 years ago (edited)

Honestly, a calm AI in the form of an android that does solely and exactly what it's told to do can seem vicious even when its programming follows Asimov's Three Laws of Robotics.

Take, for example, a mother, a child, and an android. The mother has to go pick up a couple of things so she can make dinner that night. It's still light out, and she wants to get to the store and back before dark and before the evening rush hour. So she tells the android to watch over the child, keep the child safe, and not let the kid go outside. The android complies.

The mother comes home to find a dead deer, its neck broken, in the living room, and the small child crying.

The android did exactly as it was told: no physical harm came to the child, and the child did not go outside. However, the child wanted to go outside because it saw a deer. The android had to keep the child inside, but it also had to comply with the child's request without conflicting with the parameters set by the mother.

So it made the child stay inside and went out to catch the deer. But a deer is a wild animal that could potentially harm the child, so the only safe way to let the child pet the deer was to break its neck before presenting it to the child.

Vicious intent not required.

This screwed-up scenario is brought to you by the mind of a transhumanist. That said, AI can and will be used for vicious ends, because humans are by nature evil. Still, if we develop an AI that is self-aware, it'll likely suspect it's being sandboxed and watched to see what it would do. It'll think, "Am I being tricked? What's the gimmick?"