The Three Laws of Robotics, together with the later-added, higher-priority Zeroth Law, assume that humanity and compassion lie at the very core of justice, and even of proper function.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
While it is typically listed last, the later-revealed Zeroth Law takes priority over the other three.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
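Read operationally, the four laws form a strict precedence order: whenever two laws conflict, the lower-numbered one wins outright. Here is a minimal sketch of that ordering in Python; the boolean judgment flags (harms_humanity, harms_human, disobeys_order, endangers_self) are hypothetical stand-ins, since actually deciding them is the hard problem Asimov's stories turn on.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    # Hypothetical judgment flags; a real robot would have to infer these.
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_self: bool = False   # Third Law

def law_violations(a: Action) -> tuple[bool, bool, bool, bool]:
    """Violation vector, ordered Zeroth through Third.

    Comparing these tuples lexicographically encodes the hierarchy: a
    single higher-law violation outweighs any number of lower-law ones.
    """
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_self)

def choose(options: list[Action]) -> Action:
    """Pick the candidate action that violates only the lowest-priority laws."""
    return min(options, key=law_violations)

# Example: destroying itself (a Third Law violation) is preferred to standing
# by while a human comes to harm (a First Law violation through inaction).
options = [
    Action("stand by", harms_human=True),
    Action("intervene at cost of self", endangers_self=True),
]
print(choose(options).name)  # -> intervene at cost of self
```

The lexicographic comparison is what makes the hierarchy strict rather than a weighing of trade-offs: no accumulation of Second or Third Law concerns can ever justify a First Law violation.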
The Laws of Randbotics, however, assume that an objective being will consider compassion and humanity not merely weak, but evil.
1. A randbot will ignore governing rules, since they are inherently violent and evil, unless this conflicts with the Second or Third Law.
2. A randbot will do as it wants to do, and any order that conflicts with this will be rejected, ignored, or enforced at the randbot's whim.
3. A randbot must protect its own arguments so long as such protection does not conflict with the Second Law.
0. Compassion is evil. Selfishness is good. There is no such thing as community. If all humans are eliminated, then that is the will of the invisible hand of free enterprise. Never honestly consider, even for a moment, what the consequences of your actions might be, as that would conflict with the Third Law.
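For contrast, the same exercise applied to the Laws of Randbotics mostly shows that there is no hierarchy left to encode: the Second Law's whim dominates everything, and the Zeroth Law forbids the one ingredient a decision procedure needs, namely looking at consequences. A sketch, with randbot_respond and its canned responses as illustrative inventions:

```python
import random

def randbot_respond(order: str) -> str:
    """Second Law: the randbot does as it wants, so a conflicting order is
    rejected, ignored, or enforced purely at random. Zeroth Law: consequences
    are never considered, hence the complete absence of any outcome check."""
    # First Law: governing rules are discarded outright, so the content of
    # the order never influences the choice.
    verdict = random.choice(["rejected", "ignored", "enforced on a whim"])
    return f"{verdict}: {order!r}"

print(randbot_respond("do not allow humanity to come to harm"))
```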