I don't recall this board ever kicking this around for discussion, but even if it's been picked over before, with AI advancing at increasing speed it can't hurt to be up to speed on what we're in for, or NOT in for. For reference, here are Isaac Asimov's Laws of Robotics:
(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
(2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
(4) "promulgated" later: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Needless to say, any search on this subject would turn up zillions of hits, especially commentary-type responses and reactions (NONE of which, as far as I can tell, were from a robot, though perhaps they are already smart enough to know they should stay under the radar about being 'discovered').
By the way, somehow or other there doesn't seem to be a fifth law saying a robot can't disavow laws 1-4.
Sadly, those laws have already been rendered moot by various militaries around the world. How many targeted strikes are carried out without some form of AI?
AI in organisations with precious little RI (real intelligence) is worrying.
Tell it to The Terminator! Or RoboCop - the bad one.