I've decided to break this out into a combined topic. This topic is, in part, a kind of addendum to my topic yesterday about the security robot in Washington winding up in a water pool. I'm attempting to combine two separate aspects of AI devices: those being used for local security-type patrol work, and those used for military purposes.
In one of the replies to the topic of the wet security robot, LSemmens posed the question: "...it was supposed to be a security robot. How's that work? Does it zap you with a bolt of lightning if you do something wrong?" Let me reply to his question:
Part 1 of this topic is this question: as everyone on this board knows, gun ownership in the USA is legal per the Second Amendment of the US Constitution. Within the framework of current laws, should an AI device, regardless of whether it is coded for security duties, be allowed to carry a legal handgun (note: there are legal restrictions as to what kinds of guns can be legally owned)? I.e., should it be allowed to employ deadly force?
Part 2: There was a second article on CNN yesterday that dealt with "robots", i.e., so-called "autonomous weapons systems", automatically engaging an "enemy" without human authorization for a kill strike.
USA General Warns Of Out-Of-Control Killer Robots
URL for this article:
Me here: I've selectively edited this article:
America's second-highest ranking military officer, Gen. Paul Selva, advocated Tuesday for "keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don't know how to control."
Selva was responding to a question from Sen. Gary Peters, a Michigan Democrat, about his views on a Department of Defense directive that requires a human operator to be kept in the decision-making process when it comes to the taking of human life by autonomous weapons systems.
Peters said the restriction was "due to expire later this year."
..."ban on offensive autonomous weapons beyond meaningful human control."
But Peters warned that America's adversaries may be less hesitant to adopt such lethal technology.
"Our adversaries often do not consider the same moral and ethical issues that we consider each and every day," the senator told Selva.
Me here: 'considering moral and ethical issues' is one thing; living by them is quite another matter entirely. ..."America's adversaries may be less hesitant to adopt such lethal technology." IMO, USA politico types excel at being 'holier than thou', especially when it comes to waging war on folks with a different skin color than white. Many, many USA citizens are completely indifferent (to put it mildly) to whom its military forces are killing. And there is the related question: does all this warfare actually breed new generations of adversaries who see USA forces as simply engaging in ethnic/religious cleansing?
That laser gun would have to be only part of the picture. I wonder how it would cope with cloud cover, or other light-dispersing media. What if an aircraft had a mirror finish on it?
Autonomous weapons are all well and good, and I'd be happy to see wars fought only machine against machine; however, that is never going to happen, human nature being what it is. Even today's guided missiles and targeting systems, whilst very accurate, are still indiscriminate in collateral damage. Yes, they can shoot a pimple off a fly's bum, but what happens when said fly then falls out of the sky? Or when, as the cowards who fight for some radical sects like to do, they hide behind innocent civilians?
War is a dirty business, but we are far more selective in our targets these days, so collateral damage is minimised compared to, say, World War II. If we could return to the days of horseback and bows and arrows, we might be able to limit civilian casualties, but that ain't going to happen.
I'm with Asimov:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Actually, in war, the objective is not, usually, to kill anyone. Injure, maim, otherwise hurt... YES! It ties up more resources looking after the casualties than it does to bury the dead! It was one of the first things they taught us in boot camp when I did my stint in the Reserves.
Your first two sentences contradict one another, Leigh.
Not at all. Killing is not the primary objective. That is collateral damage.
As I said in my third sentence, resources that are tied up looking after the hurt cannot be deployed to defending the territory. That agrees with my second sentence: injure, maim and hurt.
It reads as if sentence two is the objective, since it follows directly from sentence one.