Karl's PC Help Forums

In memory of Karl Davis, founder of this board, who made his final journey 12th June 2007

Topic Review
marymary100

posted on 22-7-2017 at 10:35
It reads as if sentence two states the objective, since it follows directly on from sentence one.
LSemmens

posted on 22-7-2017 at 02:05
Not at all. Killing is not the primary objective. That is collateral damage.


As I said in my third sentence, resources that are tied up looking after the hurt cannot be deployed to defend the territory. That agrees with my second sentence - injure, maim and hurt.
marymary100

posted on 21-7-2017 at 09:32
Your first two sentences contradict one another, Leigh.
LSemmens

posted on 21-7-2017 at 01:06
Actually, in war, the objective is not usually to kill anyone. Injure, maim, otherwise hurt... YES! It ties up more resources to look after the casualties than it does to bury the dead! It was one of the first things they taught us in boot camp when I did my stint in the Reserves.
JackInCT

posted on 20-7-2017 at 16:08
Quote:
Originally posted by marymary100
I'm with Asimov.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.....


The USAF is on a fast track to replace all of its fighter-bomber-type aircraft with unmanned aircraft. Such aircraft differ from cruise missiles in that cruise missiles are one-way weapons, i.e., they go from launch point to target.

These unmanned aircraft are being designed and tested to replicate everything that a manned aircraft can do, including such things as target recognition; they are simply much more advanced versions of the current stable of drone-type aircraft, with much greater 'killing' power: the epitome of what AI is capable of.

Whether anyone chooses to call such aircraft robots, or some other label, is pure semantics.

And whether anyone on the ground would be needed to "monitor" what such an aircraft is doing remains to be seen, especially for the first generation of this type of weapon. Surely the goal is to make second-generation aircraft totally independent of ground control, from engine start through takeoff, the sortie, and the landing.

There are huge amounts of money being poured into the development of this type of aircraft. And as usual, the goal is political, i.e., the goal is not to keep as many military personnel as possible out of harm's way during combat, but to go to war and keep the casualty count as small as possible, because the USA military/civilian body count carries the greatest political risk for an incumbent politician.

YES, I'm saying that the majority of the American public couldn't care less who this country is at war with, and why, as long as their sons and daughters are not dying in such a war. So when this tech is perfected, there will be more wars, not fewer.
marymary100

posted on 20-7-2017 at 11:08
I'm with Asimov


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
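
Read as a decision procedure, the three laws form a strict priority ordering: a lower law only counts once the higher ones are already satisfied. A minimal sketch of that ordering in Python follows; the Outcome flags and the choose helper are hypothetical illustrations for this thread, not anything from Asimov or a real robotics system.

Code:
from dataclasses import dataclass

@dataclass
class Outcome:
    # Predicted consequences of one candidate action (all flags hypothetical).
    injures_human: bool = False
    allows_harm_by_inaction: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

def violations(o):
    # Sort key encoding the hierarchy: tuples compare lexicographically,
    # so a First Law violation outweighs any combination of lower-law ones.
    first_law = o.injures_human or o.allows_harm_by_inaction
    return (first_law, o.disobeys_order, o.endangers_self)

def choose(candidates):
    # Pick the candidate action least objectionable under the Three Laws.
    return min(candidates, key=violations)

# Obeying an order that would injure a human loses to refusing the order,
# because the Second Law (obedience) yields to the First:
obey = Outcome(injures_human=True)
refuse = Outcome(disobeys_order=True)
assert choose([obey, refuse]) is refuse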
LSemmens

posted on 20-7-2017 at 03:12
That laser gun would have to be only part of the picture. I wonder how it would cope with cloud or other light-dispersing media. What if an aircraft had a mirror finish on it?

Autonomous weapons are all well and good, and I'd be happy to see wars fought only by machine against machine; however, that is never going to happen, human nature being what it is. Even today's guided missiles and targeting systems, whilst very accurate, are still indiscriminate in collateral damage. Yes, they can shoot a pimple off a fly's bum, but what happens when said fly then falls out of the sky? Or when, as the cowards who fight for some radical sects like to do, they hide behind innocent civilians?

War is a dirty business, and we are far more selective in our targets these days, so collateral damage is minimised compared to, say, World War II. If we could return to the days of horseback and bows and arrows, we might be able to limit civilian casualties, but that ain't going to happen.
JackInCT

posted on 19-7-2017 at 13:10
I've decided to break this out into a combined topic. This topic is, in part, a kind of addendum to yesterday's topic about the security robot in Washington winding up in a water pool. I'm attempting to combine two separate aspects of AI devices: those being used for local security-type patrol work, and those used for military purposes.

In one of the replies to the topic of the wet security robot, LSemmens posed the question: "...it was supposed to be a security robot. How's that work? Does it zap you with a bolt of lightning if you do something wrong?" Let me reply to his question:

Part 1 of this topic is this question: as everyone on this board knows, gun ownership in the USA is legal per the 2nd Amendment of the USA Constitution. Within the framework of the current laws, should an AI device, regardless of whether it is coded for security duties, be allowed to carry a legal handgun (note: there are legal restrictions as to what kinds of guns can be legally owned), i.e., should it be allowed to employ deadly force?

Part 2: There was a second article on CNN yesterday that dealt with "robots", known as "autonomous weapons systems", automatically engaging an "enemy" without human authorization for a kill strike.

USA General Warns Of Out-Of-Control Killer Robots

URL for this article:
http://www.cnn.com/2017/07/18/politics/paul-selva-gary-peters-autonomous-weapons-killer-robots/index.html

Me here: I've selectively edited this article:

America's second-highest ranking military officer, Gen. Paul Selva, advocated Tuesday for "keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don't know how to control."

Selva was responding to a question from Sen. Gary Peters, a Michigan Democrat, about his views on a Department of Defense directive that requires a human operator to be kept in the decision-making process when it comes to the taking of human life by autonomous weapons systems.

Peters said the restriction was "due to expire later this year."

..."ban on offensive autonomous weapons beyond meaningful human control."

But Peters warned that America's adversaries may be less hesitant to adopt such lethal technology.

"Our adversaries often do not to consider the same moral and ethical issues that we consider each and every day," the senator told Selva.

Me here: 'considering moral and ethical issues' is one thing; living them is quite another matter entirely. As for "America's adversaries may be less hesitant to adopt such lethal technology": IMO, USA politico types excel at 'holier than thou', especially when it comes to waging war on folks with a different skin color than white. Many, many USA citizens are completely indifferent (to put it mildly) to who their country's military forces are killing. And there is the related question of whether all this warfare actually breeds new generations of adversaries who see USA forces as simply engaging in ethnic/religious cleansing.
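
For concreteness, the directive's requirement of "a human operator... in the decision-making process" amounts to a hard authorization gate that autonomous classification alone cannot open. Here is a minimal sketch in Python; every name in it is a hypothetical illustration for this thread, not taken from any real weapons system.

Code:
from dataclasses import dataclass

@dataclass
class Track:
    # Hypothetical sensor track produced by the weapon's own recognition.
    track_id: str
    classified_hostile: bool

def engage(track, human_authorized):
    # Fire only when the machine classification AND a human decision agree.
    # Dropping the human_authorized operand is precisely the change the
    # directive forbids: classification alone would then suffice to kill.
    return track.classified_hostile and human_authorized

# Autonomous target recognition without human sign-off never fires:
assert not engage(Track("T-01", classified_hostile=True), human_authorized=False)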