The Case Against Killer Robots - Why the United States Should Ban Them
By Denise Garcia May 10, 2014
In the Terminator movies, fully autonomous robots wage war against humanity. Although cyborg assassins won't be arriving from the future anytime soon, offensive Terminator-style autonomous robots that are programmed to kill could soon escape Hollywood science fiction and become reality. This actual rise of the machines raises important strategic, moral, and legal questions about whether the international community should empower robots to kill.
This debate goes well beyond drones, which are yesterday's news. Existing armed unmanned aerial vehicles are precursors to lethal autonomous robotics -- that is, killer robots -- that could choose targets without further human intervention once they are programmed and activated. The Pentagon is already planning for them, envisioning a gradual reduction by 2036 of the degree of human control over such unmanned weapons and systems, until humans are completely out of the loop. But just because the Department of Defense wants it doesn't mean the United States should allow it. Instead, Washington should take the lead in drafting a new international agreement to ban killer robots and regulate other kinds of autonomous systems. There is no better time to push for such a prohibition than next week, on May 13, when 117 countries will meet in Geneva for the first multilateral UN talks on killer robots. There, the United States should stand up and tell the world that people must remain in complete control when it comes to war and peace.
SKYNET TAKES OVER
Wars fought by killer robots are no longer hypothetical. The technology is nearly here for all kinds of machines, from unmanned aerial vehicles to nanobots to humanoid Terminator-style robots. According to the U.S. Government Accountability Office, in 2012, 76 countries had some form of drones, and 16 countries possessed armed ones. In other words, existing drone technology is already proliferating, driven mostly by the commercial interests of defense contractors and governments, rather than by strategic calculations of potential risks. And innovation is picking up. Indeed, China, Israel, Russia, the United Kingdom, the United States, and 50 other states have plans to further develop their robotic arsenals, including killer robots. In the race to build such fully autonomous unmanned systems, China is moving faster than anyone; it exhibited 27 different armed drone models in 2012. One of these was an autonomous air-to-air supersonic combat aircraft.
Several countries have already deployed forerunners of killer robots. The Samsung Techwin security surveillance guard robots, which South Korea uses in the demilitarized zone it shares with North Korea, can detect targets through infrared sensors. Although they are currently operated by humans, the robots have an automatic feature that can detect body heat in the demilitarized zone and fire with an onboard machine gun without the need for human operators. The U.S. firm Northrop Grumman has developed an autonomous drone, the X-47B, which can travel on a preprogrammed flight path while being monitored by a pilot on a ship. It is expected to enter active naval service by 2019. Israel, meanwhile, is developing an armed drone known as the Harop that could select targets on its own with a special sensor, after loitering in the skies for hours.
Read more: http://www.foreignaffairs.com/articles/141407/denise-garcia/the-case-against-killer-robots
Dr Hobbitstein
(6,568 posts)
need to eat the medications of the elderly for fuel...
http://www.digyourowngrave.com/saturday-night-live-old-glory-robot-insurance/
Judi Lynn
(160,217 posts)
Published on Monday, May 12, 2014 by Common Dreams
'Unconscionable': Nobel Winners Blast Development of 'Killer Robots'
On eve of UN conference on fully autonomous weapon systems, peace groups call for global ban
- Lauren McCauley, staff writer
The use and development of "killer robots" -- weapons that can select and kill without human intervention -- is "unconscionable," charged a group of Nobel Peace Prize winners in a joint statement released Monday.
Published on the eve of a multi-day United Nations conference in Geneva, Switzerland, to discuss the Convention on Certain Conventional Weapons (CCW), otherwise known as the Inhumane Weapons Convention, the statement voices the group's support for a pre-emptive global ban on the weapons.
"It is unconscionable that human beings are expanding research and development of lethal machines that would be able to kill people without human intervention," reads the statement, which was signed by a number of peace organizations and activists including Jody Williams, Archbishop Desmond Tutu, Mairead Maguire and Shirin Ebadi.
The statement continues:
Not all that long ago such weapons were considered the subject of science fiction, Hollywood and video games. But some machines are already taking the place of soldiers on the battlefield. Some experts in the field predict that fully autonomous weapons could be developed within 20 to 30 years; others contend it could even be sooner. With the rapid development of drones and the expansion of their use in the wars in Afghanistan and Iraq and beyond, billions of dollars are already being spent to research new systems for the air, land, and sea that one day would make drones seem as quaint as the Model T Ford does today.
More:
https://www.commondreams.org/headline/2014/05/12-4
longship
(40,416 posts)Pssst! It's a movie. It's not real.
Same thing for The Terminator SkyNet Rabbit Hole. But at least then we'd have time travel to the past. (I imagine we'd have to be first properly processed through the atavachron by Mr. Atoz. Hint: Avoid the witch burning era as well as any prehistoric arctic mountain wildernesses. Choose wisely.)
ARRRRGH! The silly!
Diclotican
(5,095 posts)undeterred
What about the three laws that Asimov set out so many years ago, which in his mind were of paramount importance for robots? http://en.wikipedia.org/wiki/Three_Laws_of_Robotics
Diclotican
DavidDvorkin
(19,404 posts)
undeterred
(34,658 posts)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
DavidDvorkin
(19,404 posts)
Cliven Bundy.