Google Bans Artificial Intelligence for Weapon Use
Google to Do Away With Artificial Intelligence for Weapons on Moral Grounds
According to Google leadership, the company will not renew its Project Maven contract when it expires in 2019. Project Maven is the company's involvement with the U.S. military, which uses artificial intelligence to detect and identify people and objects in military drone surveillance videos. Many Google employees were upset by the arrangement: about 3,000 of them signed a petition voicing concerns that involvement with the military could harm Google, and objecting to the development of image recognition technology that military drones could use to identify and track targets. Gizmodo reported on June 1 that the company would not renew the Project Maven contract after it expires in June 2019.
How Does Artificial Intelligence Play A Role In Project Maven?
Project Maven, also known as the Algorithmic Warfare Cross-Functional Team, uses artificial intelligence to detect objects in military drone surveillance videos. With the help of AI, the military can cut down on human surveillance work and rely instead on automated surveillance technology. These systems process footage captured by military drones in countries such as Iraq and Syria.
The Opposing Views on Use of Artificial Intelligence
Many artificial intelligence researchers have opposed the use of AI in lethal autonomous weapons, in which targets can be identified and engaged without any human intervention. A campaign to boycott the use of AI technology in weapons prompted a South Korean university to abandon its plans to develop autonomous weapons.
The benefits of Artificial Intelligence for military purposes
Stuart Russell, a professor of computer science and AI researcher at the University of California, Berkeley, says he is not against the use of AI for military purposes. AI can be useful to the military in logistical planning, reconnaissance and anti-missile defence.
Since artificial intelligence can play both positive and negative roles, regulating such technologies is difficult. If an international ban on autonomous weapons comes into play, researchers can concentrate on developing defence-related artificial intelligence, which would in turn help prevent the misuse of autonomous technology.
Sundar Pichai, the CEO of Google, ruled out AI applications for weapons and said the company will not pursue technologies that cause harm and destruction, surveillance technologies that violate international norms, or technologies that contravene international law and human rights.
Google has kept its options open with regard to working with the government and the military in order to keep service members and civilians safe. The company has made it clear that it will not develop AI technology for weapons, but will offer its services in cybersecurity, healthcare, training, military recruitment, and search and rescue operations.
The news that Google employees persuaded the company to cancel Project Maven is considered a big win for ethical AI principles. Various technology firms have likewise promised to use artificial intelligence in ways that benefit people and society at large.