The Ministry of Defence is pursuing more than 200 artificial intelligence (AI) projects to develop lethal weapons systems, including “killer robots” that require little or no human control, documents released under a Freedom of Information request show.
But it is withholding information about many of the projects despite repeated assurances of transparency, and the AI programmes are going ahead without the key ethical guidelines it has long promised.
The MoD’s list of AI projects, released to Drone Wars UK, an independent group monitoring new weapons projects, includes programmes for all three branches of the armed forces, intelligence analysis, and drone swarms.
It covers major multi-billion pound projects stretching over several decades. These include the Future Combat Air System (which involves the proposed new Tempest aircraft, a project which Saudi Arabia has expressed interest in joining), new spy satellites and uncrewed submarines.
The core of the list, say Drone Wars, is a scheme to advance the development of AI-powered autonomous systems for use on the battlefield. Many of these use drones as a platform – usually aerial systems, but also maritime drones and autonomous ground vehicles.
“Even when they are under nominal human control, computer-directed weapons pose a high risk of civilian casualties for a number of reasons including the rapid speed at which they operate and difficulties in understanding the often un-transparent ways in which they make decisions”, say Drone Wars’ Peter Burt and Chris Cole.
Many aspects of the programmes have not been disclosed and the MoD has avoided answering questions from parliamentarians who have sought more details.
Although the government’s Defence Artificial Intelligence Strategy says that over 200 programmes are underway, fewer than half appear on the list provided to Drone Wars. The group’s request for full information was refused on “defence” or “national security” grounds.
The government says it “does not possess fully autonomous weapons and has no intention of developing them”. However, all the projects involve technologies that have the potential for use in autonomous weapon systems, and the UK does not support proposals put forward at the United Nations to ban them.
“It’s clear that the Ministry of Defence (MoD) is crossing a line here”, says Cole. “The projects in this list represent the building blocks needed to produce killer robots in the near future. The information revealed in this list raises significant questions about the government’s stated commitment not to develop autonomous weapon systems.”
Cole adds: “The MoD’s own Artificial Intelligence Strategy accepts that transparency will be essential in gaining acceptance for AI and similar new technologies. It is therefore very disappointing that the list of AI schemes had to be prised out of MoD following an FOI battle, and not released proactively at the time the Strategy document was published.
“The government has argued that it wishes to see AI technologies used for ethical and responsible purposes, and it should therefore use the AI summit planned for later this year to help kickstart a major international initiative to ban killer robots.”
The US also uses Britain to direct drone attacks in Africa, and recently applied to the British Civil Aviation Authority for permission to fly its military drones from “RAF” Fairford in Gloucestershire, an expanding air base operated by the US.
The use of drones has already raised serious questions about breaches of international law and rules of engagement.
“Oversight depends on primary evidence”, said the British parliamentary Intelligence and Security Committee in a report on the drone strike that killed British-born Reyaad Khan in Syria in 2015.
“The government should be more transparent on matters of such seriousness”, it added.
Britain’s spy agencies, GCHQ in particular, are also increasingly using AI to analyse the vast amounts of data they collect and store, and now want the government to relax existing safeguards designed to prevent the abuse of sensitive personal information.
The FoI papers also raise concerns about the transparency of the MoD’s Ethical Advisory Panel. Independent members of the panel have repeatedly stressed the need for it to work in a transparent manner, yet the MoD refuses to publish the terms of membership, meeting minutes, and reports prepared for the panel.
A discussion on implementing AI ethics principles has been partly redacted in the papers released to Drone Wars UK. The notes of the discussion state that “The Armed Forces need to set the right policy, permissions and constraints frameworks that comply with our legal and ethical obligations but also do not impede our ability to fight effectively”.
That raises the question: if it came to the crunch, which of the two principles would win out, complying with legal and ethical obligations, or removing anything that impedes the ability to fight effectively with these weapons?
Drone Wars argues that ethical principles which do not act as a brake on unethical conduct would be meaningless. It says the MoD’s commitment to an ethical approach for its AI programmes is slowly waning.
Burt says: “The lack of transparency about the panel’s activities is also a matter for concern, especially given that MoD’s own ethical principles pledge that ‘what our systems do, how we intend to use them, and our processes for ensuring beneficial outcomes result from their use should be as transparent as possible’ and that panel members have themselves called for more transparency.”
He adds: “With a long-standing and deeply ingrained culture of secrecy and unaccountability, MoD officials evidently have no understanding of how to ‘do transparency’: they may pay lip service to the concept but they don’t really ‘get it’ or understand the implications.”