2ND SUMMIT ON REAIM

Last Updated on 10th September, 2024

Source: IndianExpress

Disclaimer: Copyright infringement not intended.

 

Context

The second summit on Responsible Use of Artificial Intelligence in the Military Domain (REAIM) began in Seoul, South Korea.

Details

What is the Responsible Use of Artificial Intelligence in the Military Domain (REAIM) Summit?

  • REAIM is a summit that brings together nations, technology companies, academics, and civil groups to discuss the use of AI in military settings in a controlled and safe manner.
  • REAIM summits focus on responsible use rather than halting AI development in military affairs.
  • There is a clear need to design a set of global guidelines for the safe use of AI in order to avoid catastrophic outcomes.
  • The US has already issued a political declaration on responsible AI use.
  • NATO has also adopted a policy aimed at ensuring the safe use of AI in combat.

India and the Debate

  • India has been more or less skeptical about participating in these global discussions.
  • While it uses AI in civilian sectors, it has not committed fully to shaping the rules for military AI.
  • In that respect, India's reluctance resembles its earlier hesitation to join nuclear arms control talks, a hesitation that had long-term consequences.
  • The longer India waits, the greater the chance of missing out on influencing global rules on military AI.

China's Approach

  • Unlike India, China has been very active in these talks.
  • It regards AI as essential to its concept of future "intelligentized warfare."
  • China supported the REAIM summit's call for responsible use of military AI and has its own laws on the use of AI in warfare.

REAIM 2023

  • REAIM 2023 was held in The Hague, Netherlands.
  • There were sixty participating countries, including major players like the United States and China, but Russia was noticeably absent.
  • The summit issued a nonbinding document, a "call to action" supported by 60 nations, for the responsible use of AI in military settings.

Political Declaration on Responsible Military Use 

  • The most important outcome of the summit was a proposal by the US for a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy".
  • It laid down principles for the use of artificial intelligence in military contexts in an ethical, transparent manner compatible with international law, in particular international humanitarian law.
  • It stated that military AI must:
  • Comply with international law, including international humanitarian law during armed conflict.
  • Ensure human accountability through a chain of command that is responsible for all activities.
  • Reduce unintended bias and accidents through close scrutiny and examination of AI systems and a transparent approach.
  • Ensure safety, security, and effectiveness through rigorous testing, including features related to the safety of autonomous systems.
  • Moreover, the declaration called on endorsing states to publicly commit to these principles and to encourage transparency in military AI use.
  • It was meant to advocate for continued discussions about responsible military deployment of AI and encourage the international community to institute similar measures.

AI in the Military

  • Artificial intelligence is now part of actual military operations on all sides in various parts of the world, especially in Ukraine and Gaza.
  • Both of these places are "labs" where AI is being trialed for its full potential in war.
  • AI aids in decision-making, logistics, surveillance, and even in pinpointing targets during battles.
  • While AI can make operations more accurate and less damaging to civilians, critics caution against dangerous over-reliance on it.

 

 


Autonomous Weapons and Lethal AI

  • One of the most debated topics is lethal autonomous weapons systems (LAWS), also known as "killer robots."
  • These weapons can make decisions without human input, raising ethical and legal concerns.
  • The United Nations (UN) has been discussing these concerns since 2019, trying to find ways to regulate their use.

UNIDIR Expert Network on Governance of Artificial Intelligence in the Military Domain 

  • It is a collaborative initiative aimed at developing robust governance mechanisms for AI technologies used in military settings.
  • The network brings together experts from various sectors to explore the complexities and societal implications of military AI governance.

Key Focus Areas of the Network

  • The network examines how AI policies and action plans are designed, implemented, and evaluated at the national level, particularly in the military domain.
  • It assesses the risks associated with AI in international security, focusing on both weapons-related and non-weapons-related military functions.
  • The network supports AI-related data initiatives through the AI Policy Portal, aiming to provide a comprehensive view of AI governance efforts.
  • In collaboration with the UNIDIR Futures Lab, the network engages in strategic foresight exercises, exploring the future implications of AI and emerging technologies in military contexts.
  • Initiatives like the Women in AI Fellowship and the Roundtable for AI, Security and Ethics (RAISE) demonstrate the Institute's commitment to equitable and substantive participation.

Current Initiatives for AI Adoption in the Indian Military

  • In February 2018, the Department of Defence Production established a task force to explore AI applications in defense.
  • Based on the task force's recommendations, the Defence AI Council (DAIC) and the Defence AI Project Agency (DAIPA) were formed in 2019.
  • DAIC: Headed by the Defence Minister, DAIC provides guidance on policy changes and structural support for AI adoption.
  • DAIPA: Oversees AI project implementation, standards development, and resource allocation.
  • The Ministry of Defence has allocated Rs. 100 crore annually for AI projects over five years.
  • Additionally, each service is expected to earmark Rs. 100 crore annually for AI-specific applications.

 

Way Forward

  • Oversee the formulation of an overall Chief of Defence Staff (CDS) AI strategy covering objectives, sectors of use, organization, and ethical considerations.
  • A Directorate of Artificial Intelligence should be established, with the following three initial sections set up under it:
  • Policy Section: Plans policies for AI technologies and harmonizes with Service HQs to ensure uniformity in AI functions.
  • Data Section: Ensures the availability of data to AI applications and facilitates inter-service integration of data.
  • Acquisition Section: Approves the acquisition of AI systems and confirms their adherence to policy and ethical standards.
  • Several questions regarding human supervision and autonomy arise in the development of AI systems. Military leadership will have to decide how much autonomy to allow AI systems while ensuring transparency in AI decision-making.
  • The adoption of AI requires a skillset that is rare in the job market; retaining talent that can keep pace with the development of AI will require new personnel policies, incentives, and curated training.
  • Military AI requires collaboration between the military and civilian industry. A few selected dual-use technologies, such as image recognition or robotics, can easily be adapted for defense.
  • India's military allocation for AI is a fraction of what China spends annually, and much more is needed to promote indigenous AI industries in order to remain competitive.

Sources:

IndianExpress

PRACTICE QUESTION

Q:  The integration of Artificial Intelligence (AI) into military operations presents significant opportunities and challenges. Discuss the principles of responsible use of AI in the military. How can India ensure the responsible deployment of AI technologies in its defense sector? (250 Words)
