Accountability for Autonomous Lethality

By: Dan Kuwali, Affiliated Scholar* 

Artificial intelligence (AI) represents a transformative, society-shaping technology. When applied in the military sphere, cognitive computing has far-reaching consequences for the conduct of warfare, influencing how decisions are taken, how operations are executed, and how responsibility is assigned in one of humanity’s most consequential domains. The proliferation of lethal autonomous weapon systems (LAWS), also known as “killer robots”, has stirred profound legal, ethical, and humanitarian debates.

LAWS are weapons that use AI to autonomously distinguish military objectives from civilian objects and to select and engage targets without human intervention. The rapid pace of AI development is reshaping the very nature and character of contemporary armed conflict. Integrating AI into military systems does more than boost operational performance; it reconfigures how armed forces apply the Observe-Orient-Decide-Act (OODA) cycle by introducing faster and more informed decision-support capabilities across key functions such as intelligence, surveillance, and reconnaissance (ISR).

As AI, machine learning, and advanced robotics increasingly enable weapons capable of selecting and engaging targets with minimal or no human intervention, long-standing assumptions about agency, responsibility, and control in armed conflict are being challenged. In this context, questions about accountability are no longer theoretical but are real and urgent.

The Core Challenge: Autonomy, Agency, and Human Responsibility

At the heart of the debate on LAWS lies a simple yet unsettling truth: machines cannot be held morally or legally responsible. As the International Committee of the Red Cross (ICRC) has consistently emphasised, international humanitarian law (IHL) is addressed to humans — those who plan, decide, and carry out attacks — not to machines.

For LAWS, this reality poses a challenge: if the critical functions of selecting and attacking targets are delegated to software, sensors, or algorithms, how can responsibility for unlawful conduct, such as indiscriminate attacks on civilians, disproportionate force, or a failure to distinguish military objectives from civilian objects, be meaningfully attributed?

Proponents of LAWS argue that human judgement remains embedded in design, programming, activation, deployment, and chain-of-command decisions — and that these human contributions can provide a basis for responsibility. Others, including civil society actors such as Human Rights Watch (HRW) and Article 36, warn that liability and accountability gaps remain likely, especially given the unpredictability of complex real-world environments.

The Global Commission on Responsible AI in the Military Domain (GC REAIM) promotes a “responsibility by design” approach, which calls for ethical and legal safeguards to be built into AI systems from the outset and maintained throughout their entire lifecycle. To this end, armed forces and other relevant actors should embed responsibility-by-design principles and practices across all stages of development.

To advance ethical, lawful, and human-centric governance, the GC REAIM outlines three core principles for responsible military use of AI: a) compliance with international law and ethics—LAWS must adhere to international legal obligations and widely shared ethical norms to protect human life and promote lasting peace; b) human-centric system design—LAWS should be developed and tested through rigorous, lifecycle-wide processes that safeguard human agency, responsibility, and accountability, ensuring that all critical decisions remain under meaningful human control; and c) empowered and informed personnel—developers and operators must be supported through institutional practices, including continuous training, capacity-building, and system designs that enhance understanding and oversight, ensuring informed human judgement throughout the system’s use.

What Institutions Say: Article 36, ICRC, GGE, and the Emerging Consensus

Since 2013, the issue of LAWS has been debated under the framework of the Convention on Certain Conventional Weapons (CCW), notably through its Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE). The GGE’s “Guiding Principles on LAWS” reflect widespread — although not unanimous — agreement that existing IHL applies to LAWS; that human responsibility for decisions to use force must be retained; and that machines cannot bear responsibility.

Taking into account humanitarian imperatives, legal responsibilities, ethical concerns, and the realities of military operations, the ICRC and the Stockholm International Peace Research Institute (SIPRI) recommend a blended approach to regulating LAWS. This approach involves three complementary forms of control: a) limits on the design and technical parameters of the weapon; b) restrictions linked to the operational context in which the system may be deployed; and c) measures that ensure meaningful and appropriate human–machine interaction. These safeguards should guide not only the use of LAWS, but also their research, testing, development, and procurement.

The ICRC, for its part, urges states to adopt binding regulations to ensure “sufficient human control and judgement” over LAWS. This includes limiting use against human targets, restricting deployment in certain situations and environments, and ensuring human supervision, predictability, and traceability.

Yet the institutional consensus — while a vital first step — leaves many questions open. What, precisely, constitutes “human control”? Which humans — commanders, operators, programmers, designers, manufacturers? And how does accountability function when multiple human actors, across different phases, contribute to the design, deployment, and activation of autonomous systems?

Key Questions of Accountability

Under the rules of IHL governing the conduct of hostilities, the duty to comply rests with the individuals who plan, authorise, and execute military operations. Machines cannot bear legal responsibility; only humans can interpret, apply, and be held accountable for these rules. Regardless of the technologies employed—whether software, automated systems, or advanced weapons—responsibility for their use and for the consequences they produce remains squarely with the persons and parties directing the hostilities.

Even so, the fact that LAWS can select and engage targets autonomously raises serious concerns about whether, in practice, parties to a conflict and individual operators can be held legally accountable for their outcomes, including any violations of IHL. Given this context, several interlocking questions drive the examination of accountability for LAWS. Below are five central questions — and a sketch of preliminary reflections based on existing doctrine, legal principles, and emerging scholarly debate.

  1. Who bears legal responsibility for IHL violations committed by LAWS?

At the most general level, accountability for LAWS-related violations might be sought under existing international law frameworks: the law of state responsibility for internationally wrongful acts and individual criminal responsibility for war crimes. These frameworks remain valid: autonomy does not exempt states or individuals from the obligations to respect the distinction between combatants and civilians, to observe proportionality, to exercise precaution, and to act in accordance with military necessity.

However, applying them to LAWS raises serious practical and normative challenges: unpredictability of effects; opacity of machine decision‑making; difficulty in tracing who programmed, activated, or authorised a given attack. For these reasons, many experts argue that existing frameworks risk leaving a significant accountability vacuum, undermining effective oversight and responsibility when autonomous systems are involved in the use of force.

  2. To what extent can states be held responsible for LAWS violations?

If a LAWS-enabled attack results in unlawful harm — for instance, indiscriminate civilian casualties — the deploying state arguably remains responsible. Under the law of state responsibility for internationally wrongful acts, the state is accountable for wrongful acts committed by its organs or agents, including the military, even if the acts were carried out by machines rather than by humans.

Moreover — and critically — states have an obligation under IHL (e.g., under Additional Protocol I to the Geneva Conventions and customary law) to prevent, prosecute, and punish serious violations. The use of LAWS does not discharge those obligations.

That said, liability in practice may depend on the quality of the state’s regulatory and review framework: Was the weapon lawfully reviewed? Was there human supervision? Was the use within predictable parameters? If not, states may bear responsibility for failure to prevent or control unlawful use. This possibility is highlighted in institutional reviews of LAWS.

  3. How can individuals be held responsible for LAWS violations?

At first glance, attributing individual criminal responsibility may seem problematic: machines act autonomously, so who is the “perpetrator”? Some argue that programmed instructions, deployment orders, or command authorisation suffice to attribute war crimes to those humans. But this approach runs into difficulties: many LAWS rely on machine‑learning algorithms whose decisions may be unpredictable, emergent, or opaque. In such cases, it may be difficult to establish the requisite mental elements for criminal liability (intent, knowledge, recklessness, negligence). As argued by HRW and others, these obstacles risk creating an accountability vacuum — a scenario in which no human can be meaningfully held criminally liable, even when civilians suffer unjust harm.

  4. Do LAWS create an accountability gap under existing IHL — and if so, how should responsibility be allocated?

Many experts believe the risk of an accountability gap is real and significant. The core of this concern lies in the separation — sometimes vast — between the human actors involved (programmers, designers, commanders, operators) and the moment when the autonomous system makes a targeting decision. When that separation is combined with unpredictability and opacity, tracing decisions back to human agents becomes exceedingly difficult.

To address this, scholars recommend a “chain of responsibility” model that maps all stages of a weapon system’s lifecycle — design, manufacturing, programming, deployment, activation, command authorisation — and allocates accountability accordingly.

In practice, this could mean that: designers and manufacturers bear responsibility for defects, negligence, or inadequate safeguards; programmers and developers bear responsibility if they intentionally design algorithms that foreseeably violate IHL; commanders or operators bear responsibility if they authorise deployment without appropriate human control or supervision; and the state bears responsibility if it fails to regulate, review, or supervise appropriately.
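To make the chain-of-responsibility idea more tangible, the sketch below shows one way such an allocation could be represented in a compliance-review or audit tool. It is purely illustrative: the lifecycle phases, actor labels, and the `actors_answerable_for` helper are assumptions made for this example, not an established legal standard or any institution's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum


class Phase(Enum):
    """Lifecycle stages named in the chain-of-responsibility model."""
    DESIGN = "design"
    MANUFACTURING = "manufacturing"
    PROGRAMMING = "programming"
    DEPLOYMENT = "deployment"
    ACTIVATION = "activation"
    COMMAND_AUTHORISATION = "command authorisation"


@dataclass
class Allocation:
    """Who is answerable at a given lifecycle stage, and on what basis."""
    phase: Phase
    responsible_actor: str   # e.g. "designer/manufacturer", "commander"
    basis_of_liability: str  # e.g. "defects or inadequate safeguards"


# A hypothetical allocation table mirroring the breakdown in the paragraph above.
CHAIN_OF_RESPONSIBILITY = [
    Allocation(Phase.DESIGN, "designer/manufacturer", "defects, negligence, or inadequate safeguards"),
    Allocation(Phase.PROGRAMMING, "programmer/developer", "algorithms that foreseeably violate IHL"),
    Allocation(Phase.COMMAND_AUTHORISATION, "commander/operator", "authorising deployment without appropriate human control"),
    Allocation(Phase.DEPLOYMENT, "state", "failure to regulate, review, or supervise"),
]


def actors_answerable_for(phase: Phase) -> list[Allocation]:
    """Return every allocation entry attached to a given lifecycle phase."""
    return [a for a in CHAIN_OF_RESPONSIBILITY if a.phase == phase]
```

Even a simple table of this kind forces an explicit answer to the question the model poses: which human actor is answerable at which stage, and for what kind of failure.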

  5. What legal mechanisms are most appropriate for allocating accountability for LAWS-related violations?

Given the complexity, opacity, and diffusion of responsibility in LAWS use, relying solely on traditional frameworks (state responsibility; command or individual criminal liability) may be insufficient. Instead, a comprehensive, systemic mechanism may be required — one that combines: a) mandatory and robust weapon review procedures before deployment; b) legal requirements for “meaningful human control” in operations (supervision, ability to intervene, predictability, and traceability); c) transparent record‑keeping throughout the lifecycle of the system (design, programming, deployment, use, maintenance); d) clear chain-of-command and chain-of-responsibility rules allocating liability across actors; e) national (and, potentially, international) mechanisms for investigation, prosecution, and redress in case of unlawful outcomes; and f) binding international rules (treaty or protocol) to ensure uniform standards. To this end, the ICRC, among others, has urged states to negotiate a new legally binding instrument to regulate or prohibit certain types of autonomous weapons.
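As an illustration of what lifecycle record-keeping and traceability could look like in practice, the sketch below outlines a hypothetical audit-log entry capturing who decided what, when, and under what degree of human supervision. The field names and the `log_decision` helper are assumptions made for this example; they do not reflect any existing military logging standard or the specific mechanisms proposed by the ICRC.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One traceability entry in a hypothetical lifecycle audit log for a LAWS."""
    timestamp_utc: str       # when the decision or action was recorded
    lifecycle_stage: str     # e.g. "weapon review", "activation", "engagement"
    actor: str               # the accountable human or institution
    action: str              # what was decided or done
    human_supervision: bool  # whether a human could intervene at this point
    rationale: str           # recorded justification (review outcome, targeting basis, etc.)


def log_decision(log: list, **fields) -> DecisionRecord:
    """Append a time-stamped record so later investigations can trace the decision chain."""
    record = DecisionRecord(timestamp_utc=datetime.now(timezone.utc).isoformat(), **fields)
    log.append(record)
    return record


# Example: a minimal trace from legal review to command authorisation.
audit_log: list[DecisionRecord] = []
log_decision(audit_log, lifecycle_stage="weapon review", actor="national review board",
             action="approved system for use within defined technical parameters",
             human_supervision=True, rationale="Article 36 (Additional Protocol I) legal review completed")
log_decision(audit_log, lifecycle_stage="command authorisation", actor="operational commander",
             action="authorised deployment within a defined area and time window",
             human_supervision=True, rationale="target area assessed; abort authority retained")

print(json.dumps([asdict(r) for r in audit_log], indent=2))
```

Records of this sort are only a building block; their value for accountability depends on the investigation, prosecution, and redress mechanisms listed above actually having access to them.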

The Stakes: Why Accountability Matters

The issue of accountability for the lethality of autonomous weapons is not a narrow technical or military debate. The question is existential: does humanity accept a future in which machines — not humans — make life-and-death decisions, and where no human actor can be held responsible if those decisions go horribly wrong?

An accountability gap does more than undermine justice for victims. It erodes deterrence, weakens compliance incentives, frustrates reconciliation, and normalises the use of force without moral or legal restraint. Conversely, a robust accountability regime — built on human responsibility, transparency, and enforceable obligations — can strengthen civilian protection, preserve human dignity in armed conflict, and reinforce the rule of law even as technology evolves.

Conclusion: Call for Clarity, Regulation, and Human-Centred Control

“Accountability for autonomous lethality” demands more than academic reflection and rhetorical advocacy — it requires a firm normative commitment. States, individuals, and corporations must reaffirm the principle that only humans can and should bear responsibility for decisions to use lethal force.

State and non-state actors alike have a duty to ensure that LAWS are subject to meaningful human control, that their development, deployment, and use are transparent and reviewable, and that legal responsibility is clearly assigned across all relevant actors. The work of the GGE, the ICRC, HRW, Article 36, and leading scholars provides an important foundation — but much remains to be done.

As LAWS transition from speculative future to concrete reality, there is no place for legal ambiguity or ethical abdication. Building on the trajectory set by the GGE and the Tallinn Manual on the International Law Applicable to Cyber Operations, the international community — states, civil society, and academia — must support the codification of accountability for violations arising from the deployment of LAWS. Otherwise, humankind risks entrusting death to algorithmic systems, without ever deciding who should answer when those algorithms kill.

Recommendations: Ensuring Accountability for Autonomous Lethality

Coupled with the growing dangers posed by LAWS, the more than 130 armed conflicts currently underway worldwide have driven human suffering to alarming levels. In flagrant violation of IHL, civilians are deliberately targeted, civilian infrastructure is systematically destroyed, and humanitarian assistance is obstructed or denied. In response to these grave challenges, the Raoul Wallenberg Institute of Human Rights and Humanitarian Law (RWI) has developed an ambitious, evidence-based initiative—the International Humanitarian Law Compliance Monitoring Database (ICMD). The ICMD, whose mission is to promote compliance for a more humane world, is designed to systematically collect, centralise, and rigorously analyse data on incidents of IHL relevance arising from armed conflicts worldwide, thereby strengthening accountability, informing policy and legal analysis, and enhancing compliance with IHL. Inter alia, the ICMD will prioritise issues related to the regulation of LAWS, with a particular focus on strengthening compliance with IHL obligations to protect civilians and civilian objects.

As a starting point, the following recommendations outline the measures necessary to preserve clear human responsibility and legal accountability in the development, deployment, and use of autonomous lethal capabilities.

  1. Adopt Binding International Standards on Human Control: States should negotiate and adopt international instruments that mandate meaningful human control over LAWS, including the authority to intervene or abort an operation at any stage.
  2. Clarify Chains of Responsibility: Legal frameworks should clearly delineate responsibility among designers, programmers, operators, commanders, and states. The “chain-of-responsibility” approach recommended by the GGE should be operationalised in domestic and international law.
  3. Strengthen National Weapon Review Mechanisms: Prior to deployment, all LAWS should undergo rigorous legal, technical, and ethical review to ensure compliance with IHL. States should establish transparent procedures with accountability for review failures.
  4. Mandatory Transparency and Record-Keeping: Comprehensive logs of design, programming, testing, deployment, and operational decisions should be maintained to enable accountability in the event of violations.
  5. Integrate Criminal Accountability Frameworks: National and international criminal law should be adapted to address the role of individuals in authorising, programming, or deploying LAWS, even when autonomous systems mediate actions.
  6. Invest in Training, Risk Assessment, and Capacity-Building: Military, engineering, and legal personnel should receive specialised training on LAWS, focusing on legal obligations, risk assessment, and ethical considerations. In addition to the requirement for legal advisers under Article 82 of Additional Protocol I to the Geneva Conventions, the inclusion of risk analysts in planning military operations employing LAWS is imperative.
  7. Foster Multi-Stakeholder Dialogue: States should engage civil society, academia, and international organisations to continuously review, assess, and improve regulatory frameworks governing LAWS.
  8. Encourage Regional Harmonisation: Regional bodies, such as the African Union, should harmonise positions and standards on LAWS, building on frameworks like the Common African Position on the Application of International Law to the Use of Information and Communication Technologies in Cyberspace to ensure coherent approaches across states.
  9. Implement Redress and Remediation Mechanisms: Victims of unlawful LAWS operations should have access to effective remedies, including compensation and accountability processes, to reinforce deterrence and compliance.
  10. Promote Research on Accountability Gaps and Emerging Technologies: Continuous academic and policy research should monitor evolving technologies, their operational use, and potential gaps in legal frameworks to ensure proactive regulation.

Dedication
In cherished memory of Ms Lena Olsson — an exceptionally astute librarian, a generous colleague, and a steadfast source of intellectual curiosity. Her unwavering support, kindness, and willingness to assist were invaluable to me during my postgraduate studies at Lund University and beyond. Her legacy of service and scholarship continues to inspire all who had the privilege of knowing her.

Author

*Brigadier General, Commandant Emeritus, National Defence College-Malawi; Extraordinary Professor of International Law, Centre for Human Rights, University of Pretoria; Visiting Scholar, Raoul Wallenberg Institute of Human Rights and Humanitarian Law, Lund University; and Senior Research Fellow, Africa Institute of South Africa (AISA), Human Sciences Research Council (HSRC).

Disclaimer

The views expressed in this article are those of the author and do not represent those of any previous or current institution with which the author is affiliated.
