Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/311807 
Authors: 
Year of Publication: 2025
Series/Report no.: CIGI Papers No. 315
Publisher: Centre for International Governance Innovation (CIGI), Waterloo, ON, Canada
Abstract: 
The need for transparency is often emphasized in international governance discussions on military artificial intelligence (AI) systems; however, transparency is a complex and multi-faceted concept, understood in various ways within international debates and literature on the responsible use of AI. It encompasses dimensions such as explainability, interpretability, understandability, predictability and reliability. The degree to which these aspects are reflected in state approaches to ensuring transparent and accountable systems remains unclear and requires further investigation. This paper examines the feasibility of achieving transparency in military AI systems, considers the associated challenges and proposes pathways to develop effective transparency mechanisms. Transparency efforts are one critical part of the broader governance and regulatory framework that needs to be developed for military applications of AI and autonomy.
Creative Commons License: CC BY
Document Type: Working Paper
