Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/301936 
Year of Publication: 
2024
Series/Report no.: 
CIGI Papers No. 296
Publisher: 
Centre for International Governance Innovation (CIGI), Waterloo, ON, Canada
Abstract: 
The focus of this paper is policy guidance around explainable artificial intelligence (AI), that is, the ability to understand how AI models arrive at their outcomes. Explainability matters in human terms because it underpins, among other things, an individual's "right to explanation," and it also plays a role in enabling technical evaluation of AI systems. The paper begins with an examination of the meaning of explainability, concluding that the constellation of related terms serves to frustrate and confuse policy initiatives. Following a brief review of contemporary policy guidance, it argues that there is a need for greater clarity and context-specific guidance, highlighting the need to distinguish between ante hoc and post hoc explainability, especially in high-risk, high-impact contexts. Whether ante hoc or post hoc methods have been employed is a fundamental and often-overlooked question in policy. The paper argues that deciding which method should be employed in a given context, along with the requirement for human-level understanding, is a key challenge that policy makers need to address. A taxonomy for how explainability can be operationalized in AI policy is proposed, and a series of recommendations is set forth.
Creative Commons License: 
cc-by
Document Type: 
Working Paper
