Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/294875 
Year of Publication: 2024
Series/Report no.: ZEW policy brief No. 06/2024
Publisher: ZEW - Leibniz-Zentrum für Europäische Wirtschaftsforschung, Mannheim
Abstract:
With the final approval of the EU's Artificial Intelligence Act (AI Act), it is now clear that general-purpose AI (GPAI) models with systemic risk will need to undergo adversarial testing. This provision is a response to the emergence of "generative AI" models, which are currently the most notable form of GPAI models, generating rich-form content such as text, images, and video. Adversarial testing involves repeatedly interacting with a model to try to lead it to exhibit unwanted behaviour. However, the specific implementation of such testing for GPAI models with systemic risk has not been clearly spelled out in the AI Act. Instead, the legislation only refers to codes of practice and harmonised standards which are soon to be developed. In this policy brief, which is based on research funded by the Baden-Württemberg Foundation, we propose that these codes and standards should reflect that an effective adversarial testing regime requires testing by independent third parties, a well-defined goal, clear roles with proper incentive and coordination schemes for all parties involved, and standardised reporting of the results. The market design approach is helpful for developing, testing and improving the underlying rules and the institutional setup of such adversarial testing regimes. We outline the design space for an extensive form of adversarial testing, called red teaming, of generative AI models. This is intended to stimulate the discussion in preparation for the codes of practice, harmonised standards and potential additional provisions by governing bodies.
Document Type: Research Report
