Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/237027 
Year of Publication: 
2021
Citation: 
[Journal:] Foundations of Management [ISSN:] 2300-5661 [Volume:] 13 [Issue:] 1 [Publisher:] De Gruyter [Place:] Warsaw [Year:] 2021 [Pages:] 103-116
Publisher: 
De Gruyter, Warsaw
Abstract: 
Recent developments in artificial intelligence (AI) may pose significant threats to personal data privacy, national security, and social and economic stability. AI-based solutions are often promoted as "intelligent" or "smart" because they are autonomous in optimizing various processes. Because they can modify their behavior without human supervision by analyzing data from the environment, AI-based systems may be more prone to malfunctions and malicious activities than conventional software systems. Moreover, due to existing regulatory gaps, the development and operation of AI-based products are not yet subject to adequate risk management and administrative supervision. In response to recent reports about potential threats posed by AI-based systems, this paper presents an outline of a prospective risk assessment for adaptive and autonomous products. This research resulted in extensive catalogs of possible damages, initiating events, and preventive policies that can be useful for risk managers conducting risk assessment procedures for AI-based systems. The paper concludes with an analysis and discussion of the changes in business, legal, and institutional environments required to assure the public that AI-based solutions can be trusted, are transparent and safe, and can improve quality of life.
Subjects: 
digital innovations
digital services
artificial intelligence
smart services
risk assessment
risk management
JEL: 
M10
Creative Commons License: 
CC BY-NC-ND
Document Type: 
Article

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.