Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/274694 
Year of Publication: 
2022
Citation: 
[Journal:] Journal of Risk and Financial Management [ISSN:] 1911-8074 [Volume:] 15 [Issue:] 4 [Article No.:] 172 [Year:] 2022 [Pages:] 1-15
Publisher: 
MDPI, Basel
Abstract: 
We consider a risk-aware multi-armed bandit framework with the goal of avoiding catastrophic risk. Such a framework has multiple applications in financial risk management. We introduce a new conditional value-at-risk (CVaR) estimation procedure combining extreme value theory with automated threshold selection by ordered goodness-of-fit tests, and we apply this procedure to a pure exploration best-arm identification problem under a fixed budget. We empirically compare our results with the commonly used sample average estimator of the CVaR, and we show a significant performance improvement when the underlying arm distributions are heavy-tailed.
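As a rough illustration of the comparison described in the abstract (a minimal sketch, not the authors' implementation): the snippet below estimates CVaR two ways, once as the sample average of the worst losses and once via a peaks-over-threshold fit of a generalized Pareto tail. The fixed 90th-percentile threshold and the SciPy maximum-likelihood fit are simplifying assumptions here; the paper instead selects the threshold automatically via ordered goodness-of-fit tests and embeds the estimator in a fixed-budget best-arm identification procedure.

```python
# Sketch of the two CVaR estimators contrasted in the abstract: an
# extreme-value-theory (peaks-over-threshold / GPD) estimate versus the
# plain sample-average estimate.  The 90th-percentile threshold below is a
# placeholder assumption, not the paper's automated selection rule.
import numpy as np
from scipy.stats import genpareto


def sample_average_cvar(losses, alpha=0.95):
    """Empirical CVaR: mean of the worst (1 - alpha) fraction of the losses."""
    losses = np.sort(np.asarray(losses))
    k = max(1, int(np.ceil((1 - alpha) * losses.size)))
    return losses[-k:].mean()


def evt_cvar(losses, alpha=0.95, threshold_quantile=0.90):
    """Peaks-over-threshold CVaR: fit a GPD to exceedances, use the closed form."""
    losses = np.asarray(losses)
    u = np.quantile(losses, threshold_quantile)        # placeholder threshold
    exceedances = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(exceedances, floc=0)  # shape xi, scale sigma
    p_u = exceedances.size / losses.size               # empirical tail fraction
    if xi >= 1.0:                                      # CVaR is infinite for xi >= 1
        return np.inf
    # Standard GPD tail formulas (assumes xi != 0; the exponential limit is omitted).
    var = u + (sigma / xi) * (((1.0 - alpha) / p_u) ** (-xi) - 1.0)
    return var / (1.0 - xi) + (sigma - xi * u) / (1.0 - xi)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Heavy-tailed arm losses (Pareto-like), the regime in which the abstract
    # reports the EVT estimator outperforming the sample average.
    losses = rng.pareto(2.5, size=5000) + 1.0
    print("sample-average CVaR:", sample_average_cvar(losses))
    print("EVT (POT/GPD) CVaR :", evt_cvar(losses))
```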
Subjects: 
conditional value-at-risk
extreme value theory
heavy-tailed distributions
multi-armed bandits
risk-aware reinforcement learning
sequential decision making
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
