Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/258066 
Year of Publication: 
2020
Citation: 
[Journal:] Risks [ISSN:] 2227-9091 [Volume:] 8 [Issue:] 4 [Article No.:] 113 [Publisher:] MDPI [Place:] Basel [Year:] 2020 [Pages:] 1-20
Publisher: 
MDPI, Basel
Abstract: 
In traditional Reinforcement Learning (RL), agents learn to optimize actions in a dynamic context based on recursive estimation of expected values. We show that this form of machine learning fails when rewards (returns) are affected by tail risk, i.e., leptokurtosis. Here, we adapt a recent extension of RL, called distributional RL (disRL), and introduce estimation efficiency, while properly adjusting for the differential impact of outliers on the two terms of the RL prediction error in the updating equations. We show that the resulting "efficient distributional RL" (e-disRL) learns much faster, and is robust once it settles on a policy. Our paper also provides a brief, nontechnical overview of machine learning, focusing on RL.
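The abstract contrasts the standard expected-value update with a distributional approach. As a rough illustration only (the paper's actual e-disRL updating equations are not reproduced here), the Python sketch below shows a plain tabular TD/Q-learning step, in which a single heavy-tailed reward enters the prediction error at full weight, next to a toy quantile-style distributional update whose per-sample step is bounded. All function names, step sizes, and the Student-t reward draw are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def td_update(q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard expected-value TD step: Q(s,a) += alpha * delta, with
    delta = r + gamma * max_a' Q(s',a') - Q(s,a).  An extreme reward r
    enters delta at full weight, which is why leptokurtic returns make
    the recursive mean estimate erratic."""
    delta = r + gamma * np.max(q[s_next]) - q[s, a]
    q[s, a] += alpha * delta
    return delta

def quantile_update(theta, r, theta_next, alpha=0.05, gamma=0.95):
    """Toy quantile-style distributional step: each quantile estimate of the
    return distribution moves by at most alpha per sample (quantile-regression
    sign update), so a single tail outlier shifts the estimates only slightly."""
    taus = (np.arange(len(theta)) + 0.5) / len(theta)
    target = r + gamma * theta_next.mean()         # crude scalar target, for illustration only
    theta += alpha * (taus - (target < theta))     # indicator casts to 0/1
    return theta

# Feed both updates the same leptokurtic (Student-t, 2.5 dof) zero-mean rewards.
rng = np.random.default_rng(0)
q = np.zeros((2, 2))
theta = np.zeros(11)
for _ in range(2000):
    r = rng.standard_t(df=2.5)        # heavy-tailed reward draw
    td_update(q, 0, 0, r, 1)
    quantile_update(theta, r, theta)
# The median of the quantile estimates is typically far less affected by
# single outliers than the expected-value estimate q[0, 0].
print(q[0, 0], np.median(theta))
```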
Subjects: 
distributional reinforcement learning
Markov decision process
leptokurtic distribution
tail risk
efficient estimator
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article