Please use this link to cite this publication or to reference it as an online source: https://hdl.handle.net/10419/331350
Authors: 
Year of publication: 
2025
Series/Report no.: 
IES Working Paper No. 12/2025
Publisher: 
Charles University in Prague, Institute of Economic Studies (IES), Prague
Abstract: 
This paper develops a novel gradient-based reinforcement learning algorithm for solving dynamic quantile models under uncertainty. Unlike traditional approaches that rely on expected utility maximization, we focus on agents who evaluate outcomes based on specific quantiles of the utility distribution, capturing intratemporal risk attitudes via a quantile level τ ∈ (0, 1). We formulate a recursive quantile value function associated with time-consistent dynamic quantile preferences in a Markov decision process. In each period, the agent maximizes the quantile of the distribution formed by instantaneous utility plus the discounted future value, conditional on the current state. We then adapt the Actor-Critic framework to learn the τ-quantile of this distribution and a policy that maximizes it. We demonstrate the accuracy and robustness of the proposed algorithm using a quantile intertemporal consumption model with a known analytical solution. The results confirm that the algorithm captures optimal quantile-based behavior and remains numerically stable.
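As a reading aid, the following is a minimal sketch of the recursive quantile value function described in the abstract; the notation is assumed here (u for instantaneous utility, β for the discount factor, Q_τ for the conditional τ-quantile, P for the transition kernel) and may differ from the paper's own:

\[
V_\tau(s) \;=\; \max_{a \in \mathcal{A}(s)} \; Q_\tau\!\left[\, u(s,a) \;+\; \beta\, V_\tau(s') \;\middle|\; s,\, a \right],
\qquad s' \sim P(\cdot \mid s, a),
\]

where Q_τ[· | s, a] denotes the τ-quantile of the conditional distribution of the bracketed quantity given the current state and action. In the Actor-Critic adaptation outlined above, the critic estimates this conditional τ-quantile while the actor updates the policy to maximize it.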
Keywords: 
Dynamic programming
Quantile preferences
Reinforcement learning
JEL: 
C61
C63
Document type: 
Working Paper

Publications in EconStor are protected by copyright.