Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/331350 
Year of Publication: 
2025
Series/Report no.: 
IES Working Paper No. 12/2025
Publisher: 
Charles University in Prague, Institute of Economic Studies (IES), Prague
Abstract: 
This paper develops a novel gradient-based reinforcement learning algorithm for solving dynamic quantile models under uncertainty. Unlike traditional approaches that rely on expected utility maximization, we focus on agents who evaluate outcomes based on specific quantiles of the utility distribution, capturing intratemporal risk attitudes via a quantile level τ ∈ (0, 1). We formulate a recursive quantile value function associated with time-consistent dynamic quantile preferences in a Markov decision process. In each period, the agent maximizes the conditional τ-quantile, given the current state, of the distribution formed by instantaneous utility plus the discounted future value. We then adapt the Actor-Critic framework to learn the τ-quantile of this distribution and the policy that maximizes it. We demonstrate the accuracy and robustness of the proposed algorithm using a quantile intertemporal consumption model with known analytical solutions. The results confirm that the algorithm captures optimal quantile-based behavior and remains stable.
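The abstract describes an Actor-Critic scheme built around a recursive τ-quantile value function. Below is a minimal illustrative sketch of that idea, assuming a simple consumption-savings setting with log utility, a linear-in-log critic, a fixed consumption-share policy, and a lognormal return on savings; all of these specifics, including the learning rates, are assumptions made for illustration and not the paper's actual specification. The critic is updated with a subgradient of the pinball (check) loss, whose minimizer is the τ-quantile of the target, and the actor takes a crude finite-difference ascent step on the critic-evaluated objective.

import numpy as np

# Illustrative sketch only (assumed setting, not the paper's specification):
# the critic v(s; w) tracks the tau-quantile of u(c) + beta * v(s'),
# trained with the pinball (check) loss; the actor is a consumption
# share theta updated by finite-difference ascent on the quantile objective.

tau, beta = 0.5, 0.95                 # quantile level and discount factor (assumed)
alpha_w, alpha_theta = 0.01, 0.001    # learning rates (assumed)

def utility(c):
    return np.log(max(c, 1e-8))       # log utility (illustrative)

def value(s, w):
    return w[0] + w[1] * np.log(max(s, 1e-8))   # linear-in-log critic

def policy(s, theta):
    return np.clip(theta, 0.01, 0.99) * s       # consume a share of wealth

rng = np.random.default_rng(0)
w = np.zeros(2)
theta = 0.5

for _ in range(20_000):
    s = rng.uniform(0.5, 5.0)                   # sample a wealth state
    R = np.exp(rng.normal(0.02, 0.1))           # gross return shock (assumed)
    c = policy(s, theta)
    s_next = R * (s - c)

    # Critic: subgradient step on the pinball loss, so v(s; w) moves toward
    # the tau-quantile of the target u(c) + beta * v(s').
    target = utility(c) + beta * value(s_next, w)
    err = target - value(s, w)
    grad_sign = tau if err > 0 else tau - 1.0
    w += alpha_w * grad_sign * np.array([1.0, np.log(max(s, 1e-8))])

    # Actor: finite-difference ascent on the critic-evaluated quantile objective,
    # reusing the same return shock R for both perturbations.
    eps = 1e-3
    def q_obj(th):
        ci = policy(s, th)
        return utility(ci) + beta * value(R * (s - ci), w)
    theta += alpha_theta * (q_obj(theta + eps) - q_obj(theta - eps)) / (2 * eps)

print("learned consumption share:", round(float(np.clip(theta, 0.01, 0.99)), 3))

The pinball-loss update is the standard stochastic-approximation route to a conditional quantile; the finite-difference actor step here merely stands in for whichever gradient estimator the paper actually employs.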
Subjects: 
Dynamic programming
Quantile preferences
Reinforcement learning
JEL: 
C61
C63
Document Type: 
Working Paper
