Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/333240 
Year of Publication: 
2025
Citation: 
[Journal:] OR Spectrum [ISSN:] 1436-6304 [Volume:] 47 [Issue:] 4 [Publisher:] Springer Berlin Heidelberg [Place:] Berlin/Heidelberg [Year:] 2025 [Pages:] 1217-1266
Publisher: 
Springer Berlin Heidelberg, Berlin/Heidelberg
Abstract: 
Due to exponentially growing state and action spaces, network dynamic pricing problems are analytically intractable, so state-of-the-art approaches rely on heuristics. Reinforcement learning has been applied successfully in various complex domains, but its applicability to pricing may be limited by two factors. First, the extensive state and action space exploration required causes revenue losses when training directly in the real world. Second, the alternative of replicating the real world in an accurate simulation and training therein has limitations as well, because calibrating the simulation would require precise domain knowledge, which in general does not exist. To overcome these issues, we propose a new dynamic pricing approach based on offline reinforcement learning. In contrast to online reinforcement learning, training requires only a static data set of historical sales, generated by some arbitrary behavior policy applied in the past. In particular, we develop a low-dimensional state and action space reformulation of the considered generic dynamic pricing problem, which allows us to incorporate the critic-regularized regression algorithm within a scalable approach. We also adapt the standard algorithm’s actor loss function so that it can handle the pricing problem’s state-dependent action space. Our studies show that the trained policy dominates, and in some cases substantially outperforms, the respective behavior policy. Hence, although some limitations remain to be discussed, offline reinforcement learning appears to be a promising approach to dynamic pricing when online reinforcement learning is not an option.
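To make the abstract's key technical point concrete: critic-regularized regression (CRR) trains the actor by weighted imitation of logged actions, and a state-dependent action space can be handled by masking infeasible actions before normalizing the policy. The following is a minimal, hypothetical PyTorch sketch of a binary-weight CRR actor loss with such a mask; the function and variable names, and the indicator weighting, are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def masked_crr_actor_loss(policy_logits, q_values, actions, action_mask):
    # policy_logits, q_values: (batch, n_actions) from actor and critic
    # actions: (batch,) long tensor of logged (behavior-policy) actions
    # action_mask: (batch, n_actions) bool, True where an action is
    # feasible in that state (assumed: at least one feasible per state)

    # Restrict the policy to feasible actions before normalizing.
    masked_logits = policy_logits.masked_fill(~action_mask, float("-inf"))
    log_probs = F.log_softmax(masked_logits, dim=-1)
    probs = log_probs.exp()  # exactly zero on infeasible actions

    # State value estimated over feasible actions only.
    v = (probs * q_values.masked_fill(~action_mask, 0.0)).sum(dim=-1)
    q_a = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    advantage = q_a - v

    # Binary CRR weighting: imitate only actions the critic deems good.
    weight = (advantage > 0).float()
    log_prob_a = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(weight * log_prob_a).mean()

The exponential-weight CRR variant would instead use weight = exp(advantage / beta), clipped to some maximum; the indicator shown above is the simplest choice.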
Subjects: 
Network revenue management
Dynamic pricing
Offline reinforcement learning
Critic-regularized regression
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
Document Version: 
Published Version