Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/86935 
Year of Publication: 2010
Series/Report no.: Tinbergen Institute Discussion Paper No. 10-036/4
Publisher: Tinbergen Institute, Amsterdam and Rotterdam
Abstract: 
In this paper we study Markov Decision Process (MDP) problems under the restriction that at decision epochs only a finite number of given Markovian decision rules may be applied. To improve performance, the decision rules from this finite set should be mixed. The set of allowed Markovian decision rules could, for example, consist of a few easily implementable decision rules; moreover, many open-loop control problems can be modelled as MDPs in which the applicable decision rules are restricted in this way. For various subclasses of Markovian policies, methods to maximize the performance are obtained, analyzed, and illustrated with examples. Advantages and disadvantages of optimizing over particular subclasses of applicable policies are discussed, and the resulting optimal performances are compared. One of the main results gives sufficient conditions for the existence of an optimal Markovian policy, within the subclass of applicable policies, that has a so-called regular structure.
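To make the mixing idea concrete, the following small numerical sketch (not taken from the paper; the two-state MDP, the two constant decision rules d1 and d2, and all transition and reward numbers are invented for illustration) evaluates the long-run average reward of the stationary policy that applies rule d1 with probability p at every decision epoch. In this toy instance an intermediate p outperforms both pure rules, which is exactly why mixing the allowed decision rules can pay off.

import numpy as np

# P[a][s, s'] = transition probability under action a; r[a][s] = expected one-step reward.
P = {0: np.array([[0.9, 0.1],
                  [0.6, 0.4]]),
     1: np.array([[0.2, 0.8],
                  [0.1, 0.9]])}
r = {0: np.array([0.0, 1.0]),   # action 0 pays only in state 1
     1: np.array([0.0, 0.0])}   # action 1 pays nothing but pushes the chain towards state 1

d1 = [0, 0]   # allowed decision rule 1: play action 0 in every state
d2 = [1, 1]   # allowed decision rule 2: play action 1 in every state

def average_reward(p):
    """Long-run average reward when d1 is applied with probability p at each decision epoch."""
    n = len(d1)
    P_mix = np.array([p * P[d1[s]][s] + (1 - p) * P[d2[s]][s] for s in range(n)])
    r_mix = np.array([p * r[d1[s]][s] + (1 - p) * r[d2[s]][s] for s in range(n)])
    # Stationary distribution pi of the induced chain: pi = pi * P_mix, sum(pi) = 1.
    A = np.vstack([P_mix.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return float(pi @ r_mix)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"P(use d1) = {p:.2f}  ->  average reward = {average_reward(p):.4f}")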
Subjects: Markov Decision Process; Mixing Decision Rules; Optimization; Regular Sequences
JEL: C60; C61
Document Type: Working Paper

Files in This Item: 1 file (3.55 MB)