Authors: 
van der Laan, Dinard
Year of Publication: 
2010
Series/Report no.: 
Tinbergen Institute Discussion Paper 10-036/4
Abstract: 
In this paper we study Markov Decision Process (MDP) problems with the restriction that at decision epochs only a finite number of given Markovian decision rules may be applied. The elements of this finite set of allowed decision rules should be mixed to improve performance. The set of allowed Markovian decision rules could, for example, consist of easily implementable decision rules; moreover, many open-loop control problems can be modelled as MDPs in which the applicable decision rules are restricted. For various subclasses of Markovian policies, methods to maximize the performance are obtained, analyzed and illustrated with examples. Advantages and disadvantages of optimizing over particular subclasses of applicable policies are discussed, and the resulting optimal performances are compared. One of the main results gives sufficient conditions for the existence of an optimal Markovian policy belonging to the subclass of applicable policies having a so-called regular structure.
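As a rough illustration of the mixing idea described in the abstract (not taken from the paper itself), the sketch below evaluates the long-run average reward of a stationary mixture of two Markovian decision rules in a toy two-state MDP. The transition matrices, rewards, and the grid search over the mixing probability alpha are all illustrative assumptions; the paper studies richer subclasses of mixed policies than this simple stationary randomization.

    # Illustrative sketch only: stationary mixture of two given decision rules
    # in a hypothetical two-state MDP, evaluated by its long-run average reward.
    import numpy as np

    # Decision rule d0: transition matrix and per-state reward (made-up numbers).
    P0 = np.array([[0.9, 0.1],
                   [0.4, 0.6]])
    r0 = np.array([1.0, 0.0])

    # Decision rule d1: a different rule on the same state space.
    P1 = np.array([[0.2, 0.8],
                   [0.7, 0.3]])
    r1 = np.array([0.0, 2.0])

    def average_reward(alpha):
        """Average reward when d0 is applied with probability alpha in every state."""
        P = alpha * P0 + (1 - alpha) * P1   # mixed transition matrix
        r = alpha * r0 + (1 - alpha) * r1   # mixed one-step reward
        # Stationary distribution pi solving pi P = pi with sum(pi) = 1.
        A = np.vstack([P.T - np.eye(2), np.ones(2)])
        b = np.array([0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi @ r

    # Grid search over the mixing probability.
    alphas = np.linspace(0.0, 1.0, 101)
    best = max(alphas, key=average_reward)
    print(f"best alpha = {best:.2f}, average reward = {average_reward(best):.3f}")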
Subjects: 
Markov Decision Process
Mixing Decision Rules
Regular Sequences
Document Type: 
Working Paper

Files in This Item:
3.55 MB

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.