Please use this link to cite this publication or to refer to it as an internet source: https://hdl.handle.net/10419/286736
Year of publication:
2021
Citation:
[Journal:] OR Spectrum [ISSN:] 1436-6304 [Volume:] 44 [Issue:] 1 [Publisher:] Springer [Place:] Berlin, Heidelberg [Year:] 2021 [Pages:] 29-56
Publisher:
Springer, Berlin, Heidelberg
Abstract:
In this study, we propose a reinforcement learning (RL) approach for minimizing the number of work overload situations in the mixed model sequencing (MMS) problem with stochastic processing times. The learning environment simulates stochastic processing times and penalizes work overloads with negative rewards. To account for the stochastic component of the problem, we implement a state representation that specifies whether work overloads will occur if the processing times equal their respective 25%, 50%, and 75% probability quantiles. The RL agent is thereby guided toward minimizing the number of overload situations while being provided with statistical information about how fluctuations in processing times affect solution quality. To the best of our knowledge, this study is the first to consider the stochastic variant of the problem with the objective of minimizing the number of overload situations.
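The quantile-based state features described in the abstract can be illustrated with a minimal sketch. The following Python snippet is a hypothetical illustration, not the authors' implementation: it assumes a single station with a fixed border, an empirical sample of processing times for the candidate model, and illustrative names (overload_features, worker_position, station_length).

```python
import numpy as np

# Hypothetical sketch (not the authors' code): one binary feature per
# quantile, set to 1 if sequencing the candidate model next would push
# the worker past the station border, i.e., cause a work overload.

def overload_features(worker_position: float,
                      processing_time_samples: np.ndarray,
                      station_length: float) -> np.ndarray:
    flags = []
    for q in (0.25, 0.50, 0.75):
        # Processing time at the q-th probability quantile.
        t = float(np.quantile(processing_time_samples, q))
        flags.append(1.0 if worker_position + t > station_length else 0.0)
    return np.array(flags)

# Example: the worker is 2.0 time units into a station of length 5.0;
# processing times are drawn from an assumed normal distribution.
rng = np.random.default_rng(0)
samples = rng.normal(loc=3.2, scale=0.8, size=10_000)
print(overload_features(2.0, samples, station_length=5.0))  # e.g., [0. 1. 1.]
```

Presumably, such flags would be computed per station and per candidate model and concatenated into the agent's state vector, so the agent sees how optimistic, median, and pessimistic processing-time realizations would affect the current partial sequence.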
Keywords:
Scheduling
Mixed model sequencing
Reinforcement learning
Metaheuristics
Combinatorial optimization
Persistent identifier of the first edition:
Creative Commons License:
cc-by
Document type:
Article
Document version:
Published Version

Publications in EconStor are protected by copyright.