Please use this link to cite or link to this publication: https://hdl.handle.net/10419/216206
Authors:
Year of publication:
2020
Series/Report No.:
FAU Discussion Papers in Economics No. 05/2020
Publisher:
Friedrich-Alexander-Universität Erlangen-Nürnberg, Institute for Economics, Nürnberg
Abstract:
This paper presents the first large-scale application of deep reinforcement learning to optimize the placement of limit orders at cryptocurrency exchanges. For training and out-of-sample evaluation, we use a virtual limit order exchange that rewards agents according to the realized shortfall over a series of time steps. Based on the literature, we generate features that inform the agent about the current market state. Leveraging 18 months of high-frequency data with 300 million historic trades and more than 3.5 million order book states from major exchanges and currency pairs, we empirically compare state-of-the-art deep reinforcement learning algorithms to several benchmarks. We find that proximal policy optimization reliably learns superior order placement strategies compared to deep double Q-networks and other benchmarks. Further analyses shed light on the black box of the learned execution strategy. Important features are current liquidity costs and queue imbalances, where the latter can be interpreted as predictors of short-term mid-price returns. Because the agent prefers to execute volume in limit orders to avoid additional market order exchange fees, order placement tends to be more aggressive in expectation of unfavorable price movements.
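The two features highlighted in the abstract are simple order-book statistics. The following is a minimal sketch, not the authors' code: it assumes a toy order-book representation (the class `OrderBook` and both function names are illustrative) and shows one plausible way to compute a queue imbalance and a liquidity-cost proxy from a snapshot of the book.

```python
# Illustrative sketch (not from the paper): computes two features named in
# the abstract from a hypothetical order-book snapshot representation.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class OrderBook:
    # Each side is a list of (price, volume) levels, best level first.
    bids: List[Tuple[float, float]]
    asks: List[Tuple[float, float]]


def queue_imbalance(book: OrderBook, levels: int = 1) -> float:
    """Signed imbalance in [-1, 1] between resting bid and ask volume.

    Positive values indicate excess buy-side volume, which the paper
    interprets as a predictor of short-term upward mid-price returns.
    """
    bid_vol = sum(v for _, v in book.bids[:levels])
    ask_vol = sum(v for _, v in book.asks[:levels])
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)


def liquidity_cost(book: OrderBook, volume: float) -> float:
    """Per-unit cost of buying `volume` immediately, relative to the mid-price.

    Walks the ask side of the book and accumulates the premium paid over
    the mid-price; a rough proxy for the current cost of taking liquidity.
    """
    mid = (book.bids[0][0] + book.asks[0][0]) / 2.0
    remaining, cost = volume, 0.0
    for price, available in book.asks:
        take = min(remaining, available)
        cost += take * (price - mid)
        remaining -= take
        if remaining <= 0:
            break
    return cost / volume


if __name__ == "__main__":
    book = OrderBook(
        bids=[(99.5, 4.0), (99.0, 6.0)],
        asks=[(100.5, 1.0), (101.0, 5.0)],
    )
    print(f"queue imbalance: {queue_imbalance(book):+.2f}")
    print(f"liquidity cost:  {liquidity_cost(book, 3.0):.4f} per unit")
```

Under this reading, a positive imbalance (more resting bid volume) would signal likely upward mid-price moves, consistent with the abstract's observation that placement turns more aggressive when unfavorable price movements are expected.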
Keywords:
Finance
Optimal Execution
Limit Order Markets
Machine Learning
Deep Reinforcement Learning
Document type:
Working Paper

Publications in EconStor are protected by copyright.