Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/216206 
Year of Publication: 
2020
Series/Report no.: 
FAU Discussion Papers in Economics No. 05/2020
Publisher: 
Friedrich-Alexander-Universität Erlangen-Nürnberg, Institute for Economics, Nürnberg
Abstract: 
This paper presents the first large-scale application of deep reinforcement learning to optimize the placement of limit orders at cryptocurrency exchanges. For training and out-of-sample evaluation, we use a virtual limit order exchange that rewards agents according to the realized shortfall over a series of time steps. Drawing on the literature, we generate features that inform the agent about the current market state. Leveraging 18 months of high-frequency data comprising 300 million historical trades and more than 3.5 million order book states from major exchanges and currency pairs, we empirically compare state-of-the-art deep reinforcement learning algorithms to several benchmarks. We find that proximal policy optimization reliably learns superior order placement strategies compared to deep double Q-networks and other benchmarks. Further analyses shed light on the black box of the learned execution strategy. Important features are current liquidity costs and queue imbalances, where the latter can be interpreted as predictors of short-term mid-price returns. Because agents prefer to execute volume via limit orders to avoid the additional exchange fees on market orders, order placement tends to become more aggressive when unfavorable price movements are expected.
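
As an illustration of the quantities named in the abstract, the following minimal Python sketch computes two of the state features mentioned (queue imbalance and liquidity cost) and a shortfall-based per-step reward. The paper does not publish code; the exact formulas, function names, and the toy order book below are common textbook variants chosen for illustration and may differ from the authors' actual specifications.

```python
# Illustrative sketch only; definitions are assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass
class BookLevel:
    price: float
    size: float

def queue_imbalance(bids: list[BookLevel], asks: list[BookLevel], depth: int = 1) -> float:
    """Signed imbalance of resting volume over the top `depth` levels, in [-1, 1].
    Positive values (bid-heavy book) are often read as predicting upward
    short-term mid-price moves."""
    bid_vol = sum(l.size for l in bids[:depth])
    ask_vol = sum(l.size for l in asks[:depth])
    return (bid_vol - ask_vol) / (bid_vol + ask_vol)

def liquidity_cost(asks: list[BookLevel], volume: float, mid: float) -> float:
    """Average per-unit premium over the mid price paid when walking the ask
    side to buy `volume` with a market order."""
    remaining, cost = volume, 0.0
    for level in asks:
        take = min(remaining, level.size)
        cost += take * level.price
        remaining -= take
        if remaining <= 0:
            break
    return cost / volume - mid

def shortfall_reward(avg_fill_price: float, arrival_mid: float) -> float:
    """Per-step reward for a buy: negative realized shortfall relative to the
    mid price at order arrival (sign flipped for sells)."""
    return -(avg_fill_price - arrival_mid)

# Toy book: best bid 100.0 (2 units); asks 100.2 (0.5 units) and 100.4 (2 units).
bids = [BookLevel(100.0, 2.0)]
asks = [BookLevel(100.2, 0.5), BookLevel(100.4, 2.0)]
mid = 100.1
print(queue_imbalance(bids, asks))     # 0.6  -> bid-heavy book
print(liquidity_cost(asks, 1.0, mid))  # ~0.2 -> premium over mid per unit
print(shortfall_reward(100.3, mid))    # -0.2 -> worse fill, negative reward
```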
Subjects: 
Finance
Optimal Execution
Limit Order Markets
Machine Learning
Deep Reinforcement Learning
Document Type: 
Working Paper

Files in This Item:
File size: 501.01 kB

Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.