Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/239166 
Year of Publication: 
2020
Citation: 
[Journal:] Journal of Risk and Financial Management [ISSN:] 1911-8074 [Volume:] 13 [Issue:] 4 [Publisher:] MDPI [Place:] Basel [Year:] 2020 [Pages:] 1-12
Publisher: 
MDPI, Basel
Abstract: 
We present a deep reinforcement learning framework for the automated, high-frequency trading of contracts for difference (CfDs) on indices. We show that reinforcement learning agents with recurrent long short-term memory (LSTM) networks can learn from recent market history and outperform the market. Such approaches usually depend on low latency; in a real-world example, we show that an increased model size can compensate for higher latency. Because the noisy nature of economic trends complicates prediction, especially for speculative assets, our approach does not forecast prices but instead uses a reinforcement learning agent to learn an overall profitable trading policy. To this end, we simulate a virtual market environment based on historical trading data. The environment presents reinforcement learners with a partially observable Markov decision process (POMDP) and allows the training of various strategies.
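The abstract describes two components: a POMDP-style market environment replayed from historical data, and a recurrent LSTM network trained with Q-learning. The sketch below is a minimal, hypothetical illustration of that setup in PyTorch, not the authors' implementation; the class names (CfdTradingEnv, LstmQNetwork), the three-action position space, the observation window, and the transaction-cost model are all assumptions, and the Q-learning training loop (replay buffer, target network, epsilon-greedy exploration) is omitted.

# Minimal sketch (assumed, not the paper's code): a partially observable
# trading environment over a historical price series, plus an LSTM Q-network
# that maps the recent return window to Q-values for {short, flat, long}.
import numpy as np
import torch
import torch.nn as nn

class CfdTradingEnv:
    """Replays a price series; the agent observes only a short window of
    recent log-returns (partial observability), not the full market state."""

    def __init__(self, prices, window=32, cost=1e-4):
        self.returns = np.diff(np.log(prices)).astype(np.float32)
        self.window = window
        self.cost = cost          # proportional cost per position change
        self.reset()

    def reset(self):
        self.t = self.window
        self.position = 0         # -1 short, 0 flat, +1 long
        return self._obs()

    def _obs(self):
        # Observation: last `window` log-returns, shape (window, 1)
        return self.returns[self.t - self.window:self.t].reshape(-1, 1)

    def step(self, action):       # action in {0, 1, 2} -> position -1/0/+1
        new_position = action - 1
        r = self.returns[self.t]
        # Reward: P&L of the held position minus the cost of changing it
        reward = new_position * r - self.cost * abs(new_position - self.position)
        self.position = new_position
        self.t += 1
        done = self.t >= len(self.returns)
        return self._obs(), reward, done

class LstmQNetwork(nn.Module):
    """Recurrent Q-network: an LSTM over the return window, then a linear
    head producing one Q-value per discrete action."""

    def __init__(self, n_actions=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs):       # obs: (batch, window, 1)
        out, _ = self.lstm(obs)
        return self.head(out[:, -1])   # Q-values from the last time step

# Smoke test on a synthetic random walk standing in for historical ticks.
if __name__ == "__main__":
    prices = np.exp(np.cumsum(np.random.randn(1000) * 1e-3)) * 100
    env, qnet = CfdTradingEnv(prices), LstmQNetwork()
    obs, done = env.reset(), False
    while not done:
        with torch.no_grad():
            q = qnet(torch.from_numpy(obs).unsqueeze(0))
        action = int(q.argmax())   # greedy; training would add exploration
        obs, reward, done = env.step(action)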
Subjects: 
CfD
contract for difference
deep learning
long short-term memory
LSTM
neural networks
Q-learning
reinforcement learning
RL
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article

Files in This Item:
File size: 415.04 kB
Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.