Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/239470 
Year of Publication: 
2021
Citation: 
[Journal:] Journal of Risk and Financial Management [ISSN:] 1911-8074 [Volume:] 14 [Issue:] 2 [Publisher:] MDPI [Place:] Basel [Year:] 2021 [Pages:] 1-15
Publisher: 
MDPI, Basel
Abstract: 
In this paper, we present a study of Reinforcement Learning optimization models for automatic trading, focusing on the effects of varying the observation time span. Our Reinforcement Learning agents combine a Convolutional Neural Network (CNN) with Long Short-Term Memory (LSTM) and act on observation windows of different lengths. Each agent tries to maximize trading profit by buying or selling one of several contracts in a simulated market environment for Contracts for Difference (CfD); correlations between individual assets are captured by the network architecture itself. To decide which action to take on a specific contract, an agent develops a policy that relies on observing the whole market over a certain period of time. We investigate whether an optimal observation sequence length exists and conclude that it depends on market dynamics.
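
The following is a minimal, purely illustrative sketch (in PyTorch) of the kind of CNN + LSTM policy network the abstract describes: a convolution across assets captures cross-asset correlations at each time step, an LSTM aggregates features over the observation window, and an action head scores buy/hold/sell for each contract. All names, layer sizes, and tensor shapes are assumptions of this sketch, not details taken from the paper.

import torch
import torch.nn as nn

class CnnLstmPolicy(nn.Module):
    """Hypothetical CNN+LSTM policy: observation window -> per-contract action logits."""

    def __init__(self, n_assets: int, n_features: int, n_actions: int = 3, hidden: int = 64):
        super().__init__()
        # Convolution over the asset dimension at each time step, so
        # correlations between individual assets are modelled jointly.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # LSTM aggregates the per-time-step features over the observation window.
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        # One head scoring e.g. buy / hold / sell for each contract.
        self.head = nn.Linear(hidden, n_assets * n_actions)
        self.n_assets, self.n_actions = n_assets, n_actions

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, window_length, n_assets, n_features)
        b, t, a, f = obs.shape
        x = obs.reshape(b * t, a, f).permute(0, 2, 1)   # (b*t, features, assets)
        x = self.cnn(x).squeeze(-1).reshape(b, t, -1)   # (b, window, 32)
        _, (h, _) = self.lstm(x)                        # final hidden state
        logits = self.head(h[-1])                       # (b, assets * actions)
        return logits.reshape(b, self.n_assets, self.n_actions)

# Example: score actions for 5 CfD contracts from a 30-step observation window.
policy = CnnLstmPolicy(n_assets=5, n_features=4)
observation = torch.randn(2, 30, 5, 4)
print(policy(observation).shape)  # torch.Size([2, 5, 3])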
Subjects: 
contracts for difference
deep neural networks
machine learning
Persistent Identifier of the first edition: 
Creative Commons License: 
cc-by
Document Type: 
Article
