Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/258036 
Year of Publication: 
2020
Citation: 
[Journal:] Risks [ISSN:] 2227-9091 [Volume:] 8 [Issue:] 3 [Article No.:] 83 [Publisher:] MDPI [Place:] Basel [Year:] 2020 [Pages:] 1-26
Publisher: 
MDPI, Basel
Abstract: 
We define the nagging predictor, which, instead of using bootstrapping to produce a series of i.i.d. predictors, exploits the randomness of neural network calibrations to provide a more stable and accurate predictor than is available from a single neural network run. Convergence results are provided for the family of Tweedie's compound Poisson models, which are commonly used in general insurance pricing. In a French motor third-party liability insurance example, the nagging predictor achieves stability at the portfolio level after about 20 runs. At the individual policy level, we show that some policies require up to 400 neural network runs to achieve stability. Since working with 400 neural networks is impractical, we calibrate two meta models to the nagging predictor, one unweighted and one weighted by the coefficient of variation of the nagging predictor, and find that these meta networks approximate the nagging predictor well, with only a small loss of accuracy.
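In essence, the nagging predictor averages the predictions of several networks calibrated on the same data but with different random seeds, in contrast to bagging, which averages over bootstrap resamples. A minimal Python sketch of the aggregation step follows; train_fn is a hypothetical user-supplied routine that calibrates one network for a given seed and returns a prediction function, so this illustrates only the averaging idea and is not code from the paper.

    import numpy as np

    def nagging_predictor(train_fn, X_train, y_train, X_eval, n_runs=20):
        # Calibrate n_runs networks on the same data, varying only the
        # random seed, and collect each network's predictions.
        preds = np.stack([
            train_fn(X_train, y_train, seed=m)(X_eval)  # shape: (n_policies,)
            for m in range(n_runs)
        ])
        mean = preds.mean(axis=0)  # the nagging predictor, per policy
        # Coefficient of variation across runs; the paper uses this as a
        # weight when fitting one of the two meta models.
        cv = preds.std(axis=0, ddof=1) / mean
        return mean, cv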
Subjects: 
bagging
bootstrap aggregation
neural networks
network aggregation
insurance pricing
regression modeling
Creative Commons License: 
cc-by
Document Type: 
Article