Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/305595 
Authors: 
Year of Publication: 
2024
Series/Report no.: 
CESifo Working Paper No. 11353
Publisher: 
CESifo GmbH, Munich
Abstract: 
This paper addresses the steep learning curve in Machine Learning faced by non-computer scientists, particularly social scientists, which stems from the absence of a primer on its fundamental principles. I adopt a pedagogical strategy inspired by the adage "once you understand OLS, you can work your way up to any other estimator" and apply it to Machine Learning. Focusing on a single-hidden-layer artificial neural network, the paper discusses its mathematical underpinnings, including the pivotal Universal Approximation Theorem, an essential "existence theorem". The exposition then turns to the algorithmic search for a solution, specifically "feed-forward" and "back-propagation", and rounds off with a practical implementation in Python. The objective of this primer is to equip readers with a solid elementary grasp of first principles and to propel some trailblazers to the forefront of AI and causal machine learning.
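To make the abstract's setup concrete, the following is a minimal sketch (not the paper's own code) of a single-hidden-layer network in Keras/TensorFlow fitted to a synthetic regression target; the layer width, activation, optimiser, and data-generating process are illustrative assumptions.

```python
# A minimal sketch, assuming a single-hidden-layer regression network in
# Keras/TensorFlow. Hyperparameters and the synthetic data are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 1))          # single input feature
y = np.sin(3 * X) + 0.1 * rng.normal(size=X.shape)  # noisy nonlinear target

model = keras.Sequential([
    keras.Input(shape=(1,)),
    keras.layers.Dense(16, activation="sigmoid"),  # the single hidden layer
    keras.layers.Dense(1),                         # linear output unit
])

# fit() runs the feed-forward pass and back-propagation: gradients of the MSE
# loss flow backwards through the network and the optimiser updates the weights.
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, verbose=0)

print(model.predict(X[:5], verbose=0))  # feed-forward predictions on new inputs
```

Widening the hidden layer (here 16 sigmoid units) is what the Universal Approximation Theorem speaks to: a single hidden layer can approximate a continuous target arbitrarily well, although the theorem guarantees existence only, not how to find the weights.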
Subjects: 
machine learning
deep learning
supervised learning
artificial neural network
perceptron
Python
keras
tensorflow
universal approximation theorem
JEL: 
C01
C87
C00
C60
Document Type: 
Working Paper
Items in EconStor are protected by copyright, with all rights reserved, unless otherwise indicated.