Please use this link to cite or link to this publication: https://hdl.handle.net/10419/302703
Year of publication:
2024
Series/Report No.:
IZA Discussion Papers No. 17186
Publisher:
Institute of Labor Economics (IZA), Bonn
Abstract:
We consider the problem of repeatedly choosing policies to maximize social welfare. Welfare is a weighted sum of private utility and public revenue. Earlier outcomes inform later policies. Utility is not observed, but indirectly inferred. Response functions are learned through experimentation. We derive a lower bound on regret, and a matching adversarial upper bound for a variant of the Exp3 algorithm. Cumulative regret grows at a rate of T^{2/3}. This implies that (i) welfare maximization is harder than the multi-armed bandit problem (with a rate of T^{1/2} for finite policy sets), and (ii) our algorithm achieves the optimal rate. For the stochastic setting, if social welfare is concave, we can achieve a rate of T^{1/2} (for continuous policy sets), using a dyadic search algorithm. We analyze an extension to nonlinear income taxation, and sketch an extension to commodity taxation. We compare our setting to monopoly pricing (which is easier), and price setting for bilateral trade (which is harder).
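As a point of reference for the algorithmic claim above, the following is a minimal sketch of the textbook Exp3 algorithm (exponential weights with uniform exploration over a finite set of arms), not the paper's specific variant or its welfare-maximization setting; the reward placeholder, parameter values, and names such as reward_fn are illustrative assumptions only.

```python
import math
import random

def exp3(K, T, gamma, reward_fn):
    """Textbook Exp3 over K arms for T rounds.

    reward_fn(arm, t) must return a reward in [0, 1]; here it is a stylized
    stand-in for a normalized welfare signal (private utility plus revenue),
    not the inference procedure used in the paper.
    """
    weights = [1.0] * K
    total_reward = 0.0
    for t in range(T):
        w_sum = sum(weights)
        # Mix exponential weights with uniform exploration of rate gamma.
        probs = [(1 - gamma) * w / w_sum + gamma / K for w in weights]
        arm = random.choices(range(K), weights=probs)[0]
        x = reward_fn(arm, t)
        total_reward += x
        # Importance-weighted reward estimate for the chosen arm only.
        x_hat = x / probs[arm]
        weights[arm] *= math.exp(gamma * x_hat / K)
    return total_reward

# Illustrative use: each "arm" is a candidate policy; rewards are noisy
# draws around hypothetical per-policy welfare means.
if __name__ == "__main__":
    random.seed(0)
    true_welfare = [0.3, 0.5, 0.7, 0.4]  # hypothetical values
    noisy = lambda arm, t: min(1.0, max(0.0, random.gauss(true_welfare[arm], 0.1)))
    print(exp3(K=4, T=10_000, gamma=0.05, reward_fn=noisy))
```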
Keywords:
optimal taxation
multi-armed bandits
experimental design
JEL: 
C9
H21
C73
Document Type:
Working Paper

Publications in EconStor are protected by copyright.