Please use this identifier to cite or link to this item: https://hdl.handle.net/10419/94289 
Authors: 
Year of Publication: 1999
Series/Report no.: Working Paper No. 1999-10
Publisher: Rutgers University, Department of Economics, New Brunswick, NJ
Abstract: 
The United States is often taken to be the exemplar of the benefits of a monetary union. Since 1788, Americans, with the exception of the Civil War years, have been able to buy and sell goods, travel, and invest within a vast area without ever having to be concerned about changes in exchange rates. But there was also a recurring cost. A shock, typically in financial or agricultural markets, would hit one region particularly hard. The banking system in that region would lose reserves, producing a monetary contraction that would aggravate the effects of the initial disturbance. Often, an interregional debate over monetary institutions would follow, and the uncertainty created by the debate would further aggravate the contraction. During these episodes the United States might well have been better off if each region had had its own currency: changes in exchange rates could have secured equilibrium in interregional payments while monetary policy was directed toward internal stability. The United States, to put it differently, was not an optimal currency area. This pattern held until the 1930s, when institutional changes, such as increased federal fiscal transfers and bank deposit insurance, changed the game. Political considerations, of course, ruled out separate regional currencies. But thinking about U.S. monetary history in this way clarifies the nature of the business cycle before World War II, and may suggest some lessons for other monetary unions.
JEL: N1
Document Type: Working Paper

Files in This Item: 1 file, 111.81 kB