Dædalus

An open access publication of the American Academy of Arts & Sciences
Fall 2010

Lessons from the financial crisis for monetary policy

Author
C.A.E. Goodhart

C.A.E. Goodhart is Professor Emeritus of Banking and Finance at the London School of Economics (LSE) and Program Director of Regulation & Financial Stability at the Financial Markets Group Research Centre at LSE. He served as an external member on the Bank of England’s Monetary Policy Committee from 1997 to 2000 and was Chief Adviser at the Bank of England prior to coming to LSE. His publications include Monetary Theory and Practice (1984), The Central Bank and the Financial System (1995), and The Evolution of Central Banks (revised edition, 1988).

The years preceding the financial crisis that began on August 9, 2007, were in many respects the most economically successful in human history. These years have been called the Great Moderation, as output growth, inflation, unemployment, and interest rates settled at comfortable levels and remained steady. Most developed countries experienced remarkably little volatility; Japan's lost decade was the exception. Several emerging Asian countries, especially China but also South Korea, India, and Indonesia, grew even faster, taking millions out of poverty.

Expert commentators awarded much of the credit for this improved performance to a new regime of monetary policy, though in academia the extent to which the better outcome was the result of good policy, good luck (for example, the dampening effect of China's entry into world markets on inflation), or yet other factors is still debated. Under this new regime, a nation's central bank raised interest rates whenever inflation was forecast to rise above a predetermined low target level, and lowered them if inflation was expected to fall below that level. Indeed, in most countries (excepting the United States) this policy was termed "inflation targetry." Although policy in the United States was managed in much the same way as elsewhere, the objectives for the Federal Reserve Board's policy had been established earlier, in the Humphrey-Hawkins Act of 1978, and included a requirement that the Federal Reserve System maintain maximum sustainable output as well as stable prices.

In practice, this difference in stated objectives made little difference to policy outcomes. Changes in the rate of inflation are believed to be determined by whether output (otherwise called the pressure of demand) is above or below the "natural" level the economy can generate when employment, and hence unemployment, is at its normal, or equilibrium, level. The margin between actual and equilibrium output is called the output gap. This gap is not accurately measured, and there are continuous disputes about its size, sometimes even its sign.

Thus, whether a country was an explicit inflation targeter (like most) or not (like the United States), the standard operational behavior of central banks was essentially the same. Central banks raised interest rates whenever the output gap (actual minus equilibrium output) was positive and inflation was expected to be above target. Rates were generally calibrated by a Taylor reaction function, named for John Taylor, the Stanford University economist. In the Taylor reaction function, the current interest-rate decision responds both to the output gap, since output above (or below) its equilibrium level will drive inflation further up (or down), and to the deviation of inflation from its target level.
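The reaction function described above can be sketched numerically. The sketch below uses the coefficients and the 2 percent neutral real rate from Taylor's original 1993 formulation; these values are illustrative assumptions, not parameters attributed to any particular central bank, and the function name is hypothetical.

```python
def taylor_rate(inflation, output_gap,
                neutral_real_rate=2.0, inflation_target=2.0,
                w_inflation=0.5, w_gap=0.5):
    """Nominal policy rate implied by a simple Taylor rule (all in percent).

    rate = r* + pi + w_pi * (pi - pi*) + w_y * output_gap

    The 0.5 weights follow Taylor (1993) and are illustrative only.
    """
    return (neutral_real_rate + inflation
            + w_inflation * (inflation - inflation_target)
            + w_gap * output_gap)

# Inflation at target and a closed output gap imply the neutral nominal rate:
print(taylor_rate(inflation=2.0, output_gap=0.0))  # 4.0
# Inflation one point above target with a positive output gap calls for a
# higher rate, pushing demand back toward equilibrium:
print(taylor_rate(inflation=3.0, output_gap=1.0))  # 6.0
```

The two calls illustrate the mechanism in the text: the rule tightens when either the inflation deviation or the output gap is positive, and eases symmetrically when they are negative.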

.  .  .
