Tuesday, February 05, 2008

Experts

I have not written for the last few days because I was in Chicago with my daughter, who was visiting colleges and auditioning for a theater program. I stayed with my cousin Jonathan, who teaches at Kellogg. Jon, Hannah, and I had lunch at a deli in the Loop and had a really nice time talking about his recent paper on experts.

The point of Jon and Nabil I. Al-Najjar's paper is that the best way to determine expertise is to compare predictions in some meaningful way. It is not sufficient to call someone an expert for matching the mean outcome: if it rains 20 percent of the time, a weatherman who predicts a 20 percent chance of rain every day will, in a sense, be right, but he adds no information, no expertise.
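The weatherman point can be made concrete with a standard probability scoring rule, the Brier score (the mean squared error of a forecast probability against the 0/1 outcome). A minimal sketch, with invented data, of how a constant 20 percent forecaster looks calibrated yet scores worse than a forecaster who actually tracks the weather:

```python
# Hypothetical record of 10 days; it rains on 2 of them (20 percent).
rain = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0]

# The calibrated-but-uninformative forecaster: 20 percent chance every day.
constant = [0.2] * len(rain)

# A hypothetical informative forecaster: confident on the days it rains.
informative = [0.9, 0.1, 0.1, 0.1, 0.1, 0.9, 0.1, 0.1, 0.1, 0.1]

def brier(forecasts, outcomes):
    """Mean squared error of forecast probabilities vs. 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(outcomes)

print(brier(constant, rain))     # roughly 0.16
print(brier(informative, rain))  # roughly 0.01
```

The constant forecaster is right on average but contributes nothing day to day, and the score says so.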

This got me thinking about macroeconomic forecasting models. The Wall Street Journal will celebrate an economist's forecast if it is the best for a particular quarter. This is a strange metric: if someone predicts 6 percent GDP growth (an unusually high but not unheard-of number) every quarter, then in the unusual quarter where growth really is that fast, he or she will be lionized in the pages of the country's leading financial newspaper.

A possibly better way to measure macroeconomic forecasts is to look at the mean squared error of forecasts over a reasonable time horizon, say 20 quarters. Perhaps someone has already performed the exercise; it would be fun to know how it shakes out. My guess is that the eternal optimists and doomsayers wouldn't look so good.
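To see how this ranking could differ from quarter-by-quarter celebration, here is a minimal sketch; the growth numbers are invented purely for illustration:

```python
# Hypothetical quarterly GDP growth (annualized percent) over six quarters.
actual   = [2.1, 3.0, 0.5, 6.2, 1.8, 2.5]

# The eternal optimist: predicts a 6 percent boom every quarter.
optimist = [6.0] * len(actual)

# A hypothetical imperfect-but-informative forecaster.
moderate = [2.5, 2.8, 1.0, 4.0, 2.0, 2.4]

def mse(forecasts, actual):
    """Mean squared forecast error over the horizon."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actual)) / len(actual)

# In the boom quarter (index 3), the optimist is the most accurate
# forecaster and would win the single-quarter prize...
print(abs(optimist[3] - actual[3]), abs(moderate[3] - actual[3]))

# ...but over the whole horizon the mean squared error tells the
# opposite story.
print(mse(optimist, actual), mse(moderate, actual))
```

With these invented numbers, the optimist "wins" the 6.2 percent quarter while racking up a far larger mean squared error over the horizon, which is exactly the trick the single-quarter metric rewards.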

2 comments:

  1. Why the mean squared error instead of just the mean magnitude of error?

  2. Anonymous, 6:43 PM

    This comment has been removed by the author.
