Tuesday, December 30, 2014

Is Houston really vulnerable to recession? {Updated answer: maybe}

So after reading Paul Krugman's prediction that Texas was vulnerable, I did two things I should have done before.

First, I looked to see whether the share of jobs in the mineral industry in Houston is any lower now than it was in 1986 (the first year for which I could easily download data).  The answer is that, if anything, Houston is slightly more reliant on the industry now.

Second, I plotted the unemployment rates for Houston and the US against real oil prices.  This is what I found:


Two things stand out: Houston's unemployment moves with the business cycle (so the stronger US economy should help it), and Houston's unemployment rate relative to the US fell as the real price of oil rose between 2000 and 2007.

We can summarize this in a regression:

HOUE - USUE = -1.5 - 0.83 ln(real oil price)

The t-stat on the coefficient on real oil price is 11.  Approximating the change in log price by the percent change, a 50 percent drop in the real price of oil implies roughly a 0.41 percentage point increase in Houston's unemployment rate relative to the US (0.83 x 0.5 = 0.415).  Now this is all descriptive, and is not a serious model of the region, but it nevertheless provides evidence that Houston's relative employment performance has been affected by oil prices. Given that Houston is as reliant on energy for jobs as ever, it probably will continue to be affected as well.
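
For the arithmetic-minded, here is a minimal sketch of how the fitted coefficients translate a 50 percent oil price drop into the 0.41 figure, along with the exact log change for comparison (the prices below are hypothetical; only their ratio matters):

```python
import math

# Descriptive regression from the post:
#   HOUE - USUE = -1.5 - 0.83 * ln(real oil price)
INTERCEPT = -1.5
BETA = -0.83

def unemployment_gap(real_oil_price):
    """Predicted Houston-minus-US unemployment gap, in percentage points."""
    return INTERCEPT + BETA * math.log(real_oil_price)

# Hypothetical prices; only the before/after ratio matters for the change.
before, after = 100.0, 50.0
exact = unemployment_gap(after) - unemployment_gap(before)
approx = -BETA * 0.50   # treat a 50 percent drop as a -0.5 change in ln(price)
print(f"exact log change: +{exact:.2f} pp")   # about +0.58
print(f"percent approx.:  +{approx:.2f} pp")  # about +0.41
```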

Monday, December 29, 2014

The limits of knowledge in economics, Part V

How much does it cost you to live in your house?  If you are a renter, the answer to that question is fairly straightforward (although if your rent includes a gym membership and heat, it is not clear-cut what the simple cost of occupying your place is).

But suppose you are an owner.  What is it costing you every month or every year to live in your house?  The truth is, you don't know with a great deal of precision, and neither does anyone else.

There are two ways to look at the issue.  One is to look at something called owner's equivalent rent. In principle, an owner could determine her equivalent rent by offering her house for rent, and seeing what it would fetch in the rental market.  Needless to say, owners don't do this very often.  Another way to calculate owner's equivalent rent is to find a perfect comparable for an owned house in the rental market, and impute the rent for the owner.  In the next episode of this series, we will discuss the problems with doing that.  It should be fairly obvious that they are large.

The alternative to owner's equivalent rent is user cost, which seeks to compute the cost of owning for those who live in their own houses.  [Pat Hendershott is the guru of user cost.  See a typical paper of his here.] The formula for the user cost of housing is

uc_t = V_t[((1 - m)r_t + m i_t)(1 - t_y) + τ_p(1 - t_y) + d - π]

where uc_t is user cost at time t, V_t is the value of the house at time t, m is the loan-to-value ratio, r_t is the opportunity cost of equity, i_t is the mortgage interest rate, t_y is the marginal income tax rate, τ_p is the property tax rate, d is depreciation, and π is expected house price appreciation.  This formula calculates the after-tax cash-flow cost of owning (including the opportunity cost of equity), adds depreciation, and subtracts expected appreciation.
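
For concreteness, here is a small sketch of the formula in code. The inputs are entirely hypothetical, chosen only for illustration; they are not estimates of anyone's actual user cost:

```python
def user_cost(value, ltv, r_equity, i_mortgage, t_income,
              t_property, depreciation, expected_appreciation):
    """Annual user cost of owner-occupied housing, following the
    Hendershott-style formula in the post. All rates are decimals.

    uc = V * [((1-m)*r + m*i)*(1-ty) + tp*(1-ty) + d - pi]
    """
    cash_flow = ((1 - ltv) * r_equity + ltv * i_mortgage) * (1 - t_income)
    prop_tax = t_property * (1 - t_income)
    return value * (cash_flow + prop_tax + depreciation - expected_appreciation)

# Hypothetical illustration: a $400,000 house, 80 percent LTV,
# 5 percent opportunity cost of equity, 4 percent mortgage rate,
# 25 percent marginal tax rate, 1 percent property tax,
# 2 percent depreciation, 3 percent expected appreciation.
print(f"${user_cost(400_000, 0.80, 0.05, 0.04, 0.25, 0.01, 0.02, 0.03):,.0f}")
```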

Of all these elements (and this formula doesn't incorporate everything, but it is close enough), the only one we know with near certainty is the marginal income tax rate: once one calculates reported taxable income, one knows the marginal rate, which is set by statute, with certainty.

Everything else?  Up in the air.  On a year-to-year basis, we don't know the value of our houses with certainty.  We don't know with certainty the opportunity cost of equity. While we know the coupon rate of a mortgage, we don't always know its total cost until we extinguish it, because mortgages with fees and points (and sometimes, even prepayment penalties) are amortized over time, and the life of a mortgage is generally considerably shorter than its term, as households refinance their mortgages or sell their houses.  We don't know property taxes until an assessor determines assessed value, which is usually at least a little different from market value.  Depreciation is difficult to measure.  Finally, we are pretty lousy at forecasting the values of our houses.

But let's say we are good at forecasting house prices, and you think the value of your house is going to increase by $5,000 over the next year.  Is this the same as being handed a check for $5,000?  No, because you still need to live somewhere.  If your house goes up by $5,000 in value, so too does your neighbor's.  The only way to cash in on your $5,000 is to downsize.  Pocketing the $5,000 and downsizing may leave you better off, but not as well off as just having $5,000.  So the user cost formula does not exactly get user cost right.


Saturday, December 27, 2014

Is Houston really vulnerable to recession?

My inbox is filling up with dire warnings about the near-term future of Houston's economy.  After all, the price of oil has dropped by 50 percent, and we know how reliant Houston is on energy.  Except, perhaps, it is not.  Let's look at a Bureau of Labor Statistics chart that gives the composition of employment in Harris County, Texas at the end of 2013:



On the one hand, what the chart doesn't show is that the location quotient for natural resources and mining in Harris County is 2.78, meaning that the county is almost three times as reliant on the sector as the rest of the country.  On the other hand, only about five percent of jobs in Harris County are in that sector.  The county in which Houston sits is actually very well diversified, with 75 percent of its jobs in the service sector.  Put another way, over the past several years, Harris County has been creating more total jobs every two years than there are jobs in its entire natural resources and mining sector.
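
For readers unfamiliar with location quotients, a short sketch of the calculation (the job counts below are hypothetical, chosen only to reproduce an LQ near 2.78):

```python
def location_quotient(local_sector_jobs, local_total_jobs,
                      national_sector_jobs, national_total_jobs):
    """LQ = (sector's share of local jobs) / (sector's share of national jobs).
    An LQ above 1 means the region is more concentrated in the sector
    than the nation as a whole."""
    local_share = local_sector_jobs / local_total_jobs
    national_share = national_sector_jobs / national_total_jobs
    return local_share / national_share

# Hypothetical magnitudes: about 5 percent of local jobs in natural
# resources and mining versus roughly 1.8 percent nationally.
print(round(location_quotient(100_000, 2_000_000, 2_520_000, 140_000_000), 2))
# -> 2.78
```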

Natural resources and mining jobs do pay well, which means that any reductions in these jobs would have a multiplier effect (but we are generally terrible at estimating regional multipliers).  But clearly something is happening in Houston that makes it attractive to employers that have nothing to do with the energy sector.  My suspicion is that inexpensive housing is one of those things.

I could be completely wrong about this, but it seems to me that the decline in oil prices will more likely bring slower growth--as opposed to recession--to Harris County. 

Friday, December 26, 2014

The limits of knowledge in economics: Part IV

The workhorse model of international trade is the Heckscher-Ohlin Model.  It is essentially the model that formalizes the Ricardian model of trade that students in principles of economics learn.  It demonstrates that countries export goods in which they have a comparative advantage; moreover, it shows that it is comparative advantage, rather than absolute advantage, that determines patterns of trade.

In the context of the HO model, comparative advantage is defined by the relative abundance of a production factor.  Let's say country A has 10 units of labor and 10 units of capital, while country B has 8 units of labor and 4 units of capital.  Country A has an absolute advantage in both labor and capital, but B has a comparative advantage in labor, because its labor-to-capital ratio (i.e., 2) is higher than country A's (1).

The HO model thus predicts that country A will export a good that needs relatively more capital for production to B, and that country B will export a good that needs relatively more labor for production to A.
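
A toy restatement of the example's arithmetic, with nothing beyond the two-by-two logic above:

```python
# Factor endowments from the example above.
countries = {
    "A": {"labor": 10, "capital": 10},
    "B": {"labor": 8, "capital": 4},
}

def labor_capital_ratio(name):
    c = countries[name]
    return c["labor"] / c["capital"]

# B's ratio (2.0) exceeds A's (1.0): B is relatively labor-abundant, so
# HO predicts B exports the labor-intensive good and A the capital-intensive one.
labor_exporter = max(countries, key=labor_capital_ratio)
capital_exporter = min(countries, key=labor_capital_ratio)
print(f"{labor_exporter} exports the labor-intensive good")     # B
print(f"{capital_exporter} exports the capital-intensive good") # A
```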

Everything works beautifully in a world with two countries, two goods and two factors.  But the world is nothing like that--it has many more countries, goods and factors than 2.  Is this a big deal?

The whole point of economic modeling is to isolate the impact of a particular phenomenon, ceteris paribus.  Ceteris paribus is a favorite phrase in the economist's lexicon, and means "all other things being equal or held constant."  Sometimes asserting ceteris paribus is innocuous; often it is not.

A small change in the HO model creates serious problems.  As Alan Deardorff showed, if the number of goods is greater than the number of factors of production, patterns of trade become indeterminate. The mechanics of the problem are simple: when the number of factors equals the number of goods, solving the pattern-of-trade problem involves equal numbers of equations and unknowns.  When there are more goods than factors, this equality disappears, and we become unable to determine what is produced where.  Allowing more goods than factors is not a trivial change to the model--it has an enormous impact on the analytical outcome.  It is also a change that better reflects the reality of the world.

This is the fundamental problem with HO that is easiest to explain.  Deardorff takes us through more sophisticated arguments that show other problems with the predictions of the HO model.  He expresses concerns that the model:


(1) implies fractions of goods produced or trade routes utilized that are (unrealistically?) low;
(2) has a solution that is hypersensitive to [trade costs].

So the fact is we really don't have a theoretical model that predicts patterns of trade well.  Yet for years we made lots of policy decisions based on a model that has lots of limitations.

[Update in response to comment: Krugman and Helpman are great at explaining how intra-industry trade happens and why countries with similar factors trade with each other.  But the problems outlined by Deardorff about developing a robust general equilibrium model that predicts patterns of trade remain.]


Wednesday, December 24, 2014

The limits of knowledge in economics, Part III

Economists think a lot about preferences, and are more interested in what people do than in what people say they will do. When people make work-leisure decisions, consumption decisions, and investment decisions, they are revealing their preferences, and we economists attempt to gather information about the broader economy based on these revelations.

Drawing inferences from revealed preferences can work if individual preferences meet four seemingly simple assumptions (Hal Varian's Microeconomic Analysis provides the clearest exposition of micro theory I know of.  There is also a nice discussion here).

(1) Preferences are complete.  This simply says that if I am faced with a choice of two consumption bundles, I can always say that one is at least as good as the other.

(2) Preferences are reflexive.  This simply says that any bundle is always as good as itself.

(3) Preferences are transitive.  This simply says that if bundle A is better than bundle B, and bundle B is better than bundle C, then bundle A is better than bundle C.

(4) Preferences are strongly monotonic.  This simply says I never prefer less to more.

These may seem like mild assumptions, and they would be, except that people change their minds.

Perhaps there is a restaurant you go to on a regular basis.  Its menu stays the same, and the prices stay the same over a reasonable length of time, so your choice set remains constant.  Yet this week you might have the beef burger, next week the veggie-burger, and the following week the cajun chicken sandwich.  In doing so, you have violated assumptions (1) and (3).

Except that, in a sense, you haven't.  Suppose an element of the choice set is the time at which you eat your sandwich.  Under these circumstances, the menu does change because it is offered at three different times.  So we can preserve our theoretical assumptions.

But now if we want to estimate preferences with precision, we have a difficult problem, because we have to estimate in far more dimensions than the data can support.  So anything we infer about preferences will necessarily be approximations.

This is not to say that the generalized axiom of revealed preference isn't a powerful tool for learning things about the economy: Varian's lecture on the subject makes a pretty compelling case that it is.  But it will always be a powerful yet imprecise tool.
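
One way to see how empirical work uses these axioms is a small sketch that checks observed pairwise choices for the kind of transitivity-violating cycle in the burger example. This is a simplified cousin of Varian-style consistency tests, and the menu items are, of course, hypothetical:

```python
from itertools import permutations

def has_preference_cycle(revealed):
    """Detect transitivity violations in directly revealed preferences.

    `revealed` is a set of (better, worse) pairs. We take the transitive
    closure and look for any pair preferred in both directions."""
    closure = set(revealed)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in permutations(tuple(closure), 2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return any((b, a) in closure for (a, b) in closure)

# Week 1: beef over veggie; week 2: veggie over cajun chicken;
# week 3: cajun chicken over beef. The cycle means no single, stable,
# transitive preference ordering can rationalize these choices.
choices = {("beef", "veggie"), ("veggie", "cajun"), ("cajun", "beef")}
print(has_preference_cycle(choices))  # True
```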


Tuesday, December 23, 2014

The limits of knowledge in economics, Part II

Everyone--and I mean everyone--who does empirical analysis should read Charles Manski's Identification Problems in the Social Sciences (I am happy to say I took advanced econometrics from Manski when I was a Ph.D. student at Wisconsin).  The book reminds us that we are constantly relying on unstated assumptions when we do statistical analysis, and we need to do a better job of stating them.

The focus of this post involves a simple issue: extrapolation.  Let me show two graphs from Manski's chapter:


Suppose one wanted to infer y based on x.  Obviously, as the sample size gets larger, the confidence interval gets smaller for the values of x that we are able to observe.  Note that in this instance, however, no x between 4 and 6 is sampled.

Most empirical analysis would simply assume that E(y|x) is some smooth function, so that E(y|x=5) can be read off as a weighted average of E(y|x=4) and E(y|x=6).  Put in English, one would just draw some sort of line between the x,y relation at x=4 and the x,y relation at x=6, and read off an x,y relation for x=5.

But doing this involves an important assumption: that y doesn't go flying off in one direction or another at x=5.  We actually cannot know this, because we have no observations at x=5; indeed, maybe the reason we never observe x=5 is that y is highly unstable at that point.  Just as problematic (perhaps more so) is predicting y when x > 9.
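
A toy simulation makes the point concrete (the data are simulated, not Manski's; the hole in the support between 4 and 6 echoes his figure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a hole in the support: x is observed on [0, 4] and
# [6, 9] but never between 4 and 6.
x = np.concatenate([rng.uniform(0, 4, 200), rng.uniform(6, 9, 200)])
y = np.sin(x) + rng.normal(0, 0.3, x.size)

# A line fit to the observed data will happily "predict" at x = 5 (inside
# the hole) and x = 12 (beyond the support). Both predictions rest on an
# unstated smoothness assumption; nothing in the data rules out y behaving
# wildly where x is unobserved.
slope, intercept = np.polyfit(x, y, 1)
for x_new in (5.0, 12.0):
    print(f"prediction at x = {x_new}: {slope * x_new + intercept:.2f}")
```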

I am pretty sure that it is hard to go a day without reading something that involves someone extrapolating outside the support of observed data.  Sometimes it is necessary to do this, but when we do, we should always say so.


Monday, December 22, 2014

The limits of knowledge in economics, Part I

I rather like this paragraph from Ed Leamer's Journal of Economic Perspectives piece on the utility and limits of econometrics:

Finally, I think that Angrist and Pischke are way too optimistic about the prospects for an experimental approach to macroeconomics.  Our understanding of causal effects in macroeconomics is virtually nil, and will remain so.  Don’t we know that?  Though many members of our profession have jumped up to support the $787 billion stimulus program in 2009 as if they knew that was an appropriate response to the Panic of 2008, the intellectual basis for that opinion is very thin, especially if you take a close look at how that stimulus bill was written.

The economists who coined the DSGE acronym combined in three terms the things economists least understand: “dynamic,” standing for forward-looking decision making; “stochastic,” standing for decisions under uncertainty and ambiguity; and “general equilibrium,” standing for the social process that coordinates and influences the actions of all the players...that’s what we do.  We seek patterns and tell stories [italics added].

The point is correct: among other things, we have nothing like the degrees of freedom (not exactly the same as observations, but to a lay person, close enough) necessary to identify causal relationships in the macroeconomy with anything like certainty. We are now in our 11th business cycle since World War II, and as much as we like using high-frequency data (I, for one, am guilty as charged), that really means in an important sense we have 11 observations about the post-War macroeconomy.

All that said, I did support the stimulus, because I have a Bayesian prior that Keynes was basically right.  In particular, the ideas that high unemployment can result from inadequate aggregate demand, that such unemployment is in turn an unrecoverable waste of resources, and, finally, that deficit spending can spur aggregate demand all make sense to me.

At the same time, George Akerlof lists example after example of how new classical macro-theory is rejected by the empirical evidence (such as it is), while also showing that the same evidence is consistent with Keynesian predictions.  But this is still different from confirming that Keynes was right, something we are unable to do with data.

So back to Leamer. Elsewhere in the paper, he talks about three-valued logic: the ability to use evidence to come to a yes, a no, or an "I don't know."  The honest thing to say about macroeconomics is "I don't know."  Alas, while the world is uncertain, policymakers still need to make policy decisions.


Thursday, December 04, 2014

If you think we're post-racial, read the lead article from the November 2014 American Economic Review

Alesina and La Ferrara conclude:

This paper proposes a test for racial bias in capital sentencing in the US over the period 1973-1995. We use the share of judicial errors in first degree sentencing as an indicator of racial bias of such courts. Using an originally collected dataset, we uncover a bias against minority defendants killing white victims. The bias is present, according to our test, only in Southern States. More precisely, according to our interpretation first degree courts tend to place less weight on the possibility of condemning an innocent in cases of minority defendants with one or more white victims relative to minority defendants who did not kill whites. The same does not hold for white defendants. This result is not explained by differences in observable characteristics of the crime or of the trial, nor by the ideological orientation of appeal courts.
The paper is really well done. 

Tuesday, November 04, 2014

Jung Hyun Choi and I write about Income Inequality across Cities.

We will be presenting at APPAM:

This paper investigates why the level of income inequality differs across U.S. cities. We also explore why some cities experienced faster increases in the level of inequality than others. Using the Decennial Census and the American Community Survey (ACS) from 1980 to 2011, we explore whether the disparities in the level and the changes in the level of inequality can be explained by MSA characteristics, including labor market conditions, skill distribution, residential mobility, racial concentration, industrial composition and unionization. We also examine how state level policies such as unemployment insurance benefits and minimum wage levels are associated with income inequality.

Our findings show that negative labor market conditions, concentration of skilled workers and racial segregation are positively associated with the level of income inequality. The level of inequality in these cities also tends to grow at a faster pace. While differences in the minimum wage level do not seem to have any association with income inequality across cities, we find some evidence that differences in unemployment insurance benefits and greater unionization dampened increases in income inequality.

Thursday, October 30, 2014

How people who can't do math get shafted.

I have a car (an Accord, if you must know) that is 17 months old.  When I bought the car, the dealer offered me a car loan at 0 percent interest for 36 months, so I took it.  Even in the world of very low discount rates, accepting the loan allowed me to get a further small effective discount on the car.

The dealer called me today, saying I could trade the car in for a new car and not increase my payment; the payment would simply reset for 36 months.  I told him I needed to do a little math before calling him back.  The math I did was as follows:

Value of Old Car from Kelley Blue Book + PV of 36 months of payments = Cost of New Car.

Cost of New Car - Edmunds Value of New Car = $6000.

Yes, the dealer was trying to fool me into paying $6000 for...nothing.   In my particular case, he could not profit from informational asymmetry.  But for a person with average math skills in the US?  That might be a different story.  We know this selling tactic must work sometimes, because otherwise I would not have gotten the call.
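
For anyone who wants to replicate the arithmetic, here is a sketch with hypothetical numbers (the payment, discount rate, and car values are made up, chosen only so the gap lands near the $6,000 in my case):

```python
def pv_of_payments(payment, months, annual_rate):
    """Present value of a fixed monthly payment stream, discounted monthly."""
    r = annual_rate / 12
    if r == 0:
        return payment * months
    return payment * (1 - (1 + r) ** -months) / r

# All numbers hypothetical: a $450/month payment reset for 36 months,
# a 2 percent annual discount rate, and made-up trade-in/market values.
trade_in_value = 14_000            # Kelley Blue Book value of the old car
cost_of_new_car = trade_in_value + pv_of_payments(450, 36, 0.02)
market_value_of_new_car = 23_700   # what Edmunds says the new car is worth
print(f"Implied overpayment: ${cost_of_new_car - market_value_of_new_car:,.0f}")
# roughly $6,000
```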

The first lesson from all this is we really need to do a better job teaching math.

The second lesson is that, in the meantime, we need to protect consumers from these sorts of practices.


Sunday, September 28, 2014

How the price of a Martini reveals the property value of a city

A Hendrick's Gibson is basically a commodity (although the bartender does need to know what she is doing). But a good Gibson at the Starlight Lounge in LaCrosse, Wisconsin is $8; at the Roof Garden at the Peninsula Hotel in Beverly Hills it is $16; and at the King Cole Bar of the St. Regis Hotel in New York it is $22.
Let's say the cost of the cocktail, including labor, but exclusive of real estate, is $7. Then the implicit rent you are paying for sitting in a bar in LaCrosse is $1; in Beverly Hills on a rooftop is $9; and in NYC is $15. If one consults Zillow, one will find that this ratio of 1:9:15 for real estate in LaCrosse, BH and Manhattan is pretty close to the truth.
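
The arithmetic, as a sketch (prices from the post; the $7 is the assumed non-real-estate cost of the drink):

```python
# Gibson prices from the post, less an assumed $7 non-real-estate cost,
# leave the implicit rent on the seat.
NON_REAL_ESTATE_COST = 7
prices = {"LaCrosse": 8, "Beverly Hills": 16, "Manhattan": 22}

implicit_rents = {city: p - NON_REAL_ESTATE_COST for city, p in prices.items()}
print(implicit_rents)  # {'LaCrosse': 1, 'Beverly Hills': 9, 'Manhattan': 15}
# The implicit rents stand in the ratio 1 : 9 : 15 -- roughly the ratio
# of real estate prices across the three cities, per Zillow.
```
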
One key thing--all these drinks are served in competitive markets--there is true thickness in the bar markets in these cities. And the people at the Roof Garden and the King Cole will let you sit and nurse your drink without hassling you about it.  So while having a drink in these lovely spots is very expensive, it is not a rip-off--one just has to pay the rent.
A drink at, say, Disney World, or FedEx Field, doesn't count, because if you want to stay at the park/game and have a drink, you have to pay a monopoly price (in the case of the stadium of the Washington Football Team, you might even pay for a drink that is past its expiration date).
Needless to say, further research is necessary.

Saturday, September 27, 2014

How Piketty's care with language can improve economics

When I was a freshman in college, I read Fogel and Engerman's Time on the Cross.  I loathed the book, because it implicitly endorsed the idea that it is OK, "in the interest of science," to dehumanize those African-Americans who were placed in bondage by viewing them as capital (I loathed it for other reasons as well, but that is for another time). It also contributed to the broad view currently within much of mainstream economics that it is (1) acceptable to treat human beings as objects, and (2) embarrassing to embrace humanity.  I was embarrassed that the book helped Fogel ultimately win the Nobel Prize in economics.

I thought of Fogel and Engerman again when a recent review in the Economist of Edward Baptist's The Half Has Never Been Told complained that a book based on the perspective of slaves could not be objective.  (To the Economist's credit, it repudiated the review and apologized for allowing it to be printed, but also kept a link to it so that readers could see how misguided it was.)  Again, an allegiance to "scientific detachment" led to a bizarre view of an evil institution.  A "detached" view of slavery helps legitimize its practice, and thus is not in any way objective.

And so we come to Thomas Piketty's Capital in the 21st Century.  I have some issues with the book, but I love the first third of it.  I particularly like his treatment of "human capital:"
There are many reasons to exclude human capital from our definition of capital.  The most obvious is that human capital cannot be owned by another person or traded on a market.
The language of economics often treats people as commodities: the phrases "representative agent" and "human capital" are examples of this.   Sometimes these phrases are useful abstractions, but they also contribute to the sometimes pernicious indifference of mainstream economics to issues of justice.  Piketty's take on human capital might make us a little less indifferent.