Who said markets have to clear?

May 19, 2014 4 comments

Steve Randy Waldman has a good post which is nevertheless completely wrong.   I’m normally a fan of interfluidity, so I feel like I have to respond.

The basic argument, coming mostly from macro and international economics types, is that microeconomics involves just as many silly, unrealistic assumptions as macroeconomics… unless we take our lessons from macro!

This is an argument that I’ve seen come up several times online now, but so far every example people offer is a simplification that non-micro specialists make about microeconomics, not a feature of the actual study of micro as it is done by micro-specialists.

Waldman’s post is a great example of this. He goes on at length to explain that consumer surplus and welfare are not the same things. Who knew! Of course, I forgot to mention consumer surplus specifically in that post (I knew I’d forgotten a few measures of efficiency, but so it goes). This is supposed to be his case against market clearing. Hmmm. See, the problem with that is that “market clearing” is not an assumption of microeconomics. Oh sure, there are microeconomic models in which market clearing is assumed. In fact, there’s a name for the class of models assuming market clearing: general equilibrium (all markets clear at endogenous prices). That’s just the microeconomic model used in the micro-foundations of macroeconomics. It’s not the fault of the microeconomists that you macro-types are using a model you don’t like.

He then goes on to rant about the problems with using willingness-to-pay as a measure of surplus. To which I ask: who’s using willingness-to-pay? In an intermediate-level microeconomics class, your prof should have told you that willingness-to-pay is not well defined unless the utility function is quasi-linear (a very special functional form!). As an aside, the demand curve also requires quasi-linearity.
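To see the point, here’s a small sketch (the utility functions are made up for illustration) of how willingness-to-pay behaves with and without quasi-linearity:

```python
# Sketch: WTP for one unit of a good is the m solving u(1, income - m) = u(0, income).
from scipy.optimize import brentq

def wtp(u, income, x=1.0):
    return brentq(lambda m: u(x, income - m) - u(0.0, income), 0.0, income - 1e-9)

# Quasi-linear: u(x, m) = v(x) + m. WTP equals v(x), whatever the income.
quasi_linear = lambda x, m: 2.0 * x ** 0.5 + m
print(wtp(quasi_linear, income=10.0))    # 2.0
print(wtp(quasi_linear, income=100.0))   # 2.0 -- a single, well-defined number

# Not quasi-linear (a Cobb-Douglas-style example): WTP scales with income,
# so there is no single "willingness to pay" for the good.
cobb_douglas = lambda x, m: ((1.0 + x) * m) ** 0.5
print(wtp(cobb_douglas, income=10.0))    # 5.0
print(wtp(cobb_douglas, income=100.0))   # 50.0
```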

Don’t get me wrong… there really are understudied problems in micro. “Dynamic efficiency,” which Waldman mentions in passing, happens to be one: there really isn’t any neutral measure of dynamic efficiency. How to trade production today for production tomorrow is inherently bound up in the problem of choice, and we know that the time-separable, constant-discount-factor utility which we use for the purpose doesn’t actually work (it can’t explain lifetime savings data, for example). It’s just that no one knows what to do about that, though this is an active area of microeconomic research. Weirdly, though, this isn’t the issue that is animating Waldman.

So we in micro have our problems, but these are never the problems we’re criticized for.   Instead Waldman (and others) criticize us for the dumb simplifying assumptions macroeconomists make and then blame us for the resulting jumble.


Too much Efficiency

May 14, 2014 2 comments

Recently, I wrote about the confusion, common among economists, between optimality and efficiency.   Of course, everyone with an economics education knows the difference, but the point I made then (and now) is that there is a tendency to muddle the distinction in practice.  Today I want to talk about (at least part of) the reason.

To illustrate the issue, let me ask a question: is gift exchange (as opposed to market exchange) efficient, or is it inefficient? This is an important question since anthropologists tell us that gift exchange is a common mechanism in primitive economies, and also because certain political groups favor gift exchange over market exchange as a means to escape inequality (as an aside, I don’t understand how anyone could think that gift exchange would decrease inequality rather than make it much, much worse… those without social networks would be shut out entirely! But I digress).

The typical answer that you might hear from an economist is “of course gift exchange is inefficient!” I agree; that’s probably correct. I’m also reasonably sure that gift exchange is efficient. I believe both of those statements are correct. So what’s going on here? How can gift exchange be both efficient and inefficient? Easy. There are at least two very different notions of efficiency involved.

In fact, here, off the top of my head, are some of the notions which economists sometimes refer to as “efficient” (although I make no claim that this list is exhaustive):

  1. Pareto Efficiency:  no one can be made better off without making someone else worse off.
  2. Optimality:  the preferred outcome of a hypothetical social planner.
  3. Allocative Efficiency:  only the proper balance of goods is traded (i.e. marginal benefit = p).
  4. Production Efficiency:  the proper balance and number of goods are produced (i.e. p = marginal cost at the production frontier).
  5. Cyclical Efficiency:  my own term… a catchall for everything from Okun gaps to sub-optimal inflation targets.
  6. Informational Efficiency:  all available information is used.
  7. Dynamic Efficiency:  the time-path of consumption is not strictly dominated by another (i.e. compared to a time-separable, constant discount utility).

So, back to my example: gift exchange.   In which sense is gift exchange inefficient?   That’s easy.  The inefficiency is informational and allocative.  That is to say that people end up with the wrong goods and they end up with the wrong goods in part because there is information available in the population which is not being utilized (namely, the fact that I know what I want, but you probably don’t).   This is the standard answer, and it’s very intuitive for the average non-economist.

So in which sense is gift exchange efficient? It turns out that gift exchange is Pareto efficient. Don’t believe me? Think that’s impossible since everyone supposedly has the wrong allocations in a gift-exchange economy? Ah ha! The thing is, Pareto efficiency is evaluated at a constant allocation. So, let’s evaluate the problem at a constant allocation: person i has allocation x_i. In case ME, x_i is arrived at through pure market exchange. In case GE, x_i is arrived at through pure gift exchange. So here’s the issue: is u_i(x_i) higher in case ME or in case GE? The right answer is GE, so everyone is made better off in the gift-exchange counterfactual. Frankly, anyone who’s ever gotten the gift they wanted ought to know that such gifts are cherished, and experimental studies have backed that observation up.

We can always ask why this is the case. Personally, I think it’s as simple as noting that reciprocity is a more powerful human emotion than greed, and, unlike the other notions of efficiency, Pareto efficiency is sensitive to that sort of thing.
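If it helps, here’s a toy version of the constant-allocation comparison. To be clear, the numbers and the additive “reciprocity” bonus are my own illustrative assumptions, standing in for the extra value of a cherished gift, not a claim about the true utility function:

```python
# Toy constant-allocation comparison between case ME and case GE.
RECIPROCITY = 0.5                         # hypothetical warm-glow bonus
allocation = {"alice": 3.0, "bob": 2.0}   # the same x_i in both regimes

def utility(x, via_gift):
    base = x ** 0.5                       # consumption utility from the goods
    return base + (RECIPROCITY if via_gift else 0.0)

for person, x in allocation.items():
    u_me = utility(x, via_gift=False)     # case ME: market exchange
    u_ge = utility(x, via_gift=True)      # case GE: gift exchange
    print(person, round(u_me, 3), round(u_ge, 3), u_ge > u_me)
# Holding allocations fixed, everyone is better off under GE, so the
# market-exchange counterfactual does not Pareto dominate gift exchange.
```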

Of course, the fact that gift exchange is allocatively and informationally inefficient means that it’s not a very good mechanism for running an economy. Economists-as-engineers really ought to care about such things, and we do! Still, it’s a reminder that we should always be careful to keep in mind which notion of efficiency we are talking about.

The Game Theory Explanation for Labeling of Terror Groups

The new right-wing meme seems to be that the Obama administration–specifically Hillary Clinton as Secretary of State–was too slow to label the group Boko Haram a terror organization. I was just watching Michele Bachmann on CNN dismiss the argument, so this is unlikely to be the next “Benghazi”. Still, just in case, let’s ask ourselves if it makes sense to label groups terror organizations early (early as in before clear acts of terror).

The question is whether a group engaged in extremist rhetoric or in clear acts of insurrection (i.e. acts not clearly directed at civilians for the purpose of instilling terror in a population or sub-population) ought to be labeled terrorist by an outside party not directly involved in the conflict (such as the US with respect to the conflict in Nigeria).

All that matters here is that some people will view the US as a neutral party, and some subset of those who do might be swayed by the US’s designation of the Group (I don’t want to keep referring to them as Boko Haram because I want my argument to be more general than that).

Under these conditions this is how I see the problem:

  1. The Group has to weigh the costs and benefits of its actions… the benefits are tough to quantify, since they involve the Group’s views on its goals and the probability of success in moving toward those goals.  The costs are much easier to quantify, though: public opinion, changes in funding and political support, or changes in the military situation.
  2. The point I want to make is that if the US labels a group a “terror organization”, that will specifically affect the Group’s calculus on the cost side of the cost-benefit calculation.  Specifically, a group so labeled almost certainly has fewer legitimate forms of funding, and its public support may suffer.

With these two assumptions, it is clear to me that labeling such a group can be a kind of self-fulfilling prophecy.

In a backward-looking sense, the Group–so labeled–is certainly worse off. Legitimate funding sources may dry up, since many outsiders who sympathize with the Group’s goals may hesitate to associate themselves with terrorists. More than that, the US and Europe have used legal sanctions to shut down terror funding networks.

In a forward-looking sense, though, the Group is now much more likely to commit acts of terror. After all, if the Group is weighing a tactic (say, kidnapping schoolgirls to sell into slavery), then the Group will view its costs for this action as lower, since some of the penalties for the action have already been applied: funding has already been constricted, and at least some public opinion has already turned.

This is always and everywhere the problem with administering the punishment before the crime: the crime itself becomes costless. This is a consequence of the one-shot deviation principle–if the punishment phase of a repeated game is coming regardless of one’s own actions, then the best response is to play the best response to that punishment, which is the stage game’s Nash equilibrium. So the only equilibrium of the game is the repeated stage-Nash outcome. For those unfamiliar with repeated games, the stage-Nash outcome is the worst possible outcome, the one without any possibility of cooperation. That is, the game becomes a prisoner’s dilemma, played over and over again.
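For those who want the logic spelled out, here’s a minimal repeated-prisoner’s-dilemma sketch; the payoffs and discount factor are hypothetical, chosen only to make the comparison easy to read:

```python
DELTA = 0.9  # common discount factor (hypothetical)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def value(ours, theirs):
    """Discounted payoff to us, given two (long) streams of actions."""
    return sum(DELTA ** t * PAYOFF[(a, b)]
               for t, (a, b) in enumerate(zip(ours, theirs)))

T = 500  # a long horizon approximates the infinite repeated game

# Regime 1: conditional (grim-trigger) punishment.
comply  = value(["C"] * T, ["C"] * T)                # cooperate forever: ~30
deviate = value(["D"] * T, ["C"] + ["D"] * (T - 1))  # defect, then get punished: ~14
assert comply > deviate  # the *threat* of punishment sustains cooperation

# Regime 2: punishment applied up front, regardless of our play.
comply_p  = value(["C"] * T, ["D"] * T)  # cooperate anyway: ~0
deviate_p = value(["D"] * T, ["D"] * T)  # stage-Nash forever: ~10
assert deviate_p > comply_p  # punishing first makes defection the best response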

It’s always tempting to think that preemptive penalties yesterday could have stopped today’s tragedies, but the under-appreciated cost of preemptive actions is the risk of causing those very tragedies.


Embedding Cruel Biases in Policy Choices: Credit Checks

Here’s a question that perhaps someone can answer for me:  why do businesses check the credit history of potential hires?   Seems obvious, right?   Except it’s not… when you stop to think about it, it’s a mean and idiotic practice with no reasonable justification.

Of course, credit history is nothing more or less than a measurement of financial stress.   It’s not a perfect measurement, to be sure… someone with (otherwise) good finances could have a poor credit score simply from not paying bills on time which could have been paid; or someone in financial stress can manage to maintain good credit ratings by borrowing from informal channels (family and friends) which don’t report to credit agencies.

Either way, the difference between your credit score and your financial stress is a matter of measurement error.   In the first example, the individual could borrow more–a rational bank would like to lend more, if given perfect information–but that particular individual probably needs the additional (unrelated) service of auto-billpay.   If the bank could swap auto-billpay for credit, it would surely do so.   In the second case the individual is in distress, and it is only the lack of communication with credit agencies which obscures this.

So, your credit score is a measure of your financial stress.  So what?

So why do businesses use credit checks to screen their hires?

If someone is unemployed, especially if that someone has been unemployed for quite a long time, then that someone is almost certainly under financial stress. Credit checks (among other things, I’m sure) build a bias into the system against the long-term unemployed. That’s cruel. Why do this?

Financial stress is unlikely to be correlated with future productivity. Why would it be? Besides, if you wanted to discriminate against the long-term unemployed, all you need do is look at their work history, so that can’t be the motive (not that it would be a good motive, anyway).

I think the assumption is that credit history will be correlated with trustworthiness… but again, why? The potential hire wouldn’t be paying their employer; it’s the other way around. So even if the hire is someone who skips paying their bills on time, how could that affect their employment?

I’m racking my brain to think up a rational reason for this cruelty and mostly coming up short. The only explanation I have is one I try to stay away from: classism. It is those people, the unwashed masses, who are the ones with poor credit scores. We don’t want their kind here.

As I said, not very convincing.

As a social issue, the people who struggle to pay their bills, and who are credit risks as a result, are precisely the people we should be prioritizing when it comes to getting people back to work. Even if businesses have a rational reason for doing this–one which I haven’t thought of–as a society we ought to be discouraging it.

The Consumption Model of Inequality

With all the talk about inequality recently, I thought it was time for me to lay out my model of the political dynamics around inequality. So let’s forget briefly about IMF studies and Piketty and simply ask ourselves how we can use the machinery of economics to understand the political cleavages inequality engenders. As an aside, while I’ve never seen this model in the literature or in anyone’s course notes, I’d nevertheless be shocked if I were the first to think in these terms… I just don’t know whom to credit with the idea (I suspect the idea is so simple and obvious that few bother to go through the details).

The basic idea is that I’m going to view equality as something that makes agents more satisfied, in the sense that measures of inequality, such as the Gini coefficient, enter each agent’s utility directly. So, if there’s a vector of “normal” goods, x_i for each agent i, and G is the Gini coefficient, then agent i has utility U_i(x_i, G), where dU_i/dG < 0 (inequality is a bad). I’m implicitly viewing this as a static model, but it would be a simple matter to include time.

So, the economics here stems from the fact that the level of inequality is shared by all agents–that is, it is a pure public good (non-rival, non-excludable). Beyond that simple insight, there’s only one other thing we need to know: how wealth is redistributed to reduce inequality. You can use a simple mapping, G’ = R(T, G), where T is the tax-and-transfer scheme and G’ < G (this would make the problem a standard public-goods problem, which is good enough to account for half the story).

Or, to be more realistic… let w be the vector of agents’ wealth (w_i for each i… for simplicity, index agents so that w_i < w_j for i < j, so that the cumulative sums of w trace out the Lorenz curve, and let W be aggregate wealth). Then the Gini coefficient is G = 2*Sum_i [ i*w_i ] / (N*W) – (N+1)/N. A valid redistribution maps w’ = R(w) such that the properties (i) W’ = W, (ii) w_i < w_j ==> w’_i < w’_j, and (iii) G’ < G all hold. Graphically, this means that R maps the Lorenz curve to a (weakly) higher Lorenz curve while keeping total wealth constant.
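For concreteness, here’s a minimal sketch of the Gini formula and the three validity conditions on R; the wealth vector and the flat-tax-plus-rebate scheme are hypothetical:

```python
import numpy as np

def gini(w):
    """Gini coefficient for a wealth vector sorted ascending,
    using the formula above: G = 2*Sum_i i*w_i/(N*W) - (N+1)/N."""
    w = np.asarray(w, dtype=float)
    n, W = len(w), w.sum()
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * w) / (n * W) - (n + 1) / n

def is_valid_redistribution(w, w_new):
    """Check properties (i)-(iii) from the text (w assumed sorted)."""
    return (np.isclose(w_new.sum(), w.sum())   # (i)   W' = W
            and np.all(np.diff(w_new) > 0)     # (ii)  ranking preserved
            and gini(w_new) < gini(w))         # (iii) G' < G

# Hypothetical example: a 20% flat tax on wealth, rebated lump-sum.
w = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
w_prime = 0.8 * w + 0.2 * w.mean()
print(gini(w), gini(w_prime))               # ~0.46 -> ~0.37
print(is_valid_redistribution(w, w_prime))  # True
```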

Public goods models are not trivial to solve, although we know in general that inequality will be “overproduced” in the simple version of this model (with G’ = R(T, G)).

In the more complex version (with w’ = R(w)… effectively, this version models the technology for reducing inequality directly), there are two effects. The under-provision of public goods is still an issue here… but only for those rich enough to pay net taxes (those for whom t_i = w_i – w’_i > 0… put these agents into a new set, I). The set I is a function of how much redistribution is actually done, but it is only the agents in I for whom the public-goods game is non-trivial (those outside I, by definition, receive lower inequality without paying net taxes… a win-win situation for them). Generally (but not universally), as I expands there are more resources available to redistribute and fewer people to redistribute towards. The marginal agent (the richest agent not paying net tax) by definition balances the benefit of reduced inequality against the tax bill she would face under a more aggressive redistribution.

So here’s what’s interesting… this model (intuitively simple as it is… though difficult to solve) exhibits tipping points. Don’t believe me? Consider this thought experiment: increase W by adding to w_i only for i in I. Given the right initial setup, nothing will happen until G rises enough that the set I expands… basically, at some point those not previously in I will demand to contribute (on net) to reducing inequality.

Of course, the details depend on R and on how R is chosen (simple majority voting?), but the framework for thinking about the politics of inequality is here. Note that if Piketty or the IMF are correct, then this model will understate the degree to which equality is under-provided.

Efficiency, Optimality and Values

May 4, 2014 2 comments

For the record, I see this post as a continuation of (and yet another response to) the Sargent-Smith/Krugman-House debate over Sargent’s equity-efficiency assertion, which I’ve commented on before. The latest round of posts related to the topic has this debate trending in what I think is an odd direction, putting me in the awkward position of defending a concept, Pareto efficiency, which I’d be more comfortable criticizing.

My proximate purpose is to respond to Simon Wren-Lewis’s new post, which I think illustrates the problem–this is, I think, one of the biggest confusions among economists about our own subject (not that Wren-Lewis is necessarily confused here, but he’s at least bringing up the problem). The key graf:

Why is there this emphasis on only looking at Pareto improvements? I think you would have to work quite hard to argue that it was intrinsic to economic theory – it would be, and is, quite possible to do economics without it. (Many economists use social welfare functions.) But one thing that is intrinsic to economic theory is the diminishing marginal utility of consumption. Couple that with the idea of representative agents that macro uses all the time (who share the same preferences), and you have a natural bias towards equality. Focusing just on Pareto improvements neutralises that possibility. Now I mention this not to imply that the emphasis put on Pareto improvements in textbooks and elsewhere is a right wing plot – I do not know enough to argue that. But it should make those (mainstream or heterodox) who believe that economics is inherently conservative pause for thought.    

The problem is that Pareto efficiency and optimality are not the same things and cannot (or should not) be used interchangeably. In fairness, when I’m being sloppy, I do the same thing; but it’s important to remind ourselves why this is a mistake.

So to remind ourselves, what is optimality and what is efficiency?

Optimality is the solution to a kind of thought experiment; the answer to the question of what a benevolent, god-like social planner would do if it had complete control of the economy (or “constrained optimal” if the social planner must work under constraints). The advantage of the approach is that it produces unambiguous outcomes (often a single point in allocation-space). The disadvantage is that the planner’s problem is by definition not values-neutral. Why do I say it’s not values-neutral? Because you need to define the planner’s objective function (i.e. the social welfare function), and the social welfare function defines the trade-offs the social planner is willing to make, for example, when balancing equity and efficiency. “All I care about is Bill Gates’ wealth” is an acceptable, if odd, social welfare function, as is “complete equity at all costs”. The general case is somewhere in between these two.

Efficiency means that there are no unexploited gains (no one can be made better off except at the cost of making another worse off). Contra Wren-Lewis, I want to argue that this is very much a values-neutral idea. To see why, consider this little factoid: regardless of the social welfare function you choose, the solution to every planner’s problem is Pareto efficient. The converse is not necessarily true (Pareto efficiency is a necessary, not a sufficient, condition of optimality… as an aside, this is where the confusion–I think–comes from, since economists often refer to the planner’s solution as “efficient”). So here’s the thing: for every point in the Pareto set there is a social planner who would choose that point as the optimum (or, you might say, there’s a set of values which corresponds to each point in the Pareto set). That’s the sense in which Pareto efficiency is values-neutral.
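To see the factoid in action, here’s a toy sketch (the two-agent economy and the welfare functions are made up for illustration): split one unit of a good between agents with utilities sqrt(x) and sqrt(1-x), so every split is Pareto efficient, and watch different planners pick different points:

```python
import numpy as np

# Toy two-agent economy: split one unit of a good; u1 = sqrt(x), u2 = sqrt(1-x).
# Every split is Pareto efficient here (no waste), so the whole line is the Pareto set.
x = np.linspace(0.0, 1.0, 10001)
u1, u2 = np.sqrt(x), np.sqrt(1.0 - x)

# Different social welfare functions (i.e. different values) pick
# different points out of the same Pareto set.
swfs = {
    "utilitarian":        u1 + u2,
    "Rawlsian (maximin)": np.minimum(u1, u2),
    "Gates-ist":          u1,             # "all I care about is agent 1"
    "weighted (1:3)":     u1 + 3.0 * u2,
}
for name, welfare in swfs.items():
    print(f"{name:20s} -> optimum at x = {x[np.argmax(welfare)]:.2f}")
# Every one of these optima is Pareto efficient, but the planner's
# values select the point; Pareto efficiency alone is silent among them.
```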

What Wren-Lewis is arguing about is a slightly different issue. Is the search for Pareto improvements also values-neutral? I think most economists would say ‘yes’. After all, no social planner would be any worse than indifferent to a Pareto improvement (a valid social welfare function is weakly increasing in the well-being of every individual agent in the economy).

Does that actually make a Pareto improvement values-neutral, however? No, of course not (this is what I think Wren-Lewis has in mind, but I’m only guessing). A Pareto improvement shifts the outcome in allocation-space, but as a general matter a Pareto improvement “picks a direction”, and different social planners will disagree about whether it is the correct direction to take. Some social planners would even prefer to take away from some agents to give to others. To put it more simply: if you keep exploiting Pareto inefficiencies randomly until you reach an efficient outcome, is the result optimal? The answer is that, with probability one, it will not be.
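Here’s the same toy economy as above, now run as a random sequence of Pareto improvements. The starting point and step sizes are arbitrary; the point is only that the process generically lands somewhere other than any particular planner’s optimum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Start with most of the good unallocated (an inefficient point) and hand
# out the slack in randomly chosen Pareto-improving steps: some agent gains,
# nobody loses. Stop when nothing is wasted (an efficient outcome).
def random_pareto_improvements(x, max_step=0.05):
    x = x.copy()
    while x.sum() < 1.0 - 1e-9:
        i = rng.integers(2)  # a random agent gains a random amount
        x[i] += rng.uniform(0.0, min(max_step, 1.0 - x.sum()))
    return x

x = random_pareto_improvements(np.array([0.05, 0.05]))
print("endpoint:", x)                          # efficient, but which point?
print("welfare :", np.sqrt(x).sum())           # generically below...
print("optimum :", np.sqrt([0.5, 0.5]).sum())  # ...the utilitarian maximum
```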

I’m not sure if I have any other comments to make… just a reminder to myself and others to be careful regarding efficiency and optimality.   I do suspect that the “confusion” here reflects a preference among some economists for the “core” solution concept of cooperative games… but I need to think about that a bit before I make that argument.  So I’ll leave this post here for now.

Who’s in the echo chamber?

April 29, 2014 1 comment

Via Krugman, I see this post by Chris House talking about an efficiency-equity trade-off.  House, of course, is just writing from the standard, near consensus view within economics.   The thing is, though, there is no evidence or theory (not depending on modelling choices) within economics which supports the view that there is necessarily a trade-off.

Let me be clear.  I’m not saying that there is no trade-off, I’m saying that House and Sargent (whose speech to some Berkeley undergrads started this whole blog debate) are making a claim which, while commonly believed by many economists, has no other justification.

Sargent’s speech lists 12 principles of economics that everyone ought to know; the principle in question is his assertion that:

There are tradeoffs between equality and efficiency

Again, I’m not saying this is wrong; I’m saying there is no justification.

So, what is House’s case?   Basically this:

The truth is that if we want to really attack the problem of income inequality (promote equality and help the poor) then we are going to have to take stuff away from richer people and channel it to poorer people. This kind of action will most likely have consequences for markets and these consequences will be unsavory.

Taking stuff away from the rich and giving it to the poor equals unsavory consequences… and you can justify this generally, without invoking a model-specific result, right Chris?

I’ve written about this before, but this particular argument is one that I perhaps didn’t adequately deal with, so here goes. House is saying that correcting for a… let’s call it a maldistribution… will require taxes and transfers. Taxes and transfers have efficiency costs; ergo, an equality-efficiency tradeoff. QED.

That’s not how it works, Chris.

The way it actually works in economics, when we want to study efficiency, is that we imagine a god-like social planner and ask ourselves “what would the social planner do”… so what would a social planner view as a maldistribution of wealth? (I’m presuming that utility is weakly increasing in wealth, not income, btw.) You can view this in several ways, but the simplest intuition is just this: GDP, which House is implicitly using as a measure of well-being (although it’s nothing of the sort), is “additive”, but, all else equal, social welfare is “multiplicative” (a consequence of diminishing marginal utility, i.e. convex preferences). Maximizing a sum (or, equivalently, an arithmetic mean) would leave a social planner indifferent to distribution… there is no maldistribution in House’s world. Social welfare, on the other hand, is maximized at the point of equality (think of maximizing the geometric mean instead).
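A two-number example makes the additive/multiplicative point concrete (log utility is just one convenient concave choice, not the claim):

```python
import numpy as np

# Fixed total wealth, two distributions. The "additive" (GDP-style) view
# can't tell them apart; a concave welfare function strictly prefers equality.
equal   = np.array([50.0, 50.0])
unequal = np.array([ 1.0, 99.0])

print(equal.mean(), unequal.mean())                # 50.0 vs 50.0 (indifferent)
print(np.log(equal).sum(), np.log(unequal).sum())  # ~7.82 vs ~4.60 (prefers equality)
```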

The reason that there would be any “optimal” inequality at all, then, is that there is an informational rent associated with figuring out which factors are most productive and with encouraging those factors to be active. This is the heart of Mirrlees’ optimal taxation result. So House has things backwards: a sufficiently god-like social planner would make sure that everyone has equal wealth, ceteris paribus, and any deviation from that result has to be justified on informational grounds. Inequality can only be justified as constrained efficient, which is not to be confused with efficient.

More than that, though, it’s just not clear that taxes must necessarily cause inefficiency. It’s not at all difficult to tell a story in which wealth taxes, for example, encourage capital formation by, say, encouraging complementary human capital to accumulate, or by encouraging productive capital over “frivolous” capital (by which I mean things like McMansions). Taxing productive capital may increase its after-tax cost, but the redistribution it funds can also increase the value of its production stream.

So, are efficiency and equality free of any tradeoff? Not necessarily, and that’s certainly not what I’m saying. No, the point is that Sargent’s “principle” is not some immutable law of nature, but a model-dependent best guess. You might say that the sign of the tradeoff is ambiguous in theory… and it is at this point that I should mention that the only empirical evidence I know of which directly tests the sign of that tradeoff is those IMF studies suggesting that equality and efficiency move together.

So, Chris, if you’re reading this, I leave you with the following wise words:

Talking in an echo chamber can be fun but public intellectuals like [House and Sargent] have a greater responsibility to self-censor than most because they have large audiences. They have a responsibility to the public and also a responsibility to their… readers who take their statements to heart

Just sayin’…