
Archive for the ‘foundations of econ’ Category

Economic Rights are Positive, Political Rights are Negative

Simon Wren-Lewis has a great post up which relates to a point I’ve wanted to make for a long time…  It’s not really a new point, per se (I heard similar points being made with respect to “right-to-work” laws, and there is this from Brad DeLong on a related issue), but… well… Simon just has a great thought experiment:

Employees are already beset by red tape if they try to improve their working conditions. Now the UK government wants to increase the regulatory burden on them further, by proposing that employee organisations need a majority of all their members to vote for strike action before a strike becomes legal, even though those voting against strike action can still free ride on their colleagues by going to work during any strike and benefiting from any improvement in conditions obtained. Shouldn’t we instead be going back to a free market where employees are able to collectively withhold their labour as they wish?
I doubt if you have ever read a paragraph that applies language in this way. Yet why should laws that apply to employers be regarded as a regulatory burden, but laws that apply to employees are not?
Here’s how I’d make the same point more generally:  Economic rights are always positive rights–rights provided by the government to expand citizens’ choice sets–while political rights are always negative rights–that is, freedom from interference.
The fundamental unit of an economy–the economy’s “atom”, if you will–is the Transaction.  I do something for you, then you do something for me.  That’s what makes an economic system economic.
But a transaction cannot be defined for an individual.  Transactions always involve two (at least!).   That’s key.  There’s you, there’s me and we’re trying to trade something.  So what does this have to do with economic rights?
Consider freedom of contract.  There’s you, there’s me and we’re trying to come to agreement.  Obviously, we can come to agreement without government interference.  If we do, though, what happens when both of us are caught off guard by how events actually play out?  Well, we renegotiate.  But the possibility that we will renegotiate itself makes certain contracts/agreements undesirable, because each of us knows that we may be at a disadvantage if we have to renegotiate (this is called the hold-up problem).
Doesn’t sound like a big deal?  Consider the example of the humble spot transaction.   There’s no government, so suppose I have a horse you need to get around on (since there are no roads anymore) and you have a fistful of gold (since there’s no currency without government to provide it).  I’d like to trade my horse for your gold.  We meet, I get off my horse, you hand me your gold… now, what’s to stop me from jumping back on my horse and riding away?  Now I have both horse and gold.
Stealing the horse is a “spot-renegotiation” because it happened while the transaction was proceeding.   It just so happens that at one point during the transaction, I had all the bargaining power because I was physically in possession of both goods.   If we can’t manage to trade horse for gold simultaneously, this will always happen to one or the other of us, so we probably won’t even bother trying.   Really, this is a point you should already be familiar with from movies or TV, when the good guys need to trade something with the bad guys:   doesn’t something always go wrong?
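Here’s a toy sketch of the spot trade as a two-stage game (my own illustration, not anything from Simon’s post; the payoffs and the `penalty` parameter are made up). The buyer reasons backward from what the seller will do once he holds both goods:

```python
# Toy model of the horse-for-gold trade (illustrative numbers only).
# The seller receives the gold first; without enforcement, his best move at
# that point is to ride off with both goods, so no trade ever starts.

def seller_payoff(steals: bool, penalty: float) -> float:
    horse, gold = 1.0, 1.0             # stylized values of the two goods
    if steals:
        return horse + gold - penalty  # keep both, maybe get punished
    return gold                        # honor the deal, keep only the gold

def trade_happens(penalty: float) -> bool:
    # The buyer anticipates the seller's choice and only hands over the gold
    # if honoring the deal is the seller's better option.
    seller_steals = seller_payoff(True, penalty) > seller_payoff(False, penalty)
    return not seller_steals

print(trade_happens(penalty=0.0))  # False: no enforcement, no trade
print(trade_happens(penalty=2.0))  # True: a credible punishment makes the deal possible
```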
On the other hand, if there’s a government, guys with guns come to my home, take the horse back and lock me in jail.  That makes it easier for us to come to an agreement in the first place so we can trade that horse for gold.  The government provides the ability to make contracts which will be upheld by our counter-parties because the government will actively use force to make sure everyone lives up to their agreements.  It’s what I called the “visible hand” of the market in previous posts.
This need not be a particularly anti-libertarian view.  After all,  this is the reason that libertarian philosophers like Nozick argue for minarchy rather than anarchy.  The government needs to exist to provide economic rights (at a minimum), because economic rights are always positive rights–they only exist when the government provides them.   Whenever people transact, there must always be someone to say to them, “live up to your agreement!”   And that someone is “the government” whether they want the designation or not.
The libertarian mistake is thinking that economic rights flow from a principle of non-interference.  Government is a necessary, if silent, collaborator in every transaction because government is what makes everyone play by the rules.  Non-interference is a property of negative rights and only negative rights are political rights.
Political rights govern our interactions with government itself.   “Free speech” doesn’t mean I can say what I  want without consequence.   I can get fired for trashing my company in front of clients, and rightly so.  Free speech only means that the government has no standing to punish me for my views about government.   Political rights are the activities with which government can’t interfere.
Libertarians are trying to have it both ways.  They want economic rights (ownership, contract) to be supreme over political rights (suffrage, speech, religion), but they also want non-interference to be supreme over positive rights.   That is, positive rights over negative rights, and negative rights over positive rights.
Nozick’s Wilt Chamberlain thought experiment tries to get around this problem by simply ignoring issues of ownership (an economic right) in his theory of distributive justice.  It’s not that he’s precisely wrong; it’s just weird to say “a distribution is fair if it resulted from non-interference”… right after assuming that all ownership rights are sorted out and agreed upon by all parties.    Agreeing on and sorting out those property rights is exactly the reason government exists.   It’s almost like saying “assume government isn’t needed, ergo non-interference by government is best”.  Well, duh.   It’s easy to miss this, because Nozick concentrates on Chamberlain’s human capital–which no one objects to his owning–and ignores everything else.
So, libertarians, this is your challenge.   Choose which is more important to you: economic rights above political rights, or non-interference/negative rights above positive rights.   Those positions are in direct conflict.
I’ve ranted on long enough.  So let me leave it at that.

Noah Smith catches the Demand-Denialist Bug

I like Noah Smith, but his scientific-skepticism meme-immune system appears to be very weak.  The latest case in point is Noah’s post defending the use of Search Theory in macroeconomics against John Quiggin, who rightly points out that Search Theory is incapable of explaining cyclical unemployment.  I’m not really going to add to what Quiggin wrote; instead I’m only interested in Noah’s response.  Before I go on, I should link to Noah’s excellent critique of Kartik Athreya’s Big Ideas in Macroeconomics, to which Quiggin is responding.

Conceding that Search Theory doesn’t explain all the employment patterns, Noah goes on to criticize “demand” explanations:

This is a simple answer… Economists are used to thinking in terms of supply and demand, so the AD-AS model comes naturally to mind… so we look at the economy and say “it’s a demand problem”.

But on a deeper level, that’s unsatisfying – to me, at least.

…what causes aggregate demand curves to shift?…how does aggregate demand affect unemployment? The usual explanation for this is downward-sticky nominal wages. But why are nominal wages downward-sticky? There are a number of explanations, and again, these differences will have practical consequences.

… is an AD-AS model really a good way of modeling the macroeconomy?… The idea of abandoning the friendly old X of supply-and-demand is scary, I know, but maybe it just isn’t the best descriptor of booms and recessions…

… I’m not really satisfied by the practice of putting “demand in the gaps”. If “demand” is not to be just another form of economic phlogiston, we need a consistent, predictive characterization of how it behaves…

Wow, is that a lot of BS jammed into a short space.   Noah is a strong proponent of a more empirical and predictive macroeconomics, which I agree with!  But this post suggests that Noah doesn’t understand the other side of the problem: model selection and Occam’s razor.

How do you know which model is the correct one?  You can’t just say that it’s the model that survived empirical tests, because there are an infinite number of possible models at any given time which have survived those tests.   All that data you’ve collected tells you exactly nothing until you figure out which of that continuum of possible models you should treat as the preferred one.   Occam’s razor plays the keystone role in scientific methodology as the selection criterion.   (If you were a philosopher of science you’d spend a lot of time trying to justify Occam’s razor… Karl Popper believed it gave the most easily falsified model among the alternatives… but as a practical scientist you can just accept it.)

Now that we’ve all accepted that Occam’s razor must be used to winnow our choice of models, we should spend some time thinking about how to use Occam’s razor to do this in practice.   That would require a post in itself, so instead let me just mention one particular criterion I use:  at any given time, who is the claimant?   In science, the burden of proof is always on the claimant, because the claimant’s model at any given time is almost always less simple than the accepted model given the field’s accepted information set.

As a heuristic, the claimant’s model generally does not pass the test of Occam’s razor until new information is found and added to the field’s information set.   It’s possible (and does happen) that a heretofore unknown or unnoticed model is simpler than the accepted one, but that’s rarer than you might think and not generally how science proceeds.
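To make the razor a bit more concrete, here’s a minimal sketch (my own, and only one of many ways to operationalize the idea) in which two models fit the same data about equally well and a complexity penalty breaks the tie in favor of the simpler one:

```python
import numpy as np

# Two models fit the data almost equally well; a complexity penalty (BIC here,
# purely as an illustration) plays the role of Occam's razor.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)   # data generated by a simple "law"

def bic(degree: int) -> float:
    coefs = np.polyfit(x, y, degree)               # fit a polynomial of this degree
    resid = y - np.polyval(coefs, x)
    n, k = x.size, degree + 1
    return n * np.log(np.mean(resid**2)) + k * np.log(n)  # fit term + complexity term

# Lower is better; the penalty typically favors the simpler (linear) model here.
print(bic(1), bic(5))
```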

With all that out of the way, what’s my problem with Noah’s post?  Two things:

1)  Demand is not phlogiston

For those not in the know, phlogiston was a hypothetical substance supposed to be contained in combustible bodies and released as fire.  The theory was rendered obsolete by the discovery of oxygen and the oxygen theory of combustion.

Basically what Noah is saying here is that maybe demand, like phlogiston, is a hypothetical piece of a theory, and that piece may be unnecessary.   Now, science certainly does produce phlogiston-like theories from time to time; these theories tend to be the result of trying to tweak systemic models:   you have a theory of elements (at the time of phlogiston, a sort of proto-elemental atomic theory) and a substance (fire) which you can’t explain.  So you add an element to your model to explain the substance.

The first thing to point out is that demand is a reductionist phenomenon in the strictest sense.   The smallest unit of a macroeconomy (the atom, if you will) is the transaction.  But a single transaction has a well-defined demand:  how much the buyer is willing to trade for the item being transacted.   So the neoclassicals are the claimants here:  they’re saying that there is an emergent phenomenon in which demand becomes irrelevant for the macroeconomy.   They are using an updated version of  Say’s Law to argue that demand goes away, not that it never existed–that would be crazy.

Show me the evidence that it doesn’t exist, then we can talk.   Yes, that’s hard.   Tough… you’re the one making an outlandish claim, now live with it.

The second thing to notice is that phlogiston isn’t even phlogiston as Noah means it… rather, phlogiston was a perfectly reasonable and testable scientific hypothesis, the study of which led to our understanding of oxidation.

2)  You don’t need sticky prices to get demand curves

You don’t need sticky prices to get aggregate demand; rather, sticky prices are the simplest (in modeling terms) way to get rid of Say’s Law while otherwise keeping the market-clearing assumption intact.  Now, market clearing is not necessarily a good assumption, but it is an even more standard one than sticky prices.

Of course, no microeconomist worth half his or her salt would ever think market clearing is necessary, because market clearing doesn’t always happen in the real world (look around).  Store shelves are rarely bare, there are usually empty tables (or people waiting in line) at restaurants, and some people pay outlandish prices for tickets to sporting events from scalpers even as some seats go unfilled.   You can talk all you want about how sticky prices are a bad assumption, but the real problem here is that it’s silly that macroeconomists insist on market clearing.

This is a long-winded way of saying that anything which breaks Say’s Law can substitute for the sticky-price assumption: 1) nominal debt not indexed to inflation, 2) demand for financial assets, or 3) non-stationarity and Knightian uncertainty.   I’m sure I’m missing some other possibilities.

These are all “reductionist” explanations and once again, that’s my point.   It is the neoclassicist demand-deniers who are flipping the script here and insisting on a systemic explanation for why demand should disappear in the aggregate.

I can go on, but this post is already getting too long.  For my take on AS/AD in particular, see this.  I think that answers Noah’s implicit objection.

Scientific Welfare Theory

May 30, 2014

Steve Waldman has a good post up on welfare economics.   That’s a topic I wrote about recently, and I agree with almost everything he writes; in fact, I’ve dug into these specific issues in previous posts.  I do have two complaints to make, though.

First, I can’t agree with this paragraph:

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

So long as we understand utility as a ranking of actions, not as a measure of welfare, the only reasonably scientific approach is to use ordinal utility.   Intensity is not a property of utility–nor should it be!–but it absolutely is related to welfare.   In other words, this is Waldman making the mistake he’s accusing others of making.   There are several reasons for this:

  1. Science is only interested in things which are measurable, and “intensity of preference” is not measurable (advances in neuroeconomics aside)…
  2. At least, it is not measurable in the absence of a smoothly divisible alternative good, i.e. I might be able to measure “willingness to pay” in terms of money if I can set up an experiment where participants are forced to reveal how much money they’d be willing to trade.   Then you just measure utility in terms of money.   That’s a consistent theory of utility intensity.
  3. The problem is that the theory in (2) can be duplicated as an ordinal theory just by recognizing that the bundle which the agent is bidding on is x=(1,m-p), where m is the money she started with, p is the revealed willingness to pay and ‘1’ represents getting the good.   So the bid, p, solves u(1,m-p)>u(0,m).   With that, I can order these two vectors (see the sketch below this list).
  4. More succinctly, the economist can expect to observe “buy” or “don’t buy” at p, which is a binary relation; intensity is at best bounded by p.
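Here’s the sketch I promised: a tiny numerical version (my own construction, with a made-up utility function) of points (3) and (4). All the “willingness to pay” experiment ever reveals is the ordinal comparison u(1,m-p)>u(0,m) at each offered price:

```python
import numpy as np

# The agent holds m units of money and is offered the good at various prices.
# All we can observe is "buy" or "don't buy"; the revealed willingness to pay
# is just the largest price at which buying is (ordinally) preferred.
def u(has_good: int, money: float) -> float:
    return 2.0 * has_good + np.log(money)   # any utility over bundles works here

m = 10.0
prices = np.linspace(0.1, 9.9, 99)
buys = [p for p in prices if u(1, m - p) > u(0, m)]
print(round(max(buys), 1))                   # ~8.6 for this particular u and m
```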

On the other hand, attributing some value to the psychological satisfaction of consuming something–such as attributing an intensity to preference–is the very goal of welfare economics.   Yes, that’s hard, but it’s hard precisely because that psychological satisfaction isn’t observable to the economist/scientist.

My second issue with the post is this:

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it can not or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of a precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

Let’s ignore the deep issues with ordinal vs. cardinal utility for the moment (I’ll return to them shortly) and start with the social planner (SP)–remember from my previous post that the social planner is a thought experiment!–who wants to maximize welfare.  The social planner’s objective function, of course, depends on the values we assign to the social planner (as an aside, I have research which, I think, cleverly gets around this problem).  That is, welfare can take any form depending only on the biases of the researcher and what he/she can get away with.   Let’s think about that…

Suppose the SP’s welfare maximization problem takes the basic form W(u1,u2,…,uN)–for simplicity let’s assume that I’ve ordered the population so that wealth is ordered w1>w2>…>wN–and let’s only consider small perturbations in allocation space (i.e. person 1 gives a slice of bread to person N, slightly decreasing u1 and increasing uN).  The question for the moment is whether this is consistent with welfare maximization.  The answer is that it almost certainly is.

Why?  Because for a small disturbance W(.) is approximately linear; all smooth functions are approximately linear with respect to small disturbances.   So W(.) looks like a weighted sum W ≈ a1.u1 + a2.u2 + … + aN.uN around the feasible allocation x = (x1, x2, …, xN), for changes small enough not to affect potential complications like incentives or aggregate production.   I claim that moving that epsilon from x1 to xN must increase W as long as a1 is not too much larger than aN.  This is just because the utilities ui are concave, so a small decrease in the rich agent’s allocation doesn’t decrease W much, but a small increase in the poor agent’s allocation increases W a lot (to see this, just write the new welfare as W’ = W – (a1.du1 – aN.duN), where du1 is the decrease in u1 and duN is the increase in uN).   Remember that this is in complete generality.
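To put a number on that argument, here’s a small sketch (mine, with made-up wealth levels and log utility standing in for “any concave utility”):

```python
import numpy as np

# Local approximation of the planner's problem: W is a weighted sum of
# individual utilities, utilities are concave in wealth, and we move a tiny
# amount eps from the richest person to the poorest.
w = np.array([100.0, 20.0, 5.0, 1.0])   # wealth, ordered w1 > w2 > ... > wN
a = np.array([1.0, 1.0, 1.0, 1.0])      # the planner's local weights a_i

def W(wealth):
    return float(a @ np.log(wealth))    # weighted sum of concave utilities

eps = 0.01
w_new = w.copy()
w_new[0] -= eps                         # a sliver taken from the richest...
w_new[-1] += eps                        # ...and handed to the poorest
print(W(w_new) > W(w))                  # True: du_N is far larger than du_1
```

Unless a1 is roughly a hundred times aN (the ratio of the two marginal utilities in this example), the transfer raises W.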

Oh!   I hear you saying… you’re forgetting about the ordinal vs cardinal objection!   Unscientific!

Not quite.   I do have to make some modifications, though.  First, recall that the SP is a thought experiment and it’s perfectly reasonable for a thought experiment to have a telescope directed into the souls of agents–so long as we make our assumptions clear.   Second, I can get around this problem in practice as well.   Instead of u1(x1), u2(x2)…uN(xN), suppose the SP takes as given u1(w(x1; p)), u2(w(x2;p)),…uN(w(xN;p)), where w(x;p) is a function which imputes wealth from the market value, p, of the consumption bundle x.

Now, the only function of utility in the SP’s decision problem is to account for the marginal utility of wealth.    That can be accounted for simply by making W(w1,w2,…,wN) concave in all its arguments.   But that’s just the same problem as I had above with utility!   In other words, as long as we remember to keep track of concavity, wealth is a proxy for utility as far as a social planner is concerned.   Using wealth as a proxy for utility this way gets me around the ordinal/cardinal objection.

Now, as soon as I specify the form W takes, I move back out of the realm of science and back into the realm of values, but the point is that there are conclusions that we can draw without any other assumptions.

  1. There is a strong bias towards redistribution from the rich to the poor in any social welfare problem, ordinal utility or no.
  2. How much redistribution, though, is basically never a scientific problem… it depends on the form W takes, which depends on the values assigned to the SP.
  3. The only concave welfare function which does not support redistribution is the linear welfare function: i.e. W = b1.w1 + b2.w2 + …. + bN.wN.   But notice that this is indifferent to distribution!

In effect, this linear welfare function is the one which most conservative economists are implicitly using.   For example, it is literally the only welfare function in which the opening up of a Harberger triangle would be a decisive issue in public policy.   Yet–you might have noticed if you followed my discussion above–the linear social welfare function, even for small changes, can only be a true representation of the SP’s problem if the marginal utility of wealth is constant, when it should be diminishing.

That’s the problem.

Who said markets have to clear?

May 19, 2014

Steve Randy Waldman has a good post which is nevertheless completely wrong.   I’m normally a fan of interfluidity, so I feel like I have to respond.

The basic argument, coming mostly from macro- and international-economics types, is that microeconomics involves just as many silly, unrealistic assumptions as macroeconomics… unless we take our lessons from macro!

This is an argument that I’ve seen come up several times online now, but so far every example people want to make involves the simplifications that non-micro specialists make about microeconomics, not the actual study of micro as it is done by micro specialists.

Waldman’s post is a great example of this.   He goes on at length to explain that consumer surplus and welfare are not the same things.   Who knew!   Of course, I forgot to mention consumer surplus specifically in that post (I knew I forgot a few measures of efficiency, but so it goes).    This is supposed to be his case against market clearing.   Hmmm.   See, the problem with that is that “market clearing” is not an assumption of microeconomics.   Oh sure, there are microeconomic models in which market clearing is assumed.   In fact there’s a name for the class of models assuming market clearing: general equilibrium (all markets clear at endogenous prices).   That’s just the microeconomic model used in the micro-foundations of macroeconomics.   It’s not the fault of the microeconomists that you macro types are using a model you don’t like.

He then goes on to rant about the problems with using willingness-to-pay as a measure of surplus.   To which I ask: who’s using willingness to pay?   In an intermediate-level microeconomics class, your prof should have told you that willingness to pay is not well defined unless the utility function is quasi-linear (a very special functional form!).   As an aside, the demand curve also requires quasi-linearity.
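As a quick sketch of what that warning means (my own toy example, not Waldman’s), willingness to pay is only a stable, well-defined number under quasi-linear utility; add wealth effects and the “same” consumer’s WTP for the same good moves with her money holdings:

```python
import numpy as np

# Revealed willingness to pay: the largest price p at which buying,
# u(1, m - p), beats not buying, u(0, m).
def wtp(u, m):
    prices = np.linspace(0.0, 0.999, 2000) * m
    return max(p for p in prices if u(1, m - p) > u(0, m))

quasi_linear = lambda good, money: 3.0 * good + money             # no wealth effects
with_wealth_fx = lambda good, money: 3.0 * good + np.log(money)   # wealth effects

for m in (10.0, 100.0):
    print(round(wtp(quasi_linear, m), 1), round(wtp(with_wealth_fx, m), 1))
# quasi-linear WTP stays at ~3.0 regardless of m; the other WTP scales with m
```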

Don’t get me wrong… there really are understudied problems in micro.   “Dynamic efficiency,” which Waldman mentions in passing, happens to be one–there really isn’t any neutral measure of dynamic efficiency: how to trade production today for production tomorrow is inherently bound up in the problem of choice, and we know that the time-separable, constant-discount-factor utility which we use for the purpose doesn’t actually work (it can’t explain lifetime savings data, for example).   It’s just that no one knows what to do about that, though this is an active area of microeconomic research.   Weirdly, this isn’t the issue that is animating Waldman.

So we in micro have our problems, but these are never the problems we’re criticized for.   Instead Waldman (and others) criticize us for the dumb simplifying assumptions macroeconomists make and then blame us for the resulting jumble.

Too much Efficiency

May 14, 2014

Recently, I wrote about the confusion, common among economists, between optimality and efficiency.   Of course, everyone with an economics education knows the difference, but the point I made then (and now) is that there is a tendency to muddle the distinction in practice.  Today I want to talk about (at least part of) the reason.

To illustrate the issue, let me ask a question:  Is gift exchange (as opposed to market exchange) efficient, or is it inefficient?   This is an important question since anthropologists tell us that gift exchange is a common mechanism for primitive economies and also because certain political groups favor gift exchange over market exchange as a kind of means to escape inequalities (as an aside, I don’t understand how anyone could think that gift exchange would possibly decrease inequality, rather than make it much much worse… those without social networks would be shut out entirely!  But I digress).

The typical answer that you might hear from an economist is “of course gift exchange is inefficient!”  I agree, that’s probably correct.   I’m also reasonably sure that gift exchange is efficient.   I believe that both of those statements are correct.  So what’s going on here?   How can gift exchange be both efficient and inefficient?   Easy.   There are at least two very different notions of efficiency involved.

In fact, here, off the top of my head, are some of the notions which economists sometimes refer to as “efficient” (although I make no claim that this list is exhaustive):

  1. Pareto Efficiency:  no one can be made better off without making some else worse off.
  2. Optimality:  the preferred outcome of a hypothetical social planner.
  3. Allocative Efficiency:  only the proper balance of goods is traded (i.e. marginal benefit = p).
  4. Production Efficiency:  the proper balance and number of goods are produced (i.e. p = marginal cost at the production frontier).
  5. Cyclical Efficiency:  my own term… a catchall for everything from Okun gaps to sub-optimal inflation targets.
  6. Informational Efficiency:  all available information is used.
  7. Dynamic Efficiency:  the time-path of consumption is not strictly dominated by another (i.e. compared to a time-separable, constant discount utility).

So, back to my example: gift exchange.   In which sense is gift exchange inefficient?   That’s easy.  The inefficiency is informational and allocative.  That is to say that people end up with the wrong goods and they end up with the wrong goods in part because there is information available in the population which is not being utilized (namely, the fact that I know what I want, but you probably don’t).   This is the standard answer, and it’s very intuitive for the average non-economist.

So in which sense is gift exchange efficient?   It turns out that gift exchange is Pareto efficient.   Don’t believe me?   Think that’s impossible since everyone supposedly has the wrong allocations in a gift exchange economy?   Ah ha!   The thing is, Pareto efficiency is evaluated at constant allocation.   So, let’s evaluate the problem at constant allocation: person i has allocation x_i.   In case ME, x_i is arrived at through pure market exchange.   In case GE, x_i is arrived at through pure gift exchange.   So here’s the issue:  is u_i(x_i) higher in case ME or in case GE?   The right answer is GE, so everyone is made better off in the gift exchange counter-factual.   Frankly, I think that anyone who’s ever gotten the gift they wanted ought to know that such gifts are cherished and experimental studies have backed that observation up.

We can always ask why this is the case.  Personally, I think it’s as simple as noting that reciprocity is a more powerful human emotion than greed and unlike the other notions of efficiency, Pareto efficiency depends on that sort of thing.

Of course, the fact that gift exchange is allocatively inefficient and informationally inefficient means that it’s not a very good mechanism for running an economy.   Economists-as-engineers really ought to care about such things, and we do!  Still, it’s a reminder that we should always be careful to keep in mind which notion of efficiency we are talking about.

Efficiency, Optimality and Values

May 4, 2014

For the record, I see this post as a continuation of, and (yet another) response to, the Sargent/Smith/Krugman/House debate over Sargent’s equity-efficiency assertion, which I’ve commented on before.   The latest round of posts related to the topic has this debate trending in what I think is an odd direction, putting me in the awkward position of defending a concept, Pareto efficiency, which I’d be more comfortable criticizing.

My proximate purpose is to respond to Simon Wren-Lewis’s new post, which I think illustrates the problem–this is, I think, one of the biggest confusions among economists about our own subject (not that Wren-Lewis is necessarily confused here, but he’s at least bringing up the problem).   The key graf:

Why is there this emphasis on only looking at Pareto improvements? I think you would have to work quite hard to argue that it was intrinsic to economic theory – it would be, and is, quite possible to do economics without it. (Many economists use social welfare functions.) But one thing that is intrinsic to economic theory is the diminishing marginal utility of consumption. Couple that with the idea of representative agents that macro uses all the time (who share the same preferences), and you have a natural bias towards equality. Focusing just on Pareto improvements neutralises that possibility. Now I mention this not to imply that the emphasis put on Pareto improvements in textbooks and elsewhere is a right wing plot – I do not know enough to argue that. But it should make those (mainstream or heterodox) who believe that economics is inherently conservative pause for thought.    

The problem is that Pareto efficiency and optimality are not the same thing and cannot (or should not) be used interchangeably.  In fairness, when I’m being sloppy, I do the same thing; but it’s important to remind ourselves why this is a mistake.

So to remind ourselves, what is optimality and what is efficiency?

Optimality is the solution to a kind of thought experiment; the answer to the question of what a benevolent, god-like social planner would do if it had complete control of the economy (or “constrained optimal” if the social planner must work under constraints).   The advantage of the approach is that it produces unambiguous outcomes (often a single point in allocation-space).   The disadvantage is that the planner’s problem is by definition not values-neutral.   Why do I say it’s not values-neutral?  Because you need to define the planner’s objective function (i.e. the social welfare function), and the social welfare function defines the trade-offs the social planner is willing to make, for example, when balancing equity and efficiency.   “All I care about is Bill Gates’ wealth” is an acceptable, if odd, social welfare function, as is “complete equity at all costs”.   The general case is somewhere in between these two.

Efficiency means that there are no unexploited gains (no one can be made better off except at the cost of making another worse off):  contra Wren-Lewis, I want to argue that this is very much a values-neutral idea.   To see why, consider this little factoid: regardless of the social welfare function you choose, the solution to every planner’s problem is Pareto efficient.  The converse is not necessarily true (Pareto efficiency is a necessary, not a sufficient, condition of optimality… as an aside, this is where the confusion–I think–comes from, since economists often refer to the planner’s solution as “efficient”).   So here’s the thing: for every point in the Pareto set there is a social planner who would choose that point as the optimum (or you might say there’s a set of values which corresponds to each point in the Pareto set).   That’s the sense in which Pareto efficiency is values-neutral.
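To make the factoid concrete, here’s a toy check (my own construction; the square-root utilities and the two planners are arbitrary choices) that planners with very different values both land in the Pareto set:

```python
import numpy as np
from itertools import product

# Allocations (x1, x2) of a fixed endowment between two agents; waste is
# allowed, so not every feasible allocation is Pareto efficient.
total = 10
feasible = [(x1, x2) for x1, x2 in product(range(total + 1), repeat=2)
            if x1 + x2 <= total]

def utilities(x):
    return (np.sqrt(x[0]), np.sqrt(x[1]))     # concave, increasing utilities

def dominated(x):
    ux = utilities(x)
    return any(all(a >= b for a, b in zip(utilities(y), ux)) and utilities(y) != ux
               for y in feasible)

pareto_set = [x for x in feasible if not dominated(x)]

# Two planners with very different values: an even-handed utilitarian and one
# who cares three times as much about agent 1.
planners = [lambda x: sum(utilities(x)),
            lambda x: 3 * utilities(x)[0] + utilities(x)[1]]
for swf in planners:
    best = max(feasible, key=swf)
    print(best, best in pareto_set)            # different optima, both efficient
```

The two planners pick different points, (5, 5) and (9, 1), but both sit in the Pareto set; that’s the values-neutrality of efficiency, and also why efficiency alone can’t adjudicate between them.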

What Wren-Lewis is arguing about is a slightly different issue.   Is the search for Pareto improvements also values-neutral?  I think most economists would say ‘yes’.   After all, no social planner would be any worse than indifferent to a Pareto improvement (a valid social welfare function is weakly increasing in the well-being of every individual agent in the economy).

Does that actually make a Pareto improvement values-neutral, however?   No, of course not (this is what I think Wren-Lewis has in mind, but I’m only guessing).   A Pareto improvement shifts the outcome in allocation-space, but as a general matter a Pareto improvement “picks a direction”, and different social planners will disagree that it is the correct direction to take.   Some social planners would even prefer to take away from some agents to give to others.   To put it more simply, if you keep exploiting Pareto inefficiencies randomly until you reach an efficient outcome, is the result optimal?   The answer is that, with probability one, it will not be.

I’m not sure if I have any other comments to make… just a reminder to myself and others to be careful regarding efficiency and optimality.   I do suspect that the “confusion” here reflects a preference among some economists for the “core” solution concept of cooperative games… but I need to think about that a bit before I make that argument.  So I’ll leave this post here for now.

Who’s in the echo chamber?

April 29, 2014

Via Krugman, I see this post by Chris House talking about an efficiency-equity trade-off.  House, of course, is just writing from the standard, near-consensus view within economics.   The thing is, though, there is no evidence or theory (not depending on modelling choices) within economics which supports the view that there is necessarily a trade-off.

Let me be clear.  I’m not saying that there is no trade-off, I’m saying that House and Sargent (whose speech to some Berkeley undergrads started this whole blog debate) are making a claim which, while commonly believed by many economists, has no other justification.

Sargent’s speech lists 12 principles of economics that everyone ought to know; the principle in question is his assertion that:

There are tradeoffs between equality and efficiency

Again, I’m not saying this is wrong, I’m saying there is no justification.

So, what is House’s case?   Basically this:

The truth is that if we want to really attack the problem of income inequality (promote equality and help the poor) then we are going to have to take stuff away from richer people and channel it to poorer people. This kind of action will most likely have consequences for markets and these consequences will be unsavory.

Taking stuff away from the rich and giving it to the poor equals unsavory consequences… and you can justify this generally, without invoking a model-specific result, right Chris?

I’ve written about this before, but this particular argument is one that I perhaps didn’t adequately deal with, so here goes.   House is saying that correcting for a… let’s call it a maldistribution… will require taxes and transfers.   Taxes and transfers have efficiency costs, ergo an equality-efficiency tradeoff.   QED.

That’s not how it works, Chris.

The way it actually works in economics, when we want to study efficiency, is that we imagine a god-like social planner and ask ourselves “what would the social planner do?”… so what would a social planner view as a maldistribution of wealth?   (I’m presuming that utility is weakly increasing in wealth, not income, btw.)  You can view this in several ways, but the simplest intuition is just this:  GDP, which House is implicitly using as a measure of well-being (although it’s nothing of the sort), is “additive”, but, all else equal, social welfare is “multiplicative” (as a result of the concavity of utility).    Maximizing a sum (or equivalently an arithmetic mean) would leave a social planner indifferent to distribution… there is no maldistribution in House’s world.   Social welfare, on the other hand, is maximized at the point of equality (think of maximizing a geometric mean).
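A tiny numerical sketch of that distinction (my own numbers; the geometric mean stands in for any strictly concave social welfare function):

```python
import numpy as np

# Two distributions of the same total wealth across four people.
unequal = np.array([97.0, 1.0, 1.0, 1.0])
equal = np.array([25.0, 25.0, 25.0, 25.0])

for dist in (unequal, equal):
    arithmetic = dist.mean()                    # the "GDP" view: identical for both
    geometric = np.exp(np.log(dist).mean())     # a concave welfare view
    print(arithmetic, round(geometric, 2))
# 25.0 3.14  versus  25.0 25.0: the sum can't see the maldistribution, welfare can
```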

The reason that there would be any “optimal” inequality at all, then, is that there is an informational rent associated with figuring out which factors are most productive and encouraging those factors to be active.   This is the heart of Mirrlees’ optimal taxation result.   So House has things backward: a sufficiently god-like social planner would make sure that everyone has equal wealth, ceteris paribus, and any deviation from that result has to be justified on informational grounds.   Inequality can only be justified as constrained efficient, not to be confused with efficient.

More than that, though, it’s just not clear that taxes must necessarily cause inefficiency.  It’s not at all difficult to tell a story in which wealth taxes, for example, can encourage capital formation by, say, encouraging complementary human capital to accumulate or by encouraging productive capital over “frivolous” capital (by that I mean things like McMansions).   Taxing productive capital may increase its after-tax cost, but the redistribution it funds can also increase the value of its production stream.

So, are efficiency and equality free of any tradeoff?   Not necessarily, and certainly that’s not what I’m saying.    No, the point is that Sargent’s “principle” is not some immutable law of nature, but a model-dependent best guess.   You might say that the sign of the tradeoff is ambiguous in theory… and it is at this point that I should mention that the only empirical evidence I know of which directly tests the sign of that tradeoff is those IMF studies that suggest that equality and efficiency move together.

So, Chris, if you’re reading this, I leave you with the following wise words:

Talking in an echo chamber can be fun but public intellectuals like [House and Sargent] have a greater responsibility to self-censor than most because they have large audiences. They have a responsibility to the public and also a responsibility to their… readers who take their statements to heart

Just sayin’…