Archive for the ‘Micro’ Category

Noah Smith catches the Demand-Denialist Bug

I like Noah Smith, but his scientific-skepticism meme-immune system appears to be very weak.  The latest case in point is Noah's post defending the use of Search Theory in Macroeconomics against John Quiggin, who rightly points out that Search Theory is incapable of explaining cyclical unemployment.  I'm not really going to add to what Quiggin wrote; instead I'm only interested in Noah's response.  Before I go on, I should link to Noah's excellent critique of Kartik Athreya's Big Ideas in Macroeconomics, to which Quiggin is responding.

Conceding that Search Theory doesn’t explain all the employment patterns, Noah goes on to criticize “demand” explanations:

This is a simple answer… Economists are used to thinking in terms of supply and demand, so the AD-AS model comes naturally to mind… so we look at the economy and say “it’s a demand problem”.

But on a deeper level, that’s unsatisfying – to me, at least.

…what causes aggregate demand curves to shift?…how does aggregate demand affect unemployment? The usual explanation for this is downward-sticky nominal wages. But why are nominal wages downward-sticky? There are a number of explanations, and again, these differences will have practical consequences.

… is an AD-AS model really a good way of modeling the macroeconomy?… The idea of abandoning the friendly old X of supply-and-demand is scary, I know, but maybe it just isn't the best descriptor of booms and recessions…

… I’m not really satisfied by the practice of putting “demand in the gaps”. If “demand” is not to be just another form of economic phlogiston, we need a consistent, predictive characterization of how it behaves…

Wow, is that a lot of BS jammed into a short space.   Noah is a strong proponent of a more empirical and predictive macroeconomics, which I agree with!  But this post suggests that Noah doesn't understand the other side of the problem: model selection and Occam's razor.

How do you know which model is the correct one?  You can't just say that it's the model that survived empirical tests, because there are an infinite number of possible models at any given time which have survived those tests.   All that data you've collected tells you exactly nothing until you figure out which of that continuum of possible models you should treat as the preferred one.   Occam's razor plays the keystone role in scientific methodology as the selection criterion.   (If you were a philosopher of science you'd spend a lot of time trying to justify Occam's razor… Karl Popper believed it gave the most easily falsified model among the alternatives… but as a practical scientist you can just accept it.)

Now that we've all accepted that Occam's razor must be used to winnow our choice of models, we should spend some time thinking about how to use Occam's razor to do this in practice.   That would require a post in itself, so instead let me just mention one particular criterion I use:  at any given time, who is the claimant?   In science, the burden of proof is always on the claimant, because the claimant's model at any given time is almost always less simple than the accepted model given the field's accepted information set.

As a heuristic, the claimant's model generally does not pass Occam's razor's test until new information is found and added to the field's information set.   It's possible (and does happen) that a heretofore unknown or unnoticed model is simpler than the accepted one, but that's rarer than you might think and not generally how science proceeds.

With all that out of the way, what’s my problem with Noah’s post?  Two things:

1)  Demand is not phlogiston

For those not in the know, phlogiston was a hypothetical substance supposedly released in fire.  The theory was rendered obsolete by the discovery of oxygen and the oxygen theory of combustion.

Basically what Noah is saying here is that maybe demand, like phlogiston, is a hypothetical piece of a theory, and that piece may be unnecessary.   Now science certainly does produce phlogiston-like theories from time to time; these theories tend to be the result of trying to tweak systemic models:   you have a theory of elements (at the time of phlogiston, a sort of proto-elemental atomic theory) and a substance (fire) which you can't explain, so you add an element to your model to explain the substance.

The first thing to point out is that demand is a reductionist phenomenon in the strictest sense.   The smallest unit of a macroeconomy (the atom, if you will) is the transaction.  But a single transaction has a well-defined demand:  how much the buyer is willing to trade for the item being transacted.   So the neoclassicals are the claimants here:  they’re saying that there is an emergent phenomenon in which demand becomes irrelevant for the macroeconomy.   They are using an updated version of  Say’s Law to argue that demand goes away, not that it never existed–that would be crazy.

Show me the evidence that it doesn’t exist, then we can talk.   Yes, that’s hard.   Tough… you’re the one making an outlandish claim, now live with it.

The second thing to notice is that phlogiston isn't even phlogiston as Noah means it… rather, phlogiston was a perfectly reasonable and testable scientific hypothesis, the study of which led to our understanding of oxidation.

2)  You don’t need sticky prices to get demand curves

You don't need sticky prices to get aggregate demand; rather, sticky prices are the simplest (in modeling terms) way to get rid of Say's Law while otherwise keeping the market-clearing assumption intact.  Now market clearing is not necessarily a good assumption, but it is, even more than sticky prices, a standard one.

Of course, no microeconomist worth half his or her salt would ever think market clearing is necessary, because market clearing doesn't always happen in the real world (look around).  Store shelves are rarely bare, restaurants usually have empty tables (or people waiting in line), and some people pay outlandish prices to scalpers for tickets to sporting events even as some seats go unfilled.   You can talk all you want about how sticky prices are a bad assumption, but the real problem here is that it's silly that macroeconomists insist on market clearing.

This is a long-winded way of saying that anything which breaks Say's Law can substitute for the sticky-price assumption: 1) nominal debt not indexed to inflation, 2) demand for financial assets, or 3) non-stationarity and Knightian uncertainty.   I'm sure I'm missing some other possibilities.

These are all “reductionist” explanations and once again, that’s my point.   It is the neoclassicist demand-deniers who are flipping the script here and insisting on a systemic explanation for why demand should disappear in the aggregate.

I can go on, but this post is already getting too long.  For my take on AS/AD in particular, see this.  I think that answers Noah’s implicit objection.

Scientific Welfare Theory

May 30, 2014

Steve Waldman has a good post up on welfare economics.   That's a topic I wrote about recently, and I agree with almost everything he writes; in fact I've dived into these specific issues in previous posts.  I do have two complaints to make, though.

First, I can’t agree with this paragraph:

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

So long as we understand utility as a ranking of actions, not as a measure of welfare, the only reasonably scientific approach is to use ordinal utility.   Intensity is not a property of utility–nor should it be!–but it is absolutely related to welfare.   In other words, this is Waldman making the mistake he's accusing others of making.   There are several reasons for this:

  1. Science is only interested in things which are measurable, and “intensity of preference” is not measurable (advances in neuroeconomics aside)…
  2. At least, it is not measurable in the absence of a smoothly divisible alternative good, i.e. I might be able to measure "willingness to pay" in terms of money if I can set up an experiment where participants are forced to reveal how much money they'd be willing to trade.   Then, you just measure utility in terms of money.   That's a consistent theory of utility intensity.
  3. The problem is that the theory in (2) can be duplicated as an ordinal theory just by recognizing that the bundle which the agent is bidding on is x=(1,m-p), where m is the money she started with, p is the revealed willingness to pay and '1' represents getting the good.   So the bid, p, solves u(1,m-p)>u(0,m).   With that, I can order these two vectors (see the sketch after this list).
  4. More succinctly, the economist can expect to observe “buy” or “don’t buy” at p, which is a binary relationship and intensity is at best bounded by p.
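
Here is a minimal sketch of the point in (3): take any cardinal utility and any strictly increasing transformation of it, and both generate exactly the same buy/don't-buy data at every price, so the observed choices pin down an ordering and nothing more.  The functional forms, starting wealth and price grid below are my own illustrative assumptions.

```python
# Minimal sketch: a cardinal utility and a monotone transformation of it
# imply identical buy/don't-buy behaviour, so choice data cannot identify
# "intensity" of preference.  Functional forms are illustrative assumptions.
import math

def u(good, money):
    # a cardinal utility over (good in {0,1}, money left over)
    return 2.0 * good + math.log(money)

def u_transformed(good, money):
    # a strictly increasing transformation of u: same ordinal content
    return math.exp(u(good, money)) ** 3

m = 10.0  # money the agent starts with

for p in [0.5 * k for k in range(1, 20)]:                    # candidate prices
    buy_original    = u(1, m - p) > u(0, m)                  # decision under u
    buy_transformed = u_transformed(1, m - p) > u_transformed(0, m)
    assert buy_original == buy_transformed                   # identical observable choices
print("Same buy/don't-buy behaviour at every price under both utilities.")
```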

On the other hand, attributing some value to the psychological satisfaction of consuming something–such as attributing an intensity to preference–is the very goal of welfare economics.   Yes, that’s hard, but it’s hard precisely because that psychological satisfaction isn’t observable to the economist/scientist.

My second issue with the post is this:

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it can not or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of a precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

Let's set aside the deep issues with ordinal vs. cardinal utility (I'll return to them in a moment) and start with the social planner (SP)–remember from my previous post that the social planner is a thought experiment!–who wants to maximize welfare.  The social planner's objective function, of course, depends on the values we assign to the social planner (as an aside, I have research which, I think, cleverly gets around this problem).  That is, welfare can take any form depending only on the biases of the researcher and what he/she can get away with.   Let's think about that…

Suppose the SP's welfare maximization problem takes the basic form W(u1,u2,…,uN)–for simplicity let's assume that I've ordered the population so that wealth is ordered w1>w2>…>wN–and let's only consider small perturbations in allocation space (i.e. agent 1 gives a slice of bread to agent N, slightly decreasing u1 and increasing uN).  The question for the moment is whether this is consistent with welfare maximization.  The answer is that it almost certainly is.

Why?  Because for a small disturbance W(.) is approximately linear, since all smooth functions are approximately linear with respect to small disturbances.   So W(.) looks, approximately, like a weighted sum W = a1·u1 + a2·u2 + … + aN·uN around a feasible allocation x = (x1, x2, …, xN) whose change is small enough not to affect potential complications like incentives or aggregate production.   I claim that moving that epsilon from x1 to xN must increase W as long as a1 is not too much larger than aN… this is just because the utilities ui are concave, so a small decrease in the rich agent's allocation doesn't decrease W much, but a small increase in the poor agent's allocation increases W a lot (to see this, just write the new W as W' = W – (a1·du1 – aN·duN), where du1 is the decrease in u1 and duN is the increase in uN).   Remember that this is in complete generality.
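
To make the local-linearity argument concrete, here is a small numerical sketch.  The log utilities, equal welfare weights and wealth numbers are illustrative assumptions, not anything from the post.

```python
# With concave utilities (log, for illustration), moving a small epsilon of
# wealth from the richest agent to the poorest raises a weighted-sum welfare
# function unless the weight on the rich is enormously larger than the weight
# on the poor.
import math

wealth  = [100.0, 40.0, 10.0, 2.0]   # w1 > w2 > ... > wN
weights = [1.0, 1.0, 1.0, 1.0]       # the a_i in the linearized W
eps = 0.01                           # the slice of bread

def W(w):
    return sum(a * math.log(wi) for a, wi in zip(weights, w))

perturbed = wealth[:]
perturbed[0]  -= eps                 # take from the richest...
perturbed[-1] += eps                 # ...give to the poorest

print(f"dW = {W(perturbed) - W(wealth):.6f}")   # positive: welfare rises
# du1 is roughly eps/100 while duN is roughly eps/2, so a1 would have to be
# about 50 times aN before the transfer stopped improving W.
```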

Oh!   I hear you saying… you’re forgetting about the ordinal vs cardinal objection!   Unscientific!

Not quite.   I do have to make some modifications, though.  First, recall that the SP is a thought experiment and it’s perfectly reasonable for a thought experiment to have a telescope directed into the souls of agents–so long as we make our assumptions clear.   Second, I can get around this problem in practice as well.   Instead of u1(x1), u2(x2)…uN(xN), suppose the SP takes as given u1(w(x1; p)), u2(w(x2;p)),…uN(w(xN;p)), where w(x;p) is a function which imputes wealth from the market value, p, of the consumption bundle x.

Now, the only function of utility in the SP's decision problem is to account for the marginal utility of wealth.    That can be accounted for simply by making W(w1,w2,…,wN) concave in all its arguments.   But that's just the same problem as I had above with utility!   In other words, as long as we remember to keep track of concavity, wealth is a proxy for utility as far as a social planner is concerned.   Using wealth as a proxy for utility this way gets me around the ordinal/cardinal objection.

Now, as soon as I specify the form W takes, I move back out of the realm of science and back into the realm of values, but the point is that there are conclusions that we can draw without any other assumptions.

  1. There is a strong bias towards redistribution from the rich to the poor in any social welfare problem, ordinal utility or no.
  2. How much redistribution, though, is basically never a scientific problem… it depends on the form W takes, which depends on the values assigned to the SP.
  3. The only concave welfare function which does not support redistribution is the linear welfare function, i.e. W = b1·w1 + b2·w2 + … + bN·wN.   But notice that this is indifferent to distribution!  (See the sketch below for the contrast.)
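
A minimal numerical sketch of the contrast in (3), with square-root concavity and the wealth numbers as my own illustrative assumptions:

```python
# A strictly concave welfare function over wealth favors a rich-to-poor
# transfer; the linear one (equal weights b_i) is exactly indifferent.
import math

w = [90.0, 9.0, 1.0]
transfer = 1.0
w_after = [w[0] - transfer, w[1], w[2] + transfer]

W_linear  = lambda ws: sum(ws)                          # b_i = 1 for all i
W_concave = lambda ws: sum(math.sqrt(x) for x in ws)    # any strict concavity works

print(W_linear(w_after)  - W_linear(w))    # 0.0 -> indifferent to the transfer
print(W_concave(w_after) - W_concave(w))   # > 0 -> favors the redistribution
```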

In effect, this linear welfare function is the one which most conservative economists are implicitly using.   For example, it is literally the only welfare function in which the opening up of a Harberger triangle would be a decisive issue in public policy.   Yet–you might have noticed if you followed my discussion above–the linear social welfare function, even for small changes, can only be a true representation of the SP's problem if the marginal utility of wealth is constant, when it should be diminishing.

That’s the problem.

Who said markets have to clear?

May 19, 2014

Steve Randy Waldman has a good post which is nevertheless completely wrong.   I’m normally a fan of interfluidity, so I feel like I have to respond.

The basic argument, coming mostly from macro and international economics types, is that microeconomics involves just as many silly unrealistic assumptions as macroeconomics… unless we take our lessons from macro!

This is an argument that I've seen come up several times online now, but so far every example people want to make involves the simplifications that non-micro specialists make about microeconomics, not the actual study of micro as it is done by micro-specialists.

Waldman's post is a great example of this.   He goes on at length to explain that consumer surplus and welfare are not the same things.   Who knew!   Of course, I forgot to mention consumer surplus specifically in that post (I knew I forgot a few measures of efficiency, but so it goes).    This is supposed to be his case against market clearing.   Hmmm.   See, the problem with that is that "market clearing" is not an assumption of microeconomics.   Oh sure, there are microeconomic models in which market clearing is assumed.   In fact there's a name for the class of models assuming market clearing: general equilibrium (all markets clear at endogenous prices).   That's just the microeconomic model used in the micro-foundations of macroeconomics.   It's not the fault of the microeconomists that you macro-types are using a model you don't like.

He then goes on to rant about the problems with using willingness-to-pay as a measure of surplus.   To which I ask: who’s using willingness to pay?   In an intermediate-level microeconomics class, your prof should have told you that willingness to pay is not well defined unless the utility function is quasi-linear (a very special functional form!).   As an aside, the demand curve also requires quasi-linearity.
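
For concreteness, here is a minimal sketch of the quasi-linearity point; the two utility functions, wealth levels and bisection routine are my own illustrative choices.  Willingness to pay (the price that makes the buyer indifferent) is independent of wealth only when utility is quasi-linear in money.

```python
# Willingness to pay under a quasi-linear utility vs. a non-quasi-linear one.
import math

def wtp(u, m):
    """Bisect for the price p with u(1, m - p) == u(0, m)."""
    lo, hi = 0.0, m
    for _ in range(200):
        p = (lo + hi) / 2
        if u(1, m - p) > u(0, m):
            lo = p           # still willing to buy at p: push the price up
        else:
            hi = p
    return (lo + hi) / 2

quasi_linear = lambda x, money: 3.0 * x + money            # u = v(x) + money
non_ql       = lambda x, money: 1.0 * x + math.log(money)  # money enters concavely

for m in (10.0, 100.0, 1000.0):
    print(m, round(wtp(quasi_linear, m), 3), round(wtp(non_ql, m), 3))
# The quasi-linear WTP is 3.0 at every wealth level; the non-quasi-linear WTP
# grows with m, so "surplus measured in dollars" is not pinned down without
# the quasi-linearity assumption.
```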

Don't get me wrong… there really are understudied problems in micro.   "Dynamic efficiency," which Waldman mentions in passing, happens to be one–there really isn't any neutral measure of dynamic efficiency: how to trade production today for production tomorrow is inherently bound up in the problem of choice, and we know that the time-separable, constant-discount-factor utility which we use for the purpose doesn't actually work (it can't explain lifetime savings data, for example).   It's just that no one knows what to do about that, but this is an active area of microeconomic research.   Weirdly, though, this isn't the issue that is animating Waldman.

So we in micro have our problems, but these are never the problems we’re criticized for.   Instead Waldman (and others) criticize us for the dumb simplifying assumptions macroeconomists make and then blame us for the resulting jumble.

Too much Efficiency

May 14, 2014

Recently, I wrote about the confusion, common among economists, between optimality and efficiency.   Of course, everyone with an economics education knows the difference, but the point I made then (and now) is that there is a tendency to muddle the distinction in practice.  Today I want to talk about (at least part of) the reason.

To illustrate the issue, let me ask a question:  is gift exchange (as opposed to market exchange) efficient, or is it inefficient?   This is an important question since anthropologists tell us that gift exchange is a common mechanism in primitive economies, and also because certain political groups favor gift exchange over market exchange as a means to escape inequality (as an aside, I don't understand how anyone could think that gift exchange would possibly decrease inequality rather than make it much, much worse… those without social networks would be shut out entirely!  But I digress).

The typical answer that you might hear from an economist is "of course gift exchange is inefficient!"  I agree, that's probably correct.   I'm also reasonably sure that gift exchange is efficient.   I believe that both those statements are correct. So what's going on here?   How can gift exchange be both efficient and inefficient?   Easy.   There are at least two very different notions of efficiency involved.

In fact, here, off the top of my head, are some of the notions which economists sometimes refer to as "efficient" (although I make no claim that this list is exhaustive):

  1. Pareto Efficiency:  no one can be made better off without making some else worse off.
  2. Optimality:  the preferred outcome of a hypothetical social planner.
  3. Allocative Efficiency:  only the proper balance of goods is traded (i.e. marginal benefit = p).
  4. Production Efficiency:  the proper balance and number of goods are produced (i.e. p = marginal cost at the production frontier).
  5. Cyclical Efficiency:  my own term… a catchall for everything from Okun gaps to sub-optimal inflation targets.
  6. Informational Efficiency:  all available information is used.
  7. Dynamic Efficiency:  the time-path of consumption is not strictly dominated by another (i.e. compared to a time-separable, constant discount utility).

So, back to my example: gift exchange.   In which sense is gift exchange inefficient?   That’s easy.  The inefficiency is informational and allocative.  That is to say that people end up with the wrong goods and they end up with the wrong goods in part because there is information available in the population which is not being utilized (namely, the fact that I know what I want, but you probably don’t).   This is the standard answer, and it’s very intuitive for the average non-economist.

So in which sense is gift exchange efficient?   It turns out that gift exchange is Pareto efficient.   Don’t believe me?   Think that’s impossible since everyone supposedly has the wrong allocations in a gift exchange economy?   Ah ha!   The thing is, Pareto efficiency is evaluated at constant allocation.   So, let’s evaluate the problem at constant allocation: person i has allocation x_i.   In case ME, x_i is arrived at through pure market exchange.   In case GE, x_i is arrived at through pure gift exchange.   So here’s the issue:  is u_i(x_i) higher in case ME or in case GE?   The right answer is GE, so everyone is made better off in the gift exchange counter-factual.   Frankly, I think that anyone who’s ever gotten the gift they wanted ought to know that such gifts are cherished and experimental studies have backed that observation up.

We can always ask why this is the case.  Personally, I think it's as simple as noting that reciprocity is a more powerful human emotion than greed, and that, unlike the other notions of efficiency, Pareto efficiency depends on that sort of thing.

Of course, the fact that gift exchange is allocatively inefficient and informationally inefficient means that it's not a very good mechanism for running an economy.   Economists-as-engineers really ought to care about such things, and we do!  Still, it's a reminder that we should always be careful to keep in mind which notion of efficiency we are talking about.

The Game Theory Explanation for Labeling of Terror Groups

The new right-wing meme seems to be that the Obama administration–specifically Hillary Clinton as Secretary of State–was too slow to label the group Boko Haram a terror organization.   I was just watching Michele Bachmann on CNN dismiss the argument, so this is unlikely to be the next "Benghazi".  Still, just in case, let's ask ourselves if it makes sense to label groups terror organizations early (early as in before clear acts of terror).

The question is whether a group engaged only in extremist rhetoric, or only in clear acts of insurrection (i.e. acts not clearly directed at civilians for the purpose of instilling terror in a population or sub-population), ought to be labeled as terrorist by an outside party not directly involved in the conflict (such as the US with respect to the conflict in Nigeria).

All that matters here is that some people will view the US as a neutral party, and some subset of those who do might be swayed by the US's designation of the Group (I don't want to keep referring to them as Boko Haram because I want my argument to be more general than that).

Under these conditions this is how I see the problem:

  1. The Group has to weigh the costs and benefits of its actions… the benefits are tough to quantify, since they involve the Group's views on its goals and the probability of success in moving toward those goals.  The costs are much easier to quantify, though:  public opinion, changes in funding and political support, or changes in the military situation.
  2. The point I want to make is that if the US labels a group a “terror organization” that will specifically affect the Group’s calculus on the cost side of the cost-benefit calculation.   Specifically, a group so labelled almost certainly has fewer legitimate forms of funding and public support may suffer.

With these two assumptions it is clear to me that labeling such a group, then, can be  a kind of self-fulfilling prophecy.

In a backward-looking sense, the Group–so labelled–is certainly worse off.   Legitimate funding sources may dry up since many outsiders who sympathize with the group’s goals may hesitate to associate themselves with terrorists.  More than that, the US and Europe have used legal sanctions to shut down terror funding networks.

In a forward-looking sense, though, the Group is now much more likely to commit acts of terror.   After all, if the Group is weighing a tactic (say, kidnapping school-girls to sell into slavery) then that Group will view its costs for this action as lower since some of the penalties for the action have already been applied:  funding has already been constricted, at least some public opinion has already turned.

This is always and everywhere the problem with administering the punishment before the crime.   The crime itself becomes costless.   This is a consequence of the one-shot deviation principle: if the punishment phase of a repeated game is coming regardless of one's own actions, then the best response is to play the best response to that punishment, which is the stage game's Nash equilibrium.   So the only equilibrium of the game is the repeated stage-Nash outcome.    For those unfamiliar with repeated games, the stage-Nash outcome is the worst possible outcome, with no possibility of cooperation.  That is, the game becomes a prisoner's dilemma, over and over again.
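
Here is a toy numerical version of that logic in a repeated prisoner's dilemma.  The payoffs and discount factor are standard textbook choices, not anything calibrated to this situation.

```python
# Cooperation is sustainable only because defection *triggers* the punishment.
# If the punishment phase arrives regardless of what you do, the stage-game
# best response (defect) is all that is left.
R, T, P, S = 3.0, 5.0, 1.0, 0.0   # reward, temptation, punishment, sucker payoffs
delta = 0.9                       # discount factor

# Conditional punishment (grim trigger): cooperate forever vs. defect once.
v_cooperate = R / (1 - delta)                 # R + delta*R + delta^2*R + ...
v_deviate   = T + delta * P / (1 - delta)     # T today, punished ever after
print("punishment conditional on deviating:", v_cooperate > v_deviate)   # True -> cooperate

# Unconditional punishment: it starts tomorrow no matter what you do today.
v_coop_doomed   = R + delta * P / (1 - delta)
v_defect_doomed = T + delta * P / (1 - delta)
print("punishment coming regardless:", v_defect_doomed > v_coop_doomed)  # True -> defect
```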

It’s always tempting to think that preemptive penalties yesterday could have stopped today’s tragedies, but the under-appreciated cost of preemptive actions is the risk of causing those very tragedies.


The Consumption Model of Inequality

With all the talk about inequality recently, I thought it was time for me to lay out my model of the political dynamics around inequality.   So let's forget briefly about IMF studies and Piketty and simply ask ourselves how we can use the machinery of economics to understand the political cleavages inequality engenders.   As an aside, while I've never seen this model in the literature or anyone's course notes, I'd nevertheless be shocked if I were the first to think in these terms… I just don't know who to credit with the idea (I think the idea is so simple and obvious that few bother to go through the details).

The basic idea is that I’m going to view equality as something that makes agents more satisfied, in the sense that measurements of inequality, such as the Gini coefficient, enter into the agent’s utility directly.   So, if there’s a vector of “normal” goods, x_i for each i, and G is the Gini coefficient, then agent i has utility U_i(x_i,G), where dU_i/dG < 0 (inequality is a bad).  I’m implicitly viewing this as a static model, but it would be a simple matter to include time.

So, the economics here simply stems from the fact that the level of inequality is shared by all agents–that is, it is a pure public good (non-rival, non-excludable).    Beyond that simple insight, there's only one other thing we need to know, which is how wealth is redistributed to reduce inequality.   You can use a simple mapping, G' = R(T,G), where T is the level of net taxation and G' < G (this would make the problem a standard public good, which is good enough to account for half the problem).

Or, to be more realistic… if w is the vector of each agent's wealth (w_i… for simplicity arrange i so that w_i < w_j for i < j, so that w traces out the Lorenz curve, and let W be aggregate wealth), then the Gini coefficient is given by the standard discrete formula G = 2*Sum_i [ i*w_i ] / (N*W) - (N+1)/N.   Then a valid redistribution maps w' = R(w) such that the properties (i) W' = W, (ii) w_i < w_j  ==>  w'_i < w'_j and (iii) G' < G all hold.   This means, graphically, that R maps the Lorenz curve to a (weakly) higher Lorenz curve keeping total wealth constant.
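
Here is a minimal sketch of that machinery: compute G from a sorted wealth vector with the formula above, apply one illustrative choice of R (a flat tax rebated equally per capita, my assumption rather than anything in the model), and check properties (i)-(iii).

```python
# Gini coefficient plus a check that a flat-tax-with-equal-rebate rule is a
# "valid redistribution": total wealth unchanged, ranks preserved, Gini lower.
def gini(w):
    # w sorted ascending; standard discrete formula
    n, W = len(w), sum(w)
    return 2.0 * sum(i * wi for i, wi in enumerate(w, start=1)) / (n * W) - (n + 1) / n

def R(w, tax_rate=0.2):
    # flat tax, rebated equally per capita (one illustrative redistribution)
    rebate = tax_rate * sum(w) / len(w)
    return [wi * (1 - tax_rate) + rebate for wi in w]

w = sorted([1.0, 2.0, 5.0, 12.0, 80.0])
w_prime = R(w)

assert abs(sum(w_prime) - sum(w)) < 1e-9                  # (i)   W' = W
assert all(a < b for a, b in zip(w_prime, w_prime[1:]))   # (ii)  ranks preserved
assert gini(w_prime) < gini(w)                            # (iii) G' < G
print(round(gini(w), 3), round(gini(w_prime), 3))         # 0.672 -> 0.538
```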

Public goods models are not particularly trivial to solve, although we know in general that inequality will be "overproduced" in the simple version of this model (with G' = R(T,G)).

In the more complex version (with w' = R(w)… effectively this version models the technology for reducing inequality directly), there are two effects.   The under-provision of public goods is still an issue here… but only for those rich enough to pay net taxes (those for whom w_i – w'_i = t_i > 0… put these agents into a new set, I).   The set I is a function of how much redistribution is actually done, but it is only agents in I for whom the public goods game is non-trivial (those outside I, by definition, receive lower levels of inequality without paying net taxes… a win-win situation for them).   Generally (but not universally), as I expands there are more resources available to redistribute and fewer people to redistribute towards.   The marginal agent i (the richest agent not paying net tax) by definition balances the benefit of reducing inequality against her own tax bill from that more aggressive redistribution.

So here's what's interesting… this model (intuitively simple as it is, though difficult to solve) exhibits tipping points.   Don't believe me?   Consider this thought experiment: increase W by adding to w_i only for i in I.   Given the right initial setup, nothing will happen until G rises enough that the set I expands… basically, at some point those not in I will demand to (on net) contribute to reducing inequality.

Of course, the details depend on R and how R is chosen (simple majority voting?), but the framework for thinking about the politics of inequality is here.   Note that if Piketty or the IMF are correct, then this model will understate the degree to which equality is under-provided.

No, there is no trade-off between equality and efficiency

April 21, 2014

Matt Yglesias has a good post up knocking down Tom Sargent’s claim (now circulating the econo-blogosphere, although the speech was in 2007) that “There are trade-offs between equality and efficiency“.

The thing of it is that not only is Sargent wrong here–although the sentiment is common among professional economists–but, more importantly, there really isn't any reason to believe that the claim is right… just some vague sense that proper incentives require paying the most skilled among us more.  So basically, Yglesias is letting Sargent off much too easy.

To show why, I’ll go through all the interpretations of Sargent’s claim one-by-one and explain why each is wrong.    I could do much more than this: one of my thesis projects is directly relevant here, although that work’s not really ready for daylight.

  1. Efficiency requires a particular distribution.   Nope.    In standard theory, the set of Pareto efficient allocations turns out to contain every distribution of wealth/utility among the agents.  One person has 100% of the wealth?   There's an efficient allocation like that.   A different person has 100% of the wealth?  Also one like that.   Complete equality?   Yep, there's one like that, too.  This is always true in any trading situation.   The only thing that causes Pareto inefficiency is market distortions.
  2. Efforts to correct for the distribution result in inefficient allocations.  Nope.  The proof of the Second Welfare Theorem in fact requires redistributing wealth before trading.  Then, after this redistribution is completed, it is shown that any efficient allocation can be attained.   You like perfectly equal, efficient outcomes?   The Second Welfare Theorem says that there is a redistribution which will deliver that efficient outcome.  The precise statement is that the market will support the chosen efficient allocation as a "price quasi-equilibrium with transfers" (from Mas-Colell, if you're curious).  See the sketch after this list.
  3. Dynamic inefficiencies from redistribution?   This is the point that Yglesias is in effect debunking.  So I'll leave that to him and send you back to that post.   I will add to his argument only that wealth is itself a market distortion.  How can I say that?   Well, I'd say go look at my thesis, but that's out (for now)… so instead just think about it in terms of Piketty's point: if the rate of return on capital, r, is greater than the economy's growth rate, g, then it must be the case that wealth (i.e. claims to ownership) explodes in the limit.  That is, one person eventually owns everything.
  4. Countries with unequal wealth grow faster, and do so for a longer time?   No, on both counts.   Don’t ask me, though, just ask the IMF.  Oh snap.   The sign seems to go in the opposite direction.  Ouch.   In fairness, Sargent didn’t know about this line of research which hadn’t been published yet.   But then, maybe that’s why he shouldn’t make strong claims to impressionable college students who go home with the wrong lesson which they then hold tight to for the rest of their lives.
  5. Wealth rewards the exceptional for being awesome.   Heh.  No.   And anyway, economics isn’t a morality play and outcomes aren’t rewards for anything.  If I were Joe Stiglitz I might even argue that the current economy is one in which fortunes are amassed mostly through rent-seeking, anyway.
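
To make points (1) and (2) concrete, here is a minimal sketch: a two-agent, two-good Cobb-Douglas exchange economy whose preferences and endowments are my own illustrative assumptions.  Whatever redistribution of the endowment we impose before trade, the resulting competitive equilibrium equalizes the agents' marginal rates of substitution, i.e. it is Pareto efficient; efficiency by itself pins down no particular distribution.

```python
# Redistribute the endowment any way you like, let the agents trade, and the
# equilibrium is Pareto efficient every time (MRS equalized across agents).
A1, A2 = 0.3, 0.6          # Cobb-Douglas expenditure shares on good x
X_TOT, Y_TOT = 10.0, 10.0  # aggregate endowment of the two goods

def equilibrium(e1x, e1y):
    """Competitive equilibrium when agent 1 owns (e1x, e1y) and agent 2 the rest."""
    e2x, e2y = X_TOT - e1x, Y_TOT - e1y
    p = (A1 * e1y + A2 * e2y) / (X_TOT - A1 * e1x - A2 * e2x)  # price of x in units of y
    m1, m2 = p * e1x + e1y, p * e2x + e2y                      # wealth at those prices
    x1, y1 = A1 * m1 / p, (1 - A1) * m1                        # Cobb-Douglas demands
    x2, y2 = A2 * m2 / p, (1 - A2) * m2
    return (x1, y1), (x2, y2)

def mrs(a, x, y):
    return (a / (1 - a)) * (y / x)

for share in (0.05, 0.25, 0.5, 0.75, 0.95):    # sweep over pre-trade redistributions
    (x1, y1), (x2, y2) = equilibrium(share * X_TOT, share * Y_TOT)
    assert abs(mrs(A1, x1, y1) - mrs(A2, x2, y2)) < 1e-9   # Pareto efficient
    print(f"agent 1's endowment share {share:.2f}: allocations "
          f"({x1:.2f}, {y1:.2f}) and ({x2:.2f}, {y2:.2f})")
```

Every line of the sweep passes the efficiency check, even the very unequal ones; which of those efficient outcomes we prefer is a distributional question, not an efficiency question.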

So, yeah, there’s no support in economic theory for Sargent’s claim.   He’s just saying something that he believes without any theoretical or empirical support.

Now it IS generally true that most taxes will have a dead-weight loss… that people will react to the tax in a way that results in less economic activity.   A tax can distort the market.  Interestingly, though, there are taxes which in theory mimic an efficient "lump sum" tax.   I'm thinking of an idealized consumption tax in particular.
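
To put a number on the dead-weight-loss point, here is a minimal sketch with linear demand and supply; the curves and the tax rate are illustrative assumptions.

```python
# A per-unit tax shrinks the traded quantity and destroys some surplus; a
# lump-sum levy raising the same revenue leaves the quantity untouched.
a, b = 100.0, 1.0   # inverse demand:  P = a - b*Q
c, d = 10.0, 1.0    # inverse supply:  P = c + d*Q
t = 10.0            # per-unit tax

q_star = (a - c) / (b + d)            # quantity with no tax
q_tax  = (a - c - t) / (b + d)        # quantity with the per-unit tax
dwl    = 0.5 * t * (q_star - q_tax)   # the familiar welfare triangle
revenue = t * q_tax

print(f"quantity falls {q_star:.0f} -> {q_tax:.0f}; dead-weight loss = {dwl:.1f}")
print(f"a lump-sum levy of {revenue:.0f} raises the same revenue with zero dead-weight loss")
```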

I would also emphasize that the Second Welfare Theorem's redistribution has the flavor of "wealth" redistribution, not income redistribution.   That's important.   Those are all the caveats that occur to me at the moment.