Archive for the ‘teaching’ Category

A Classical Refutation of the Glazier’s Fallacy

July 25, 2014

John Quiggin has a good post up on the Glazier’s Fallacy (which I’ve more commonly heard referred to as the broken-window fallacy).

Here’s the original argument from Henry Hazlitt:

A young hoodlum, say, heaves a brick through the window of a baker’s shop. The shopkeeper runs out furious, but the boy is gone. A crowd gathers, and begins to stare with quiet satisfaction at the gaping hole in the window and the shattered glass over the bread and pies. After a while the crowd feels the need for philosophic reflection. And several of its members are almost certain to remind each other or the baker that, after all, the misfortune has its bright side. It will make business for some glazier. As they begin to think of this they elaborate upon it. How much does a new plate glass window cost? Fifty dollars? That will be quite a sum. After all, if windows were never broken, what would happen to the glass business? Then, of course, the thing is endless. The glazier will have $50 more to spend with other merchants, and these in turn will have $50 more to spend with still other merchants, and so ad infinitum. The smashed window will go on providing money and employment in ever-widening circles. The logical conclusion from all this would be, if the crowd drew it, that the little hoodlum who threw the brick, far from being a public menace, was a public benefactor.
Now let us take another look. The crowd is at least right in its first conclusion. This little act of vandalism will in the first instance mean more business for some glazier. The glazier will be no more unhappy to learn of the incident than an undertaker to learn of a death. But the shopkeeper will be out $50 that he was planning to spend for a new suit. Because he has had to replace a window, he will have to go without the suit (or some equivalent need or luxury). Instead of having a window and $50 he now has merely a window. Or, as he was planning to buy the suit that very afternoon, instead of having both a window and a suit he must be content with the window and no suit. If we think of him as a part of the community, the community has lost a new suit that might otherwise have come into being, and is just that much poorer.
The glazier’s gain of business, in short, is merely the tailor’s loss of business. No new “employment” has been added. The people in the crowd were thinking only of two parties to the transaction, the baker and the glazier. They had forgotten the potential third party involved, the tailor. They forgot him precisely because he will not now enter the scene. They will see the new window in the next day or two. They will never see the extra suit, precisely because it will never be made. They see only what is immediately visible to the eye.

Quiggin’s response is basically Keynesian (which is fine by me):

Suppose that the glazier, having been out of work for some time, has worn out his clothes. Having fixed the window and been paid, he may take his $50 and buy a new suit. To make the story stop here, we’ll suppose that the tailor is a miser (a vice traditionally associated with the clothing industry, as with Silas Marner), and puts the money under his mattress. So, in this version of the story, the glazier and the tailor are both paid, and the social product is increased by a new window and a new suit.

What if the window had not been broken? Under the assumptions made so far, the shopkeeper would buy a new suit for $50, the tailor would hoard the money and the glazier would remain unemployed. The shopkeeper is better off, since (before the window was broken) he preferred a new suit to a new window. On the other hand, the glazier is worse off, since he gets no work and no suit. For society as a whole, both output and employment have increased.

So, the seeming refutation of the glazier’s fallacy falls apart on closer examination. On the one hand, Hazlitt uses language that implies the existence of unemployment. On the other hand, he is implicitly assuming that private and social opportunity cost are the same. The Second Lesson tells us that this won’t be true in general if the economy is in recession.

It’s a good response, but this argument isn’t going to move anyone who’s already inclined to dislike Keynes.  Instead, I think it’s better to deconstruct Hazlitt’s argument from a classical perspective.   As I see it, the problem with the classical economists is that they treat income as exogenous.  The most important thing Keynes showed, however, is that income is endogenous: it is determined by the level of spending.   Once that point is understood, it’s clear that Hazlitt’s scenario is a very special case even within the classical tradition.

So, let me change Hazlitt’s thought experiment slightly.

First, suppose it is not the shop window which has been smashed, but a window at the shopkeeper’s home, and suppose further that the shopkeeper doesn’t like to hang out in his now-drafty house.  Technically, we say that the window is a complement to the home-leisure activities the shopkeeper likes to engage in when his home is intact.

In this case, the relative opportunity cost of working is lower (his alternative to working is home leisure in a now-drafty house), and so on the margin standard theory implies that he will substitute work for leisure.   Longer hours mean that the shopkeeper’s income is higher, and we can again talk about his propensity to spend that extra income on goods other than home leisure.

The rest of the story is the same, but in my scenario it is only the labor-leisure tradeoff, rather than unemployment, which does the work.
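
To see the margin at work, here is a minimal sketch, assuming (purely for illustration) log utility over consumption and leisure, with a parameter theta indexing the quality of home leisure; none of these functional forms or numbers come from Hazlitt or Quiggin:

```python
# A minimal sketch of the labor-leisure margin in my scenario.  The log
# utility and all numbers are illustrative assumptions.  Utility over
# consumption c and leisure l is u = log(c) + theta*log(l), where theta
# indexes the quality of home leisure; a drafty house lowers theta.
T, wage = 16.0, 10.0              # hours in the day, hourly wage

def hours_worked(theta):
    # max over l of log(wage*(T - l)) + theta*log(l)
    # first-order condition gives l* = theta*T/(1 + theta)
    leisure = theta * T / (1 + theta)
    return T - leisure

for label, theta in [("window intact", 1.0), ("window broken (drafty)", 0.5)]:
    h = hours_worked(theta)
    print(f"{label}: works {h:.1f} hours, earns ${wage * h:.0f}")
# Breaking the window lowers the value of home leisure, so hours worked
# and income rise: the substitution along the labor-leisure margin does
# all the work, with no unemployment needed.
```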


Scientific Welfare Theory

May 30, 2014

Steve Waldman has a good post up on welfare economics.   That’s a topic I wrote about recently, and I agree with almost everything he writes; in fact, I’ve delved into these specific issues in previous posts.  I do have two complaints to make, though.

First, I can’t agree with this paragraph:

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

So long as we understand utility as a ranking of actions, not as a measure of welfare, the only reasonably scientific approach is to use ordinal utility.   Intensity absolutely is not a property of utility–nor should it be!–but it is absolutely related to welfare.   In other words, this is Waldman making the mistake he’s accusing others of making.   There are several reasons for this:

  1. Science is only interested in things which are measurable, and “intensity of preference” is not measurable (advances in neuroeconomics aside)…
  2. At least, it is not measurable in the absence of a smoothly divisible alternative (i.e. a numeraire like money): I might be able to measure “willingness to pay” in terms of money if I can set up an experiment where participants are forced to reveal how much money they’d be willing to trade.   Then, you just measure utility in terms of money.   That’s a consistent theory of utility intensity.
  3. The problem is that the theory in (2) can be duplicated as an ordinal theory just by recognizing that the bundle the agent is bidding on is x=(1,m-p), where m is the money she started with, p is the revealed willingness to pay and ‘1’ represents getting the good.   So any acceptable bid p satisfies u(1,m-p)>u(0,m), with the maximum willingness to pay solving it with equality.   With that, I can order these two vectors (see the sketch after this list).
  4. More succinctly, the economist can expect to observe “buy” or “don’t buy” at p, which is a binary relationship and intensity is at best bounded by p.
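
To make points (2)-(4) concrete, here is a minimal sketch, assuming an arbitrary ordinal utility function of my own invention: the “cardinal” willingness-to-pay experiment and the purely ordinal buy/don’t-buy observations contain exactly the same information.

```python
# A sketch of points (2)-(4).  The utility function below is an
# arbitrary assumption for illustration; nothing hinges on its form.
def u(good, money):
    # ordinal utility over the bundle (good in {0,1}, money left over)
    return 2.0 * good + money ** 0.5

m = 100.0  # money the agent starts with

# The "cardinal" experiment: find the p at which the agent is
# indifferent, u(1, m - p) = u(0, m), by bisection.
lo, hi = 0.0, m
for _ in range(60):
    p = (lo + hi) / 2
    if u(1, m - p) > u(0, m):
        lo = p
    else:
        hi = p
print(f"revealed willingness to pay: {p:.2f}")

# The ordinal duplication: all the economist ever observes is "buy" or
# "don't buy", i.e. the binary comparison u(1, m - price) > u(0, m).
for price in (10.0, 30.0, 50.0):
    choice = "buy" if u(1, m - price) > u(0, m) else "don't buy"
    print(f"at p = {price:.0f}: {choice}")
```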

On the other hand, attributing some value to the psychological satisfaction of consuming something–such as attributing an intensity to preference–is the very goal of welfare economics.   Yes, that’s hard, but it’s hard precisely because that psychological satisfaction isn’t observable to the economist/scientist.

My second issue with the post is this:

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it can not or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of a precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

Let’s ignore the deep issues with ordinal vs. cardinal utility for the moment (I’ll return to them shortly) and start with the social planner (SP)–remember from my previous post that the social planner is a thought experiment!–who wants to maximize welfare.  The social planner’s objective function, of course, depends on the values we assign to the social planner (as an aside, I have research which, I think, cleverly gets around this problem).  That is, welfare can take any form depending only on the biases of the researcher and what he/she can get away with.   Let’s think about that…

Suppose the SP’s welfare maximization problem takes the basic form W(u1,u2,…,uN)–for simplicity, let’s assume that I’ve ordered the population so that wealth is ordered w1>w2>…>wN–and let’s only consider small perturbations in allocation space (e.g. agent 1 gives a slice of bread to agent N, slightly decreasing u1 and increasing uN).  The question for the moment is whether this is consistent with welfare maximization.  The answer is that it almost certainly is.

Why?  Because for a small disturbance W(.) is approximately linear–all functions are approximately linear with respect to small disturbances.   So W(.) looks like a weighted sum W = a1.u1 + a2.u2 + … + aN.uN, approximately, around any feasible allocation x = (x1, x2, …, xN) whose change is small enough not to affect potential complications like incentives or aggregate production.   I claim that moving that epsilon from x1 to xN must increase W as long as a1 is not too much larger than aN.   This is just because the utilities ui are concave, so a small decrease in the rich agent’s allocation doesn’t decrease W much, but a small increase in the poor agent’s allocation increases W a lot (to see this, just write the new W as W’ = W – (a1.du1 – aN.duN), where du1 is the decrease in u1 and duN is the increase in uN).   Remember that this is in complete generality.
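
Here is a numerical version of that local argument; the log utility and the specific numbers are my own assumptions for illustration, since the general claim only needs concavity:

```python
import math

# Planner weights a_i = 1 for everyone; u(w) = log(w) is concave.
wealth = [100.0, 20.0, 5.0]       # w1 > w2 > ... > wN
eps = 0.5                         # the "slice of bread"

def W(w):
    return sum(math.log(wi) for wi in w)   # W = sum_i a_i * u_i with a_i = 1

before = W(wealth)
wealth[0] -= eps                  # take eps from the richest (du1 is small)
wealth[-1] += eps                 # give eps to the poorest (duN is large)
after = W(wealth)
print(f"W before: {before:.4f},  W after: {after:.4f}")
# Because u is concave, du1 < duN and so W' = W - (a1*du1 - aN*duN) > W.
```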

Oh!   I hear you saying… you’re forgetting about the ordinal vs cardinal objection!   Unscientific!

Not quite.   I do have to make some modifications, though.  First, recall that the SP is a thought experiment and it’s perfectly reasonable for a thought experiment to have a telescope directed into the souls of agents–so long as we make our assumptions clear.   Second, I can get around this problem in practice as well.   Instead of u1(x1), u2(x2)…uN(xN), suppose the SP takes as given u1(w(x1; p)), u2(w(x2;p)),…uN(w(xN;p)), where w(x;p) is a function which imputes wealth from the market value, p, of the consumption bundle x.

Now, the only function of utility in the SP’s decision problem is to account for the marginal utility of wealth.    That can be accounted for simply by making W(w1,w2,…,wN) concave in all its arguments.   But that’s just the same problem as I had above with utility!   In other words, as long as we remember to keep track of concavity, wealth is a proxy for utility as far as the social planner is concerned.   Using wealth as a proxy for utility this way gets me around the ordinal/cardinal objection.

Now, as soon as I specify the form W takes, I move back out of the realm of science and back into the realm of values, but the point is that there are conclusions that we can draw without any other assumptions.

  1. There is a strong bias towards redistribution from the rich to the poor in any social welfare problem, ordinal utility or no.
  2. How much redistribution, though, is basically never a scientific problem… it depends on the form W takes, which depends on the values assigned to the SP.
  3. The only concave welfare function which does not support redistribution is the linear welfare function, i.e. W = b1.w1 + b2.w2 + … + bN.wN.   But notice that even this welfare function is merely indifferent to distribution!

In effect, this linear welfare function is the one which most conservative economists are implicitly using.   For example, it is literally the only welfare function in which the opening up of a Harberger triangle would be a decisive issue in public policy.   Yet–you might have noticed if you followed my discussion above–the linear social welfare function, even for small changes, can only be a true representation of the SP’s problem if the marginal utility of wealth is constant, when it should be diminishing.

That’s the problem.

Too much Efficiency

May 14, 2014

Recently, I wrote about the confusion, common among economists, between optimality and efficiency.   Of course, everyone with an economics education knows the difference, but the point I made then (and now) is that there is a tendency to muddle the distinction in practice.  Today I want to talk about (at least part of) the reason.

To illustrate the issue, let me ask a question:  Is gift exchange (as opposed to market exchange) efficient, or is it inefficient?   This is an important question, since anthropologists tell us that gift exchange is a common mechanism in primitive economies, and also because certain political groups favor gift exchange over market exchange as a means to escape inequalities (as an aside, I don’t understand how anyone could think that gift exchange would decrease inequality rather than make it much, much worse… those without social networks would be shut out entirely!  But I digress).

The typical answer you might hear from an economist is “of course gift exchange is inefficient!”  I agree–that’s probably correct.   I’m also reasonably sure that gift exchange is efficient.   I believe both of those statements are correct.  So what’s going on here?   How can gift exchange be both efficient and inefficient?   Easy.   There are at least two very different notions of efficiency involved.

In fact, here, off the top of my head, are some of the notions which economists sometimes refer to as “efficient” (although I make no claim that this list is exhaustive):

  1. Pareto Efficiency:  no one can be made better off without making someone else worse off.
  2. Optimality:  the preferred outcome of a hypothetical social planner.
  3. Allocative Efficiency:  only the proper balance of goods is traded (i.e. marginal benefit = p).
  4. Production Efficiency:  the proper balance and number of goods are produced (i.e. p = marginal cost at the production frontier).
  5. Cyclical Efficiency:  my own term… a catchall for everything from Okun gaps to sub-optimal inflation targets.
  6. Informational Efficiency:  all available information is used.
  7. Dynamic Efficiency:  the time-path of consumption is not strictly dominated by another (i.e. compared to a time-separable, constant discount utility).

So, back to my example: gift exchange.   In which sense is gift exchange inefficient?   That’s easy.  The inefficiency is informational and allocative.  That is to say that people end up with the wrong goods and they end up with the wrong goods in part because there is information available in the population which is not being utilized (namely, the fact that I know what I want, but you probably don’t).   This is the standard answer, and it’s very intuitive for the average non-economist.

So in which sense is gift exchange efficient?   It turns out that gift exchange is Pareto efficient.   Don’t believe me?   Think that’s impossible, since everyone supposedly has the wrong allocations in a gift exchange economy?   Ah ha!   The thing is, Pareto efficiency is evaluated at constant allocation.   So, let’s evaluate the problem at constant allocation: person i has allocation x_i.   In case ME, x_i is arrived at through pure market exchange.   In case GE, x_i is arrived at through pure gift exchange.   So here’s the issue:  is u_i(x_i) higher in case ME or in case GE?   The right answer is GE, so everyone is made better off in the gift-exchange counterfactual.   Frankly, I think anyone who’s ever gotten the gift they wanted ought to know that such gifts are cherished, and experimental studies have backed that observation up.
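
To see what “evaluated at constant allocation” means, here is a toy sketch; the “cherish” bonus under gift exchange is my own illustrative assumption, standing in for the psychology discussed below:

```python
# Hold each person's bundle x_i FIXED and compare u_i across regimes.
def u(x, regime):
    material = sum(x)                         # value of the bundle itself
    bonus = 1.0 if regime == "GE" else 0.0    # cherished gifts (assumed)
    return material + bonus

allocations = {"person 1": (3.0, 1.0), "person 2": (2.0, 2.0)}
for person, x in allocations.items():
    print(f"{person}: u under ME = {u(x, 'ME')}, u under GE = {u(x, 'GE')}")
# At the same allocation, everyone is (weakly) better off under GE, so
# there is no Pareto improvement available -- even though the allocation
# itself may be allocatively and informationally inefficient.
```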

We can always ask why this is the case.  Personally, I think it’s as simple as noting that reciprocity is a more powerful human emotion than greed, and unlike the other notions of efficiency, Pareto efficiency depends on that sort of thing.

Of course, the fact that gift exchange is allocatively inefficient and informationally inefficient means that it’s not a very good mechanism for running an economy.   Economists-as-engineers really ought to care about such things, and we do!  Still, it’s a reminder that we should always be careful to keep in mind which notion of efficiency we are talking about.

The Game Theory Explanation for Labeling of Terror Groups

The new right-wing meme seems to be that the Obama administration–specifically Hillary Clinton as Secretary of State–was too slow to label the group Boko Haram a terror organization.   I was just watching Michele Bachmann on CNN dismiss the argument, so this is unlikely to be the next “Benghazi”.  Still, just in case, let’s ask ourselves whether it makes sense to label groups terror organizations early (early as in before clear acts of terror).

The question is whether a group engaged in extremist rhetoric, or in clear acts of insurrection (i.e. acts not clearly directed at civilians for the purpose of instilling terror in a population or sub-population), ought to be labeled a terrorist organization by an outside party not directly involved in the conflict (such as the US with respect to the conflict in Nigeria).

All that matters here is that some people will view the US as a neutral party, and some subset of those who do might be swayed by the US’s designation of the Group (I don’t want to keep referring to them as Boko Haram because I want my argument to be more general than that).

Under these conditions this is how I see the problem:

  1. The Group has to weigh the costs and benefits of its actions… the benefits are tough to quantify, since they involve the Group’s views on its goals and the probability of success in moving toward those goals.  The costs are much easier to quantify, though:  public opinion, changes in funding, and political support or changes in the military situation.
  2. The point I want to make is that if the US labels a group a “terror organization”, that will specifically affect the Group’s calculus on the cost side of the cost-benefit calculation.   Specifically, a group so labeled almost certainly has fewer legitimate sources of funding, and its public support may suffer.

With these two assumptions it is clear to me that labeling such a group, then, can be  a kind of self-fulfilling prophecy.

In a backward-looking sense, the Group–so labeled–is certainly worse off.   Legitimate funding sources may dry up, since many outsiders who sympathize with the Group’s goals may hesitate to associate themselves with terrorists.  More than that, the US and Europe have used legal sanctions to shut down terror funding networks.

In a forward-looking sense, though, the Group is now much more likely to commit acts of terror.   After all, if the Group is weighing a tactic (say, kidnapping school-girls to sell into slavery) then that Group will view its costs for this action as lower since some of the penalties for the action have already been applied:  funding has already been constricted, at least some public opinion has already turned.

This is always and everywhere the problem with administering the punishment before the crime: the crime itself becomes costless.   This is a consequence of the one-shot deviation principle–if the punishment phase of a repeated game is coming regardless of one’s own actions, then the best response is to play the best response to that punishment, which is the stage game’s Nash equilibrium.   So the only equilibrium of the game is the repeated stage-Nash outcome.    For those unfamiliar with repeated games, the stage-Nash outcome is the worst possible outcome, with no possibility of cooperation.  That is, the game becomes a prisoner’s dilemma, over and over again.
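
Here is a sketch of that logic with standard (and purely illustrative) prisoner’s-dilemma payoffs: grim-trigger cooperation survives only when the punishment is conditional on the crime.

```python
# One-shot deviation logic with standard PD payoffs (numbers assumed).
# My stage payoff: 2 if both cooperate, 3 if I defect on a cooperator,
# 0 if I'm defected on, 1 if both defect.
R, T, S, P = 2.0, 3.0, 0.0, 1.0
delta = 0.9                        # discount factor

def pv(flow):
    return flow / (1 - delta)      # present value of a constant stream

# Conditional punishment (grim trigger): defect once, be punished forever.
cooperate = pv(R)
deviate = T + delta * pv(P)
print("punishment only if I defect:",
      "cooperate" if cooperate >= deviate else "defect")

# Unconditional punishment (the penalty arrives regardless): the
# continuation value is delta*pv(P) either way, so only today's payoff
# matters, and defecting (T > R) is the best response.
cooperate_u = R + delta * pv(P)
deviate_u = T + delta * pv(P)
print("punished regardless:",
      "cooperate" if cooperate_u >= deviate_u else "defect")
```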

It’s always tempting to think that preemptive penalties yesterday could have stopped today’s tragedies, but the under-appreciated cost of preemptive actions is the risk of causing those very tragedies.


The Consumption Model of Inequality

With all the talk about inequality recently, I thought it was time for me to lay out my model of the political dynamics around inequality.   So let’s forget briefly about IMF studies and Piketty, and simply ask ourselves how we can use the machinery of economics to understand the political cleavages inequality engenders.   As an aside, while I’ve never seen this model in the literature or in anyone’s course notes, I’d nevertheless be shocked if I’m the first to think in these terms… I just don’t know whom to credit with the idea (I suspect the idea is so simple and obvious that few bother to go through the details).

The basic idea is that I’m going to view equality as something that makes agents more satisfied, in the sense that measurements of inequality, such as the Gini coefficient, enter into each agent’s utility directly.   So, if there’s a vector of “normal” goods, x_i for each agent i, and G is the Gini coefficient, then agent i has utility U_i(x_i,G), where dU_i/dG < 0 (inequality is a bad).  I’m implicitly viewing this as a static model, but it would be a simple matter to include time.

So, the economics here stems from the simple fact that the level of inequality is shared by all agents–that is, equality is a pure public good (non-rival and non-excludable).    Beyond that insight, there’s only one other thing we need to know: how wealth is redistributed to reduce inequality.   You can use a simple mapping, G’ = R(T,G), where T is the total tax raised for redistribution and G’ < G (this would make the problem a standard public-goods problem, which is good enough to account for half the story).

Or, to be more realistic… if w is the vector of each agent’s wealth (w_i… for simplicity, arrange i so that w_i < w_j for i < j, so that w effectively traces out the Lorenz curve, and let W be aggregate wealth), then the Gini coefficient is G = (2/(N*W)) * Sum_i i*w_i – (N+1)/N.   Then a valid redistribution maps w’ = R(w) such that the properties (i) W’ = W, (ii) w_i < w_j ==> w’_i < w’_j and (iii) G’ < G all hold.   Graphically, this means that R maps the Lorenz curve to a (weakly) higher Lorenz curve while keeping total wealth constant.
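
As a sketch, here is that Gini formula together with one valid redistribution R; the flat tax with equal rebates below is an arbitrary illustrative choice of R, not the model’s prediction:

```python
# Discrete Gini for ascending wealth, plus one valid redistribution R.
def gini(w):
    # w sorted ascending; G = (2/(N*W)) * sum_i i*w_i - (N+1)/N
    N, total = len(w), sum(w)
    return 2.0 * sum((i + 1) * wi for i, wi in enumerate(w)) / (N * total) \
        - (N + 1) / N

def R(w, tau=0.2):
    # tax everyone at rate tau, rebate the proceeds as equal lump sums
    N, total = len(w), sum(w)
    rebate = tau * total / N
    return [(1 - tau) * wi + rebate for wi in w]

w = [1.0, 2.0, 4.0, 8.0, 16.0]     # ascending: w_i < w_j for i < j
w2 = R(w)
print("(i)   W conserved:", abs(sum(w2) - sum(w)) < 1e-9)
print("(ii)  order preserved:", w2 == sorted(w2))
print(f"(iii) G falls: {gini(w):.3f} -> {gini(w2):.3f}")
```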

Public-goods models are not trivial to solve, although we know in general that inequality will be “overproduced” in the simple version of this model (with G’ = R(T,G)).

In the more complex version (with w’ = R(w)… effectively, this version models the technology for reducing inequality directly), there are two effects.   The under-provision of public goods is still an issue here… but only for those rich enough to pay net taxes (those for whom t_i = w_i – w’_i > 0… put these agents into a new set, I).   The set I is a function of how much redistribution is actually done, but it is only agents in I for whom the public-goods game is non-trivial (those outside I, by definition, receive lower levels of inequality without paying net taxes… a win-win situation for them).   Generally (but not universally), as I expands there are more resources available to redistribute and fewer people to redistribute towards.   The marginal agent i (the richest agent not paying net tax) by definition balances the benefit of reducing inequality against her own tax bill under a more aggressive redistribution.

So here’s what’s interesting… this model (intuitively simple as it is… though difficult to solve) exhibits tipping points.   Don’t believe me?   Consider this thought experiment: increase W by adding to w_i only for i in I.   Given the right initial setup, nothing will happen until G rises enough that the set I expands… basically, at some point those not in I will demand to contribute (on net) to reducing inequality.

Of course, the details depend on R and on how R is chosen (simple majority voting?), but the framework for thinking about the politics of inequality is here.   Note that if Piketty or the IMF are correct, then this model will understate the degree to which equality is under-provided.

Efficiency, Optimality and Values

May 4, 2014

For the record, I see this post as a continuation of (and yet another response to) the Sargent/Smith/Krugman/House debate over Sargent’s equity-efficiency assertion, which I’ve commented on before.   The latest round of posts on the topic has this debate trending in what I think is an odd direction, putting me in the awkward position of defending a concept, Pareto efficiency, which I’d be more comfortable criticizing.

My proximate purpose is to respond to Simon Wren-Lewis’s new post, which I think illustrates one of the biggest confusions among economists about our own subject (not that Wren-Lewis is necessarily confused here, but he’s at least bringing up the problem).   The key graf:

Why is there this emphasis on only looking at Pareto improvements? I think you would have to work quite hard to argue that it was intrinsic to economic theory – it would be, and is, quite possible to do economics without it. (Many economists use social welfare functions.) But one thing that is intrinsic to economic theory is the diminishing marginal utility of consumption. Couple that with the idea of representative agents that macro uses all the time (who share the same preferences), and you have a natural bias towards equality. Focusing just on Pareto improvements neutralises that possibility. Now I mention this not to imply that the emphasis put on Pareto improvements in textbooks and elsewhere is a right wing plot – I do not know enough to argue that. But it should make those (mainstream or heterodox) who believe that economics is inherently conservative pause for thought.    

The problem is that Pareto efficiency and optimality are not the same thing and cannot (or should not) be used interchangeably.  In fairness, when I’m being sloppy, I do the same thing; but it’s important to remind ourselves why this is a mistake.

So to remind ourselves, what is optimality and what is efficiency?

Optimality is the solution to a kind of thought experiment; the answer to the question of what a benevolent, god-like social planner would do if it had complete control of the economy (or “constrained optimal” if the social planner must work under constraints).   The advantage of the approach is that it produces unambiguous outcomes (often a single point in allocation space).   The disadvantage is that the planner’s problem is by definition not values-neutral.   Why do I say it’s not values-neutral?  Because you need to define the planner’s objective function (i.e. the social welfare function), and the social welfare function defines the trade-offs the social planner is willing to make, for example, when balancing equity against efficiency.   “All I care about is Bill Gates’ wealth” is an acceptable, if odd, social welfare function, as is “complete equity at all costs”.   The general case is somewhere in between these two.

Efficiency means that there are no unexploited gains (no one can be made better off except at the cost of making another worse off):  contra Wren-Lewis, I want to argue that this is very much a values-neutral idea.   To see why, consider this little factoid: regardless of the social welfare function you choose, the solution to every planner’s problem is Pareto efficient.  The converse is not necessarily true–Pareto efficiency is a necessary, not a sufficient, condition for optimality (as an aside, this is where the confusion, I think, comes from, since economists often refer to the planner’s solution as “efficient”).   So here’s the thing: for every point in the Pareto set there is a social planner who would choose that point as the optimum (or, you might say, there’s a set of values which corresponds to each point in the Pareto set).   That’s the sense in which Pareto efficiency is values-neutral.
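
Here is a sketch of that claim on an assumed, purely illustrative utility possibility frontier: every efficient point is the optimum of some weighted planner.

```python
import numpy as np

# Utility possibility frontier u1^2 + u2^2 = 1 (an assumption chosen for
# convenience).  Every point on it maximizes W = a1*u1 + a2*u2 for SOME
# choice of planner weights (a1, a2) -- i.e. for some set of values.
theta = np.linspace(0.0, np.pi / 2, 201)
frontier = np.column_stack([np.cos(theta), np.sin(theta)])  # the Pareto set

for a1, a2 in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    W = a1 * frontier[:, 0] + a2 * frontier[:, 1]
    u1, u2 = frontier[np.argmax(W)]
    print(f"weights ({a1}, {a2}) pick u1 = {u1:.2f}, u2 = {u2:.2f}")
# Different values (weights) select different efficient points; no point
# in the Pareto set is ruled out by efficiency alone.
```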

What Wren-Lewis is arguing about is a slightly different issue.   Is the search for Pareto improvements also values-neutral?  I think most economists would say ‘yes’.   After all, no social planner would be any worse than indifferent to a Pareto improvement (a valid social welfare function is weakly increasing in the well-being of every individual agent in the economy).

Does that actually make a Pareto improvement values-neutral, however?   No, of course not (this is what I think Wren-Lewis has in mind, but I’m only guessing).   A Pareto improvement shifts the outcome in allocation space, but as a general matter a Pareto improvement “picks a direction”–different social planners will disagree over whether it is the correct direction to take.   Some social planners would even prefer to take from some agents to give to others.   To put it more simply: if you keep exploiting Pareto inefficiencies randomly until you reach an efficient outcome, is the result optimal?   The answer is that, with probability one, it will not be.

I’m not sure if I have any other comments to make… just a reminder to myself and others to be careful regarding efficiency and optimality.   I do suspect that the “confusion” here reflects a preference among some economists for the “core” solution concept of cooperative games… but I need to think about that a bit before I make that argument.  So I’ll leave this post here for now.

Greed and the Visible Hand

April 7, 2014

Great article by John Paul Rollert (JPR) over at the Atlantic today–basically a history of the “greed is good” meme.  Go read it.

I want to make a related point, which I’ll call the visible hand.

Adam Smith didn’t believe that greed is good in itself and he didn’t believe that the unregulated market will produce the best outcomes.  What he believed is that (h/t JPR)

…the moral logic of free markets was a law of unintended consequences.

More than that:

We get what we want in a complex commercial society—indeed, we get to have a complex commercial society—not because we seize things outright, but because we pursue them in a way that acknowledges legal and cultural constraints.

An economy is a complex web of trading, with every single trade between at least two consenting people.  Each trade is good for the participants, but why should that trade be good for society?

It is the laws and customs of society which guarantee that each transaction at least does no harm.  A mugging is a transaction, of a sort… a private transaction of safety for money.  Once the victim has a gun pointed at her head, handing over money to have that gun taken away makes her better off, just as the mugger is better off.  But a law which makes this sort of transaction impossible, or unprofitable, could indeed make her better off overall, if it means that she never has a gun pointed at her head in the first place.

That’s the visible hand: the laws, norms and customs–that is, the infrastructure of the economy–which make sure our economic interactions lead to broad prosperity, specialization and free exchange, rather than exploitation and coercion.

It’s a visible hand because someone, somewhere has written those laws and paid the police and courts to enforce them.  Someone has raised the taxes that make that possible.  Someone has put a lot of thought into how the economy works on its deepest levels.

At its deepest levels an economy requires trust.  A commitment to the common good.  Greed will destroy all of our prosperity if we let it.   How do I know that?  Consider the game board below:

[Figure missing: payoff matrix for the two-player trade game described below]

This game is a simple schematic representing the payoff to each of the two players in the game of trade, when it is possible for them to try to cheat each other (‘v’ is the value of the good to the buyer, ‘c’ the cost of the good and ‘p’ its price; assume v>p>c so that trade is optimal).   Notice that the unique Nash equilibrium of this game is that there should be no trade.
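
Since the original game-board image is missing, here is a plausible reconstruction of the payoffs in code; the exact matrix is my inference from the description above, not the original figure:

```python
# A plausible reconstruction of the trade game (inferred from the text):
# the seller chooses to deliver the good or keep it, the buyer to pay
# or steal.  Parameters satisfy v > p > c > 0.
v, p, c = 10.0, 6.0, 3.0          # value to buyer, price, seller's cost

#   (buyer action, seller action): (buyer payoff, seller payoff)
payoffs = {
    ("pay",   "deliver"): (v - p, p - c),   # honest trade
    ("pay",   "keep"):    (-p,    p),       # seller cheats
    ("steal", "deliver"): (v,     -c),      # buyer cheats
    ("steal", "keep"):    (0.0,   0.0),     # no trade
}
for profile, payoff in payoffs.items():
    print(profile, "->", payoff)
# "steal" strictly dominates "pay" (v > v - p and 0 > -p) and "keep"
# strictly dominates "deliver" (p > p - c and 0 > -c), so the unique
# Nash equilibrium is (steal, keep): no trade, even though v > p > c
# means trade would make both sides better off.
```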

Trade happens.  Why?  It’s not because we trade on a turntable that guarantees the money changes hands at the exact instant the good does.  It’s because if I steal your things, the police will come and throw me in jail.   If I try to pay the police to let me go, my sentence gets longer instead.   Trade happens because someone somewhere is not acting like homo economicus.

Ayn Rand’s vision of markets free from interference, which glorify and reward the most exceptional, is both bad economics and bad morality.  In Rand’s perfect economy the powerful must (figuratively speaking) stab the rest of us in the back for the wealth we hold in our pockets.   There is no other way, since there could be no trade.   All for the glory of the powerful!  The rest of us view greed as a vice because we wisely seek the prosperity that only cooperation can bring.

The invisible hand needs the visible hand to be there, in the background, making sure that markets really do make us better off.