Brad DeLong tries to make sense of Hayek. At the risk of making an @$$ of myself, I have to respectfully disagree. The way I see it, libertarian-ish types have been trying to “claim” descent from Adam Smith for generations. The problem is that, if Smith is “classical liberalism” incarnate, then libertarians have no more claim to him than the socialists do.
To illustrate why, I’m including a cladogram of the major currents of thought in the philosophy of political economy.
Maybe this is right, maybe it’s wrong. But let’s just assume it’s right for the moment. I want to discuss the branches I’ve labeled 1-4.
- The classical liberalism of Adam Smith. It is “classical” because Smith more-or-less invents the subject as it is now understood. Smith has some libertarian-ish views, or libertarians would not try to claim him, including opposition to monopoly (in his day, entirely government-created) and a belief in the effectiveness of market solutions (“invisible hand”). He also had some non-libertarian views, such as the dangers of private collusion (“[capitalists] seldom meet… but the conversation ends in a conspiracy against the public”) or the benefits of social well-being (“… some principles in [man’s] nature… interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it”).
- Marx. It’s hard for me to look at Marx and not see Adam Smith: look how close they are! The truth is that Marx is best thought of as one branch of the first major split in classical liberalism. I want to get back to that split for (4), but for now just think of socialism as classical liberalism with a heavy emphasis on the well-being of the worker-class. This strain of liberalism has a complicated relationship with government (Marx himself is probably closest to the Left Anarchists).
- As with Marx, the Laissez-Faire branch of liberalism is one of the two major branches leading away from classical liberal thought. In this case, though, there is a heavy emphasis on the well-being of capitalists and a complicated view of monopoly. Unlike some libertarians out there, I tend to view this strain as all but dead as an intellectual force, but there are lines of influence from these ideas to more modern theories.
- Ricardo and the direct intellectual descendants of Smith. Ricardo himself may have leaned a bit right-ish, but I think it’s fair to characterize his view as opposed to the landed aristocracy. As such he embodies both socialist (pro-labor) and laissez-faire (pro-capital) views. Ricardo may have leaned more towards capital politically, but his theory of comparative advantage was built assuming that labor is the only important input.
My takeaway from this is that the most important cleavage in liberal thought has to do with the factor of production that each strain identifies with.
Laissez-Faire thought identifies with the capitalists and quickly incorporates Nietzschean notions of “supermen” to justify the super-wages which the capitalists earn within this otherwise liberal tradition. The incorporation of Nietzsche makes the Laissez-Faire strain the least classical, since Adam Smith himself is (I think, but I could be wrong) borrowing from Kant as his moral philosophy guide.
Socialist thought identifies with labor, but quickly recognizes that labor needs to band together in some way (recognizing a power disparity between owner and worker). Thus, along this strain there is a mighty back and forth over the role of government where the importance of the debate is often obscured by the fact that all these strains agree on the goal of an empowered workforce.
Then, there’s the neoliberal tradition. Direct descendants of Smith, this is basically the mainstream of the economics profession. What separates this tradition is precisely its indifference between labor and capital, which are handled interchangeably. I don’t include every branch here, but I’d place Polanyi along with the institutionalists.
Hayek, obviously, is not here along the neoliberal branch. Instead, it’s hard for me to view the Austrian school as anything other than the last surviving branch of the laissez-faire strain.
Now maybe all this is wrong. Although, all I’m really doing here is classifying schools of thought using a standard clade-like (i.e. evolutionary) approach: the strains of thought which are most alike are the most closely related. But if I am wrong, I’d like to know why. Rearrange the family tree and show me.
For some quick background, the thruster in question works by pumping microwaves into a cavity where they resonate. The cavity itself is shaped or otherwise constructed so that (it is claimed) the photon pressure on one end is greater than the photon pressure on the other end. No microwaves escape, but the differential force applied between the plates is the thrust. And the idea is “impossible”, of course, because the design doesn’t “push” on anything. It’s kind of like when you were a kid and you jumped in a wagon and tried to get it to move by shifting your body weight (didn’t anyone else try this?). It didn’t work then, so why does a slightly fancier version of the same idea seem to work now?
I like this question because it allows me to stretch my too-long-unused physics/mad-scientist mental-muscles.
So, the first candidate explanation I have will be the easiest to test: some of the microwaves are leaking. Contrary to the media reports, propellant-less thruster designs are easy to do. Turn on your laser pointer, and it produces thrust. That thrust, of course, is just caused by the photon recoil–really pretty standard physics. If the container you’re using leaks photons preferentially in one direction you’ll get an anomalous thrust reading. Although it would work, shining lasers in space would make for a rather pathetic engine design.
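As a back-of-the-envelope check on how weak photon thrust is, recoil from radiated power P gives a force F = P/c. The power figure below is purely illustrative, not taken from any of the EmDrive experiments:

```python
# Photon-recoil thrust: any device radiating power P in one direction
# produces F = P / c. Numbers here are illustrative only.
c = 299_792_458.0  # speed of light, m/s

def photon_thrust(power_watts: float) -> float:
    """Thrust in newtons from radiating `power_watts` in one direction."""
    return power_watts / c

# Even a full kilowatt of perfectly collimated leakage is only micronewtons:
thrust = photon_thrust(1_000.0)
assert 3.3e-6 < thrust < 3.4e-6  # roughly 3.3 micronewtons
```

This is why leakage is such a tempting explanation for the small measured thrusts, and why a pure photon rocket makes a pathetic engine.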
This possibility is made more plausible by the fact that the three groups reporting results on this device all find very different results for the thrust produced… something like a factor of 10,000 between the strongest and weakest results. More than that, it’s the least trusted sources claiming the greatest thrust from their engine design. Could it simply be that the Chinese group (who found the largest effect) didn’t check how many photons were leaking?
Having said all that, the second candidate explanation, that is the one which xkcd is lampooning, isn’t as crazy as it sounds. First, it’s always important to remember that “conservation of momentum” doesn’t exist as an immutable law of physics. Instead, Noether’s Theorem tells us that what we call “conservation of momentum” is really the result of a symmetry found in nature. Specifically, symmetry with respect to translation: position x is the same as position x + dx as far as the laws of physics go.
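For readers who want the one-line version, here is a sketch of how translation symmetry yields momentum conservation (this is standard textbook material, nothing specific to this device):

```latex
% If the Lagrangian does not depend on position x, the Euler--Lagrange
% equation conserves the conjugate momentum:
\frac{\partial L}{\partial x} = 0
\quad\Longrightarrow\quad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}}
  = \frac{\partial L}{\partial x} = 0
\quad\Longrightarrow\quad
p \equiv \frac{\partial L}{\partial \dot{x}} = \text{const}.
```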
So break that symmetry: x is the same as x + dx only for certain values of dx. That would mean that on a certain scale (translations smaller than the dx, say) the physics has a preferred dimension. That may sound a little weird, but really it’s not: a ladder has the exact same symmetry, since it’s only invariant to translations which are multiples of the step length. So keep in mind that when people say “pushing against the quantum vacuum”, a step-ladder analogy may be appropriate.
The question is why… exactly how does a specially shaped cavity create a metaphorical step-ladder? Honestly, it’s a little surprising, but it occurs to me that there is a well-known effect which can explain it: the Casimir effect. Basically, if you set up two parallel metal plates, the vacuum energy of the photon field between the plates is different than the vacuum energy of the empty space around the plates. That generates a force between the plates. That effect has been seen in the lab.
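For orientation, the standard result for ideal parallel plates separated by a distance a is an attractive pressure (quoted here just for scale; nothing EmDrive-specific):

```latex
% Casimir pressure between ideal parallel conducting plates, separation a:
P(a) = -\frac{\pi^{2}\hbar c}{240\, a^{4}}
```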
Now, by making the plates asymmetric, the vacuum state is distorted in the neighborhood of each plate slightly differently, so that in equilibrium there is a smoothly varying value of the vacuum energy as you move from one plate to the other. That’s the step-ladder, and it just means that near one plate virtual particles pop up at a slightly higher frequency than at the other. The analog of the ladder-climber, then, is the added microwave energy. Those photons interact with the virtual photons of the vacuum and, with the right configuration, will “push” the Casimir-virtual photons out of their equilibrium spatial distribution–and that’s your thrust.
I think that must be what people have in mind, and I do find the explanation surprisingly plausible. Plausible, but still far-fetched. But it’s also exciting… not just because it’s propellant-free (that’s easy), but also because it’s a closed system. Hell, use this engine to hover and it’s indistinguishable from anti-gravity.
Still, I’m skeptical. Although the above (second) explanation is my own, I’m inclined to think the first (boring) explanation is far more likely.
John Quiggin has a good post up on the Glazier’s Fallacy (which I’ve more commonly heard referred to as the broken-window fallacy).
Here’s the original argument from Henry Hazlitt:
A young hoodlum, say, heaves a brick through the window of a baker’s shop. The shopkeeper runs out furious, but the boy is gone. A crowd gathers, and begins to stare with quiet satisfaction at the gaping hole in the window and the shattered glass over the bread and pies. After a while the crowd feels the need for philosophic reflection. And several of its members are almost certain to remind each other or the baker that, after all, the misfortune has its bright side. It will make business for some glazier. As they begin to think of this they elaborate upon it. How much does a new plate glass window cost? Fifty dollars? That will be quite a sum. After all, if windows were never broken, what would happen to the glass business? Then, of course, the thing is endless. The glazier will have $50 more to spend with other merchants, and these in turn will have $50 more to spend with still other merchants, and so ad infinitum. The smashed window will go on providing money and employment in ever-widening circles. The logical conclusion from all this would be, if the crowd drew it, that the little hoodlum who threw the brick, far from being a public menace, was a public benefactor.
Now let us take another look. The crowd is at least right in its first conclusion. This little act of vandalism will in the first instance mean more business for some glazier. The glazier will be no more unhappy to learn of the incident than an undertaker to learn of a death. But the shopkeeper will be out $50 that he was planning to spend for a new suit. Because he has had to replace a window, he will have to go without the suit (or some equivalent need or luxury). Instead of having a window and $50 he now has merely a window. Or, as he was planning to buy the suit that very afternoon, instead of having both a window and a suit he must be content with the window and no suit. If we think of him as a part of the community, the community has lost a new suit that might otherwise have come into being, and is just that much poorer.
The glazier’s gain of business, in short, is merely the tailor’s loss of business. No new “employment” has been added. The people in the crowd were thinking only of two parties to the transaction, the baker and the glazier. They had forgotten the potential third party involved, the tailor. They forgot him precisely because he will not now enter the scene. They will see the new window in the next day or two. They will never see the extra suit, precisely because it will never be made. They see only what is immediately visible to the eye.
Quiggin’s response is basically Keynesian (which is fine by me):
Suppose that the glazier, having been out of work for some time, has worn out his clothes. Having fixed the window and been paid, he may take his $50 and buy a new suit. To make the story stop here, we’ll suppose that the tailor is a miser (a vice traditionally associated with the clothing industry, as with Silas Marner), and puts the money under his mattress. So, in this version of the story, the glazier and the tailor are both paid, and the social product is increased by a new window and a new suit.
What if the window had not been broken? Under the assumptions made so far, the shopkeeper would buy a new suit for $50, the tailor would hoard the money and the glazier would remain unemployed. The shopkeeper is better off, since (before the window was broken) he preferred a new suit to a new window. On the other hand, the glazier is worse off, since he gets no work and no suit. For society as a whole, both output and employment have increased.
So, the seeming refutation of the glazier’s fallacy falls apart on closer examination. On the one hand, Hazlitt uses language that implies the existence of unemployment. On the other hand, he is implicitly assuming that private and social opportunity cost are the same. The Second Lesson tells us that this won’t be true in general if the economy is in recession.
It’s a good response, but this argument isn’t going to move anyone who’s already inclined to dislike Keynes. Instead, I think it’s better to deconstruct Hazlitt’s argument from a classical perspective. As I see it, the problem with the Classicals is that they view income as exogenous. The most important thing that Keynes showed, however, was that income is endogenous–determined by the level of spending. Once that point is understood, it’s clear that Hazlitt’s scenario is a very special case even in the Classical tradition.
So, let me change Hazlitt’s thought experiment slightly.
First, suppose it is not the shop window which has been smashed, but a window at the shopkeeper’s home and suppose further that the shopkeeper doesn’t like to hang out at his now drafty house. Technically, we say that the window is a complement for home-leisure activities which the shopkeeper likes to engage in when his home is intact.
In this case, the relative opportunity cost of working is lower (his alternative to working is home-leisure activity in a now drafty house), and so, on the margin, standard theory would imply that he substitutes work for leisure. Longer hours working mean that the shopkeeper’s income is higher, and presumably he will have some propensity to spend that extra income on goods-other-than-home-leisure.
The rest of the story is the same, but in my scenario it is only the labor-leisure tradeoff, rather than unemployment, which does the work.
Simon Wren-Lewis has a great post up which relates to a point I’ve wanted to make for a long time… It’s not really a new point, per se (I heard similar points being made with respect to “right-to-work” laws, and there is this from Brad DeLong on a similar issue), but… well… Simon just has a great thought experiment:
Employees are already beset by red tape if they try to improve their working conditions. Now the UK government wants to increase the regulatory burden on them further, by proposing that employee organisations need a majority of all their members to vote for strike action before a strike becomes legal, even though those voting against strike action can still free ride on their colleagues by going to work during any strike and benefiting from any improvement in conditions obtained. Shouldn’t we instead be going back to a free market where employees are able to collectively withhold their labour as they wish?

I doubt if you have ever read a paragraph that applies language in this way. Yet why should laws that apply to employers be regarded as a regulatory burden, but laws that apply to employees are not?
I like Noah Smith, but his scientific-skepticism meme-immune system appears to be very weak. The latest case in point is Noah’s post defending the use of Search Theory in Macroeconomics against John Quiggin who is rightly pointing out that Search Theory is incapable of explaining cyclical unemployment. I’m not really going to add to what Quiggin wrote, instead I’m only interested in Noah’s response. Before I go on, I should link to Noah’s excellent critique of Kartik Arthreya’s Big Ideas in Macroeconomics to which Quiggin is responding.
Conceding that Search Theory doesn’t explain all the employment patterns, Noah goes on to criticize “demand” explanations:
This is a simple answer… Economists are used to thinking in terms of supply and demand, so the AD-AS model comes naturally to mind… so we look at the economy and say “it’s a demand problem”.
But on a deeper level, that’s unsatisfying – to me, at least.
…what causes aggregate demand curves to shift?…how does aggregate demand affect unemployment? The usual explanation for this is downward-sticky nominal wages. But why are nominal wages downward-sticky? There are a number of explanations, and again, these differences will have practical consequences.
… is an AD-AS model really a good way of modeling the macroeconomy?… The idea of abandoning the friendly old X of supply-and-demand is scary, I know, but maybe it just isn’t the best descriptor of booms and recessions…
… I’m not really satisfied by the practice of putting “demand in the gaps”. If “demand” is not to be just another form of economic phlogiston, we need a consistent, predictive characterization of how it behaves…
Wow, is that a lot of BS jammed into a short space. Noah is a strong proponent of a more empirical and predictive macroeconomics, which I agree with! But this post suggests that Noah doesn’t understand the other side of the problem: model selection and Occam’s razor.
How do you know which model is the correct one? You can’t just say that it’s the model that survived empirical tests, because there are an infinite number of possible models at any given time which have survived those tests. All that data you’ve collected tells you exactly nothing until you figure out which of that continuum of possible models you should treat as the preferred one. Occam’s razor plays the keystone role in scientific methodology as the selection criterion. (If you were a philosopher of science you’d spend a lot of time trying to justify Occam’s razor… Karl Popper believed it gives the most easily falsified model among the alternatives… but as a practical scientist you can just accept it.)
Now that we’ve all accepted that Occam’s razor must be used to winnow our choice of models, we should spend some time thinking about how to use it in practice. That would require a post in itself, so instead let me just mention one particular criterion I use: at any given time, who is the claimant? In science, the burden of proof is always on the claimant, because the claimant’s model at any given time is almost always less simple than the accepted model, given the field’s accepted information set.
As a heuristic, the claimant’s model generally does not pass Occam’s razor’s test until new information is found and added to the field’s information set. It’s possible (and does happen) that a heretofore unknown or unnoticed model is simpler than the accepted one, but that’s rarer than you might think and not generally how science proceeds.
With all that out of the way, what’s my problem with Noah’s post? Two things:
1) Demand is not phlogiston
For those not in the know, phlogiston was a hypothetical substance which made up fire. The theory was rendered obsolete by the discovery of oxygen and the oxygen theory of combustion.
Basically, what Noah is saying here is that maybe demand, like phlogiston, is a hypothetical piece of a theory, and that piece may be unnecessary. Now, science certainly does produce phlogiston-like theories from time to time; these theories tend to be the result of trying to tweak systemic models: you have a theory of elements (at the time of phlogiston, a sort of proto-elemental atomic theory) and a substance (fire) which you can’t explain. So you add an element to your model to explain the substance.
The first thing to point out is that demand is a reductionist phenomenon in the strictest sense. The smallest unit of a macroeconomy (the atom, if you will) is the transaction. But a single transaction has a well-defined demand: how much the buyer is willing to trade for the item being transacted. So the neoclassicals are the claimants here: they’re saying that there is an emergent phenomenon in which demand becomes irrelevant for the macroeconomy. They are using an updated version of Say’s Law to argue that demand goes away, not that it never existed–that would be crazy.
Show me the evidence that it doesn’t exist, then we can talk. Yes, that’s hard. Tough… you’re the one making an outlandish claim, now live with it.
The second thing to notice is that phlogiston isn’t even phlogiston as Noah means it… rather, phlogiston was a perfectly reasonable and testable scientific hypothesis, the study of which led to our understanding of oxidation.
2) You don’t need sticky prices to get demand curves
You don’t need sticky prices to get aggregate demand; rather, sticky prices are the simplest (in modeling terms) way to get rid of Say’s Law while otherwise keeping the market-clearing assumption intact. Now, market clearing is not necessarily a good assumption, but, even more than sticky prices, it is a standard one.
Of course, no microeconomist worth half his or her salt would ever insist on market clearing, because market clearing doesn’t always happen in the real world (look around). Store shelves are rarely bare, there are usually tables empty (or people waiting in line) at restaurants, and some people pay outlandish prices for tickets to sporting events from scalpers even as some seats go unfilled. You can talk all you want about how sticky prices are a bad assumption, but the real problem here is that it’s silly that macroeconomists insist on market clearing.
This is a long-winded way of saying that anything which breaks Say’s Law can substitute for the sticky-price assumption: 1) nominal debt not indexed to inflation, 2) demand for financial assets, or 3) non-stationarity and Knightian uncertainty. I’m sure I’m missing some other possibilities.
These are all “reductionist” explanations and once again, that’s my point. It is the neoclassicist demand-deniers who are flipping the script here and insisting on a systemic explanation for why demand should disappear in the aggregate.
I can go on, but this post is already getting too long. For my take on AS/AD in particular, see this. I think that answers Noah’s implicit objection.
First, I want to apologize for my recent light posting–a combination of being busy and not feeling well meant that something had to give. Still, the last few weeks have been the busiest this blog has ever seen (more attention than I really want, honestly… I’d prefer just to influence the thinking of the more important bloggers), and I don’t think ignoring all that traffic shows respect for my readership. So I’m going to try to up my game and post more regularly this week.
This post, just like my last several (here and here) is again a response to Steve Waldman (indirectly) and his series on social welfare economics. This time I mostly agree with what Waldman wrote… I have some nitpicks with the fifth post in the series, but that’s all.
I’m actually responding to a commenter who suggested in my last post that Waldman is trying to justify a universal basic income. But I also support a UBI! Probably for different reasons than Steve, too. That’s what I want to talk about… I’ll present my thinking here in list form.
1. Efficiency of Unconditional Institutions
Let’s think in terms of two basic institutional structures which might make up the welfare state, two packages of benefits as it were. I’ll call them “conditional” benefits and “unconditional”. These labels are pretty self-explanatory, but let me pedantically spell them out.
There is a state of the world s and a set of all states S. Conditional benefits means that for some s and s’ in S, there are associated benefit levels, b_s and b_s’, which are not the same. Unconditional benefits means that the benefit level, b, is the same in all states of the world S. Simple. Here’s the thing, though: who decides if the state of the world is s or s’?
If the government decides, then conditional benefits mean larger bureaucracies and more fraud. If beneficiaries decide, then you have to worry about creating “truth-telling mechanisms”… that is, you need to think about the cost of encouraging people not to game the system–to reveal their true type, as an economist might put it. The problem with this second approach is that any type-revelation mechanism requires what we call information rents. Which, in non-economese, means that the benefits packages have to be less targeted or less generous than would be ideal.
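To make the information-rent point concrete, here is a toy screening sketch. Everything in it (the two types, the `faking_cost` parameter) is an illustrative assumption, not a model of any actual program: the targeted benefit can exceed the baseline benefit only by the cost of gaming the system.

```python
# Toy screening model: why conditional benefits leak information rents.
# Two types: "needy" and "non-needy". The planner wants b_needy > b_other,
# but the non-needy can mimic the needy at a faking cost `e` (an assumed
# stand-in for whatever gaming the system costs them).
def max_targeted_benefit(b_other: float, faking_cost: float) -> float:
    """Largest needy-type benefit the non-needy won't mimic.

    Incentive compatibility for the non-needy requires
        b_other >= b_needy - faking_cost,
    so targeting is capped at b_other + faking_cost.
    """
    return b_other + faking_cost

# With a low faking cost, conditional benefits can barely be targeted at all:
assert max_targeted_benefit(100.0, 5.0) == 105.0
# Raising the faking cost (i.e. verification) allows more targeting, but that
# verification is exactly the bureaucratic cost described above:
assert max_targeted_benefit(100.0, 50.0) == 150.0
```

The gap between the two cases is the information rent: either you pay it in less-targeted benefits or you pay it in bureaucracy.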
2. Interlocking Macroeconomic Institutions
This is a point I haven’t heard elsewhere, but I think is the single greatest benefit to a UBI… It plays really well with another macroeconomic institution I support: nominal GDP targeting monetary policy.
Consider this. Fund a UBI through a simple flat tax on all income (for simplicity). So, if you earn UBI benefits of w, the after-tax benefit is (1−t)w, and the total tax collected is t × NGDP (I’ll write NGDP as PY from now on). If the population is N, then the budget balances when Nw = tPY. So, if the growth target for nominal income is adjusted for population growth, then the funding stream for the UBI is stable when the UBI grows at the same rate as NGDP. Another way to say the same thing is that wage pressure in the economy always grows at the rate of nominal income growth.
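A minimal numerical sketch of that funding identity (all parameter values are made up for illustration):

```python
# Sketch of the funding identity N*w = t*PY under an NGDP (PY) growth target.
# All numbers are illustrative, not calibrated to any real economy.
def ubi_budget_balance(N, w, t, PY):
    """Tax collected minus UBI paid out: t*PY - N*w."""
    return t * PY - N * w

N, t = 100.0, 0.25          # population and flat tax rate
PY = 10_000.0               # nominal GDP
w = t * PY / N              # balanced-budget UBI: w = t*PY/N
assert ubi_budget_balance(N, w, t, PY) == 0.0

# If NGDP grows on target at rate g and the UBI is indexed to grow at the
# same rate g, the budget stays balanced in every period:
g = 0.05
for _ in range(10):
    PY *= 1 + g
    w *= 1 + g
    assert abs(ubi_budget_balance(N, w, t, PY)) < 1e-9
```

The point of the loop is the institutional complementarity: NGDP targeting makes the tax base predictable, so an NGDP-indexed UBI never drifts out of balance.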
Yet a UBI would be a powerful counter-cyclical automatic stabilizer, rising in importance whenever NGDP falls below target. That reduces how hard the central bank has to work.
That’s not all, though. Suppose NGDP is on target, but there is a supply shock–what happens? Well, inflation rises as a share of nominal growth. The purchasing power of the UBI erodes relative to trend growth. Marginal workers reenter the labor force. So potential GDP growth rises again. Also note that when the UBI is set “too high”, that’s just a kind of adverse supply shock.
3. Work Paternalism Is Bad
Unlike Adam Ozimek, I don’t think it’s OK for libertarians to denounce all forms of paternalism… except, that is, the paternalism of telling people that they necessarily have to work. Un. Cool.
Why do most people drop out of the labor force? 1) Personal reasons, like becoming mothers (or, increasingly, fathers)–that’s something you ought to support if that’s the choice people want to make; 2) professional reasons, like retiring or career changes–these shouldn’t be tied to strict cutoffs like a retirement age but based on personal choices, and normally libertarians are the first to point this out; 3) education–a UBI makes it easier to be a student, and that’s a good thing for the economy. These are all people who are making better choices because of a UBI.
That leaves only two theoretical groups, those who are demoralized by lengthy spells of unemployment (“discouraged workers”) and (theoretically, at least) there are the “parasites”.
In terms of the demoralized, I think people have the sign wrong: a UBI would help keep morale up, because low morale is one of the consequences of living without an income. More than that, an income can help fund hobbies, and hobbies can help prevent skill degradation by giving people an outlet to use some of those skills even without a job.
As for the parasites… to a good approximation, I don’t think they exist. Adam and other libertarians, on the other hand, seem to think that everyone who chooses not to work in a UBI regime must be a parasite. That’s ridiculous. A UBI, at least at the levels people talk about, would be barely enough, or perhaps not even enough, to live on. People who would use their benefits to buy pot and video games would find no money left for rent and groceries, while those who use their benefits for rent and groceries would find no money left to enjoy their free time. These are just not choices that actual human beings would make.
At any rate, that’s my thinking on a UBI. There is one more reason for me supporting a UBI, but that’s an issue which I think will come up when I’m ready to discuss my thesis online.
Steve Waldman has a good post up on welfare economics. That’s a topic I wrote about recently and I agree with almost everything he writes; and in fact I’ve dived into these specific issues in previous posts. I do have two complaints to make, though.
First, I can’t agree with this paragraph:
There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.
So long as we understand utility as a ranking of actions, not as a measure of welfare, then the only reasonably scientific approach is to use ordinal utility. Intensity absolutely is not a property of utility–nor should it be!–but it absolutely is related to welfare. In other words, this is Waldman making the mistake he’s accusing others of making. There are several reasons for this:
1. Science is only interested in things which are measurable, and “intensity of preference” is not measurable (advances in neuroeconomics aside)…
2. At least, it is not measurable in the absence of a smoothly divisible alternative good, i.e. money. I might be able to measure “willingness to pay” in terms of money if I can set up an experiment where participants are forced to reveal how much money they’d be willing to trade. Then, you just measure utility in terms of money. That’s a consistent theory of utility intensity.
3. The problem is that the theory in (2) can be duplicated as an ordinal theory just by recognizing that the bundle which the agent is bidding on is x = (1, m−p), where m is the money she started with, p is the revealed willingness to pay, and ‘1’ represents getting the good. So the bid, p, satisfies u(1, m−p) > u(0, m). With that, I can order these two vectors.
4. More succinctly, the economist can expect to observe “buy” or “don’t buy” at p, which is a binary relationship, and intensity is at best bounded by p.
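The ordinal reconstruction in the list above can be written out in a few lines. The utility function here is an arbitrary increasing function chosen only for illustration; any monotone choice gives the same buy/don’t-buy ordering, which is the whole point:

```python
# Ordinal reading of "willingness to pay": the economist only observes the
# binary buy/don't-buy decision at a posted price p, i.e. whether
# u(1, m - p) > u(0, m). The functional form of u is an arbitrary assumption.
import math

def u(has_good: int, money: float) -> float:
    # Any increasing function works; only the ordering matters.
    return 2.0 * has_good + math.log1p(money)

def buys(m: float, p: float) -> bool:
    """The binary observable: does the agent buy at price p?"""
    return u(1, m - p) > u(0, m)

m = 10.0
assert buys(m, 1.0)        # cheap enough: buy
assert not buys(m, 9.99)   # too expensive: don't buy
```

Nothing in the observed data distinguishes this ordinal story from a cardinal “intensity” story, which is the sense in which intensity is at best bounded by p.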
On the other hand, attributing some value to the psychological satisfaction of consuming something–such as attributing an intensity to preference–is the very goal of welfare economics. Yes, that’s hard, but it’s hard precisely because that psychological satisfaction isn’t observable to the economist/scientist.
My second issue with the post is this:
But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it can not or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of a precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.
Let’s ignore the deep issues with ordinal vs. cardinal utility for the moment (I’ll return to that issue in a moment) and start with the social planner (SP)–remember from my previous post that the social planner is a thought experiment!–who wants to maximize welfare. The social planner’s objective function, of course, depends on the values we assign to the social planner (as an aside, I have research which, I think, cleverly gets around this problem). That is, welfare can take any form depending only on the biases of the researcher and what he/she can get away with. Let’s think about that…
Suppose the SP’s welfare maximization problem takes the basic form W(u1, u2, …, uN)–for simplicity, assume I’ve ordered the population so that wealth is decreasing, w1 > w2 > … > wN–and let’s consider only small perturbations in allocation space (e.g., person 1 gives a slice of bread to person N, slightly decreasing u1 and increasing uN). The question for the moment is whether such a transfer is consistent with welfare maximization. The answer is that it almost certainly is.
Why? Because for a small disturbance W(.) is approximately linear–all differentiable functions are approximately linear with respect to small enough disturbances. So W(.) looks like a weighted sum, W ≈ a1.u1 + a2.u2 + … + aN.uN, around a feasible allocation x = (x1, x2, …, xN), for changes small enough not to affect potential complications like incentives or aggregate production. I claim that moving that epsilon from x1 to xN must increase W as long as a1 is not too much larger than aN. This is just because the utilities ui are concave: a small decrease in the rich agent’s allocation barely decreases W, while a small increase in the poor agent’s allocation increases W a lot (to see this, just write the new welfare as W’ = W – (a1.du1 – aN.duN), where du1 is the decrease in u1 and duN is the increase in uN). Remember that this holds in complete generality.
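Here’s a quick numerical check of that claim. This is a sketch of my own: the log utilities, the planner weights, and the allocations are all arbitrary illustrative choices, and I’ve even given the rich agent a somewhat larger weight.

```python
import math

# Hypothetical allocations, ordered rich to poor (w1 > w2 > ... > wN)
x = [100.0, 50.0, 20.0, 5.0]

# Concave utilities: u_i(c) = log(c) (an arbitrary concave choice)
def u(c):
    return math.log(c)

# Planner weights a_i -- the rich agent's weight is deliberately larger
a = [1.2, 1.0, 1.0, 1.0]

def W(alloc):
    return sum(ai * u(xi) for ai, xi in zip(a, alloc))

eps = 0.01  # a slice of bread
before = W(x)
x_after = [x[0] - eps] + x[1:-1] + [x[-1] + eps]
after = W(x_after)

# Marginal utility at wealth 5 is roughly 20x that at wealth 100,
# so even with a1 > aN the transfer raises weighted welfare.
print(after > before)
```

The marginal-utility gap at the bottom swamps the weight advantage at the top, which is exactly the “a1 not too much larger than aN” condition in action.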
Oh! I hear you saying… you’re forgetting about the ordinal vs cardinal objection! Unscientific!
Not quite. I do have to make some modifications, though. First, recall that the SP is a thought experiment, and it’s perfectly reasonable for a thought experiment to have a telescope directed into the souls of agents–so long as we make our assumptions clear. Second, I can get around this problem in practice as well. Instead of u1(x1), u2(x2), …, uN(xN), suppose the SP takes as given u1(w(x1; p)), u2(w(x2; p)), …, uN(w(xN; p)), where w(x; p) is a function which imputes wealth from the market value, p, of the consumption bundle x.
Now, the only function of utility in the SP’s decision problem is to account for the marginal utility of wealth. That can be accounted for simply by making W(w1, w2, …, wN) concave in all its arguments. But that’s just the same problem as I had above with utility! In other words, as long as we remember to keep track of concavity, wealth is a proxy for utility as far as a social planner is concerned. Using wealth as a proxy for utility this way gets me around the ordinal/cardinal objection.
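To see the wealth-as-proxy point concretely, here’s another sketch of my own (the square-root form of W is an arbitrary concave choice): the planner never touches a utility function, only a function that is concave in each agent’s wealth, and the transfer result goes through unchanged.

```python
import math

# Wealths ordered rich to poor; no utility functions in sight.
w = [100.0, 50.0, 20.0, 5.0]

# A welfare function concave in each argument
# (sqrt is arbitrary; any strictly concave shape works).
def W(wealths):
    return sum(math.sqrt(wi) for wi in wealths)

eps = 0.01
w_after = [w[0] - eps] + w[1:-1] + [w[-1] + eps]

# Concavity over wealth alone favors the rich-to-poor transfer.
print(W(w_after) > W(w))
```

Nothing here requires interpersonal comparison of psychological satisfaction; concavity of W in wealth is doing all the work.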
Now, as soon as I specify the form W takes, I move back out of the realm of science and back into the realm of values, but the point is that there are conclusions that we can draw without any other assumptions.
- There is a strong bias towards redistribution from the rich to the poor in any social welfare problem, ordinal utility or no.
- How much redistribution, though, is basically never a scientific problem… it depends on the form W takes, which depends on the values assigned to the SP.
- The only concave welfare function which does not support redistribution is the linear one: W = b1.w1 + b2.w2 + … + bN.wN. But notice that with equal weights this function is simply indifferent to distribution!
In effect, this linear welfare function is the one most conservative economists are implicitly using. For example, it is literally the only welfare function under which the opening up of a Harberger triangle would be a decisive issue in public policy. Yet–as you might have noticed if you followed my discussion above–the linear social welfare function, even for small changes, can only be a true representation of the SP’s problem if the marginal utility of wealth is constant, when it should be diminishing.
That’s the problem.