Scientific Welfare Theory

Steve Waldman has a good post up on welfare economics. That's a topic I wrote about recently, and I agree with almost everything he writes; in fact, I've dug into these specific issues in previous posts. I do have two complaints to make, though.

First, I can’t agree with this paragraph:

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

So long as we understand utility as a ranking of actions, not as a measure of welfare, the only reasonably scientific approach is to use ordinal utility. Intensity absolutely is not a property of utility–nor should it be!–but it is absolutely related to welfare. In other words, this is Waldman making the mistake he's accusing others of making. There are several reasons for this:

  1. Science is only interested in things which are measurable, and “intensity of preference” is not measurable (advances in neuroeconomics aside)…
  2. At least it is not measurable in the absence of a smoothly varying alternative, i.e. money: I might be able to measure "willingness to pay" in terms of money if I can set up an experiment where participants are forced to reveal how much money they'd be willing to trade. Then you just measure utility in terms of money. That's a consistent theory of utility intensity.
  3. The problem is that the theory in (2) can be duplicated as an ordinal theory just by recognizing that the bundle the agent is bidding on is x=(1, m-p), where m is the money she started with, p is the revealed willingness to pay, and '1' represents getting the good. So the bid, p, satisfies u(1, m-p) > u(0, m). With that, I can order these two vectors.
  4. More succinctly, the economist can expect to observe “buy” or “don’t buy” at p, which is a binary relationship and intensity is at best bounded by p.
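To make points (2)–(4) concrete, here is a minimal sketch; the log-utility functional form and the numbers are my own illustrative assumptions, not anything from the post:

```python
import math

def u(good, money):
    """Hypothetical concave utility over (good, money) -- assumed for
    illustration; any increasing concave form works the same way."""
    return 2.0 * good + math.log(money)

m = 100.0  # money the agent starts with

# The agent bids p whenever u(1, m - p) > u(0, m): a binary, ordinal
# comparison.  Her maximum willingness to pay solves u(1, m - p) = u(0, m),
# which for this utility gives p_max = m * (1 - e^{-2}).
p_max = m * (1.0 - math.exp(-2.0))

assert u(1, m - p_max / 2) > u(0, m)   # accepts any price below p_max
assert u(1, m - 99.0) < u(0, m)        # rejects a price above p_max
```

All the economist ever observes is which side of the inequality the agent lands on, so any "intensity" recovered from the experiment is at best bounded by the bid p.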

On the other hand, attributing some value to the psychological satisfaction of consuming something–such as attributing an intensity to preference–is the very goal of welfare economics.   Yes, that’s hard, but it’s hard precisely because that psychological satisfaction isn’t observable to the economist/scientist.

My second issue with the post is this:

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it can not or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of a precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

Let’s set aside the deep issues with ordinal vs. cardinal utility (I’ll return to them below) and start with the social planner (SP)–remember from my previous post that the social planner is a thought experiment!–who wants to maximize welfare.  The social planner’s objective function, of course, depends on the values we assign to the social planner (as an aside, I have research which, I think, cleverly gets around this problem).  That is, welfare can take any form depending only on the biases of the researcher and what he/she can get away with.  Let’s think about that…

Suppose the SP’s welfare maximization problem takes the basic form W(u1,u2,…,uN)–for simplicity let’s assume that I’ve ordered the population so that wealth is ordered w1>w2>…>wN–and let’s only consider small perturbations in allocation space (i.e. agent 1 gives a slice of bread to agent N, slightly decreasing u1 and increasing uN).  The question for the moment is whether such a transfer is consistent with welfare maximization.  The answer is that it almost certainly is.

Why?  Because for a small disturbance W(.) is approximately linear–all functions are approximately linear with respect to small enough disturbances.  So W(.) looks like a weighted sum, W = a1.u1 + a2.u2 + … + aN.uN, approximately, around a feasible allocation x = (x1, x2, …, xN) whose change is small enough not to affect potential complications like incentives or aggregate production.  I claim that moving that epsilon from x1 to xN must increase W as long as a1 is not too much larger than aN.  This is just because the utilities ui are concave, so that a small decrease in the rich agent’s allocation doesn’t decrease W much, while a small increase in the poor agent’s allocation increases W a lot (to see this, just write the new W as W’ = W – a1.du1 + aN.duN, where du1 is the decrease in u1 and duN is the increase in uN).  Remember that this is in complete generality.
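Here is a numerical version of that argument; the log utilities and equal planner weights are my illustrative assumptions, while the post's claim holds in general:

```python
import math

# Locally linear welfare W = sum_i a_i * u_i(w_i) with concave
# (here: log) utilities -- an assumed functional form.
def W(wealth, a):
    return sum(ai * math.log(wi) for ai, wi in zip(a, wealth))

wealth = [100.0, 50.0, 10.0]   # w1 > w2 > w3
a = [1.0, 1.0, 1.0]            # planner weights a_i, equal for illustration
eps = 0.1                      # the "slice of bread"

before = W(wealth, a)
wealth[0] -= eps               # take a little from the richest...
wealth[-1] += eps              # ...and give it to the poorest
after = W(wealth, a)

# Concavity at work: marginal utility is ~1/100 for the rich agent but
# ~1/10 for the poor one, so the small transfer raises W.
assert after > before
```

The transfer raises W for any weights with a1/aN below the ratio of the marginal utilities (about 10 here), which is exactly the "a1 not too much larger than aN" condition above.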

Oh!   I hear you saying… you’re forgetting about the ordinal vs cardinal objection!   Unscientific!

Not quite.   I do have to make some modifications, though.  First, recall that the SP is a thought experiment and it’s perfectly reasonable for a thought experiment to have a telescope directed into the souls of agents–so long as we make our assumptions clear.   Second, I can get around this problem in practice as well.   Instead of u1(x1), u2(x2)…uN(xN), suppose the SP takes as given u1(w(x1; p)), u2(w(x2;p)),…uN(w(xN;p)), where w(x;p) is a function which imputes wealth from the market value, p, of the consumption bundle x.

Now, the only function of utility in the SP’s decision problem is to account for the marginal utility of wealth.  That can be accounted for simply by making W(w1,w2,…,wN) concave in all its arguments.  But that’s just the same problem as I had above with utility!  In other words, as long as we remember to keep track of concavity, wealth is a proxy for utility as far as the social planner is concerned.  Using wealth as a proxy for utility this way gets me around the ordinal/cardinal objection.

Now, as soon as I specify the form W takes, I move back out of the realm of science and back into the realm of values, but the point is that there are conclusions that we can draw without any other assumptions.

  1. There is a strong bias towards redistribution from the rich to the poor in any social welfare problem, ordinal utility or no.
  2. How much redistribution, though, is basically never a scientific problem… it depends on the form W takes, which depends on the values assigned to the SP.
  3. The only concave welfare function which does not support redistribution is the linear welfare function with equal weights: i.e. W = b.w1 + b.w2 + …. + b.wN.   But notice that this is indifferent to distribution!
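A quick check of point (3)–equal weights b assumed, per the indifference claim:

```python
def W_linear(wealth, b):
    """Linear 'welfare': no diminishing marginal utility of wealth."""
    return sum(bi * wi for bi, wi in zip(b, wealth))

wealth = [100.0, 50.0, 10.0]
b = [1.0, 1.0, 1.0]            # equal weights: the indifference case
eps = 0.1

before = W_linear(wealth, b)
wealth[0] -= eps               # the same bread transfer as before...
wealth[-1] += eps
after = W_linear(wealth, b)

# ...now leaves welfare exactly unchanged: the linear welfare function
# is indifferent to distribution.
assert abs(after - before) < 1e-9
```

With unequal weights the transfer changes W by eps·(bN – b1), so any distributional verdict comes entirely from the weights–that is, from values, not from science.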

In effect, this linear welfare function is the one most conservative economists are implicitly using.   For example, it is literally the only welfare function in which the opening up of a Harberger triangle would be a decisive issue in public policy.   Yet–you might have noticed if you followed my discussion above–the linear social welfare function, even for small changes, can only be a true representation of the SP’s problem if the marginal utility of wealth is constant, when it should be diminishing.

That’s the problem.

  1. June 1, 2014 at 3:37 am

    “So long as we understand utility as a ranking of actions, not as a measure of welfare, then the only reasonably scientific approach is to use ordinal utility.”

    I’d be curious as to how you square this with choice under uncertainty. The standard approach to choices with uncertain outcomes is to say that an individual chooses to maximize their expected utility. But with this the utility function is no longer merely ordinal, as other ordinally equivalent utility functions would describe different behavior.

    I’ve always found the expected utility approach somewhat lacking — for one thing the choice of the mean as a stand in for a distribution of outcomes is really a quite arbitrary one in many cases. But I don’t see any alternatives that could be built on purely ordinal utility functions — are you aware of any work along these lines?

    • BSEconomist
      June 1, 2014 at 10:03 am

      That’s a really good question, probably deserving of a post in and of itself.

      I think the short answer is that expected utility theory is bunk. Literally. As in demonstrated not to work in the lab–the Allais paradox and all that.

      That point out of the way… the specific issue is that you can’t sum utilities across states if the utilities are ordinal. True. But the question we have to ask ourselves is whether utility should be state-separable at all. The fact that one of our best theories of choice under uncertainty–prospect theory–has no axiomatic foundation says everything you need to know about the difficulty of separating utility by states. Human beings just don’t think like that.

      My thinking here is that we can probe the state dependence of utility just by asking people about their preferences for gambles–that’s pretty standard, really. That gets rid of the ordinality problem…sort of. The choice problem still involves both the cardinal nature of the utility and the risk of the gamble in a way that the scientist has little or no hope of disentangling.

      For instance, I can offer someone a safe $50, or a 10% chance for a nice watch (worth $500 on the market)… does my experiment tell me how much the consumer wants the watch or does it tell me how she discounts that value because of the odds?

      In some sense, the standard formulation from the textbooks gets around the problem. Replace the watch with $500. Now her choice tells me something about her risk tolerance. If she took the gamble she’s risk neutral, right? But I can say something unambiguous about risk tolerance in this case only because money is continuously varying–which means I can treat the ordinal problem as a cardinal one–so that I can attribute her choice entirely to her tolerance for risk.
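That textbook version of the point can be sketched numerically; the linear and log utilities and the $100 starting wealth are my assumptions for illustration:

```python
import math

w0 = 100.0  # starting wealth, assumed

def expected_u(u):
    """Expected utility of the two options: a safe $50 vs. a 10% shot at $500."""
    safe = u(w0 + 50.0)
    gamble = 0.9 * u(w0) + 0.1 * u(w0 + 500.0)
    return safe, gamble

# Risk-neutral (linear) utility: the two options have equal expected
# utility, so taking the gamble is consistent with risk neutrality.
safe, gamble = expected_u(lambda c: c)
assert abs(safe - gamble) < 1e-9   # 150 == 0.9*100 + 0.1*600

# Risk-averse (concave) utility: the safe $50 wins.
safe, gamble = expected_u(math.log)
assert safe > gamble
```

Because the prize is money, which varies continuously, her choice pins down risk tolerance alone; with the watch, the same observation confounds how much she values the watch with how she discounts the odds.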

      Mas-Colell (standard graduate micro text) actually has a section using this property to recreate cardinal utilities from ordinal ones, but that only works if the expected utility framework holds, which it famously does not.

      Not a very satisfying answer, but then there is still active research in this area.

    • BSEconomist
      June 1, 2014 at 10:18 am

      That last response was so rambling I forgot to answer the question! I did drop some good starting points (the Allais paradox, prospect theory).

      The father of the ordinal approach, though, is probably the best place to start: Armen Alchian… although of course he’s not doing research anymore, since he died last year. There was also a guy recently hired by Stanford who did his thesis on an axiomatization of the Allais paradox… I don’t remember his name, though… and I never got around to reading that paper.

      Then there’s Bewley, who did some work on Knightian uncertainty (“unknown unknowns”) and comes up with something that looks a bit like prospect theory–although his papers are some of the most difficult to read I’ve ever come across.

      There’s probably more out there, but I’d suggest searching some of those names/terms in google scholar and in no time you’ll know more about the subject than I do.

  2. June 3, 2014 at 1:36 am

    “I think the short answer is that expected utility theory is bunk. Literally. As in demonstrated not to work in the lab–the Allais paradox and all that.”

    While I agree with this, my agreement is tempered by the fact that I am quite confident I will never see in my lifetime a model of human choice that is not demonstrated not to work in the lab, or at least I’ll never see one that isn’t far more useless as a theory of anything. Even without uncertainty there are cases (e.g. framing effects) where utility functions are insufficient to describe human behavior, ordinal or otherwise. But in spite of this I think it’s at least as big a problem that utility frameworks are a (failed!) attempt at a theory of anything, and that more restrictive frameworks that often (but not always) make good predictions about human behavior would be preferable.

    I looked at prospect theory, but it looks like it still requires cardinal utility functions to make predictions. Ultimately I find cardinal utility functions reasonable just because I know first-hand that preferences have intensities; it is entirely plausible that they will be measurable in the future, and we already gauge the intensities of others’ preferences in unscientific ways in day-to-day interactions. However, I don’t think that is relevant to macroeconomics; otherwise we would redistribute from the easy-going to the picky and demanding. Any social utility function is a statement of assumptions about what is ethical, not a statement about the relative strengths of preferences.

    • BSEconomist
      June 3, 2014 at 7:38 am

      First, it’s important not to confuse expected utility and utility. The reason that the Allais paradox exists at all is that we can probe revealed preferences–an ordinal concept of utility–and compare the results to the predictions of expected utility theory–a cardinal concept of utility. That the two disagree is a mark in favor of the ordinal approach not against it.

      Prospect theory is also a cardinal theory, but only because there’s not really an alternative theory for choice under uncertainty which is ordinal.

      Nevertheless, there are ordinal theories out there passing empirical tests–the discrete choice model comes immediately to mind.

      As for your last point, my post outlines how one might go about developing a theory of social welfare which is not vulnerable to your objection. Wealth is a proxy for utility level–a higher level of wealth implies a higher level of utility–and in the ordinal utility view, that is everything we need to know. This is actually a more consistent view: ordinal utilities cannot be summed across a population, so it makes no sense to say that redistribution goes to the picky and demanding. Yet wealth can be summed, and as a proxy for utility that means that redistribution should always go from the rich to the poor.

  3. June 21, 2014 at 9:33 am

    There’s an easier way to pee in Steve’s Cheerios. I, like you, think he’s a great guy who lately has wandered further off the end of the branch, but remember where he always wants to go:

    Universal Income

    My plan is Guaranteed Income / Choose Your Boss, which for any amount of welfare redistribution maximizes the purchasing power of said welfare within the bounds of maximizing choice of labor:


    I’d go so far as to use Uber for Welfare as slave reparations:


    Steve doesn’t LIKE admitting that requiring work will increase consumption for the poor, but he will admit it.

    When he’s cornered, he will then say, well maximizing consumption for the poor isn’t the goal.

    And that’s the deal right there. In all of his take a slice of bread, give a slice of bread machinations, he DOESN’T want to have to be efficient – at all.

    Steve’s ultimate fallback position if you push, is that being efficient isn’t that big a deal, all that matters is reducing inequality. He’s similarly not interested in maximizing for faster technology adoption, productivity gains.

    What Steve wants is to let every human be able to decide to work or not, and still live a life that is “nearly” as good as everyone else.

    To be charitable, it’s more like Steve HOPES that if everyone is given a livable income, they will still go out and produce, and he’s prepared to admit later if it doesn’t work out.

    But he really ought to just keep repeating that one thing, bc all the other stuff, if he finds it gets in the way of that one end experiment he wants to do, he’ll throw it overboard.

    • BSEconomist
      June 21, 2014 at 6:02 pm

      Assuming that Steve’s posts on welfare are, as you say, intended to push his preferred policy (universal basic income) then I have to say that I’m all on board with that. In fact, UBI is my own preferred way of organizing the welfare state… I have my own reasons for that, which perhaps I’ll find the time to write a post about some day.

  4. June 21, 2014 at 9:49 pm

    Again, every bit of logic for UBI requires Choose Your Boss (the work requirement).

    Any resistance to CYB undermines the argument for UBI.

    It’s never about free money, it’s ALWAYS about wage subsidies. The “good” is always amplified when prices fall in poor areas (bc of work requirement).

    • BSEconomist
      June 23, 2014 at 8:46 am

      The disagreement is at least partly that I don’t think it is wise to base support on any sort of conditionality–especially work. As I hinted in my last response I have other reasons to support UBI, but the lack of any kind of work requirement, however benign, is part of the reason I support it.

      I think that I’ll try to write up that post laying out my case sometime in the next week. Then we can discuss the problem on the basis of my actual views concerning UBI or CYB or whatever.

