
Archive for the ‘philosophy of science’ Category

How I see the world: politics and political economy

August 8, 2014

Brad DeLong tries to make sense of Hayek.  At the risk of making an @$$ of myself, I have to respectfully disagree.  The way I see it, libertarian-ish types have been trying to “claim” descent from Adam Smith for generations.  The problem is that, if Smith is “classical liberalism” incarnate, then libertarians have no more claim to him than the socialists do.

To try and illustrate why, I’m including a cladogram of the major currents of thought in the philosophy of political economy.

My view of the evolution of economic political philosophy

Maybe this is right, maybe it’s wrong.  But let’s just assume it’s right for the moment.  I want to discuss the branches I’ve labeled 1-4.

  1. The classical liberalism of Adam Smith.  It is “classical” because Smith more-or-less invents the subject as it is now understood.  Smith has some libertarian-ish views, or libertarians would not try to claim him, which include things like opposition to monopoly (in his day, entirely government-created) and a belief in the effectiveness of market solutions (“invisible hand”).  He also had some non-libertarian views, such as a wariness of private collusion (“[capitalists] seldom meet… but the conversation ends in a conspiracy against the public”) and a belief in the importance of social well-being (“… some principles in [man’s] nature… interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it”).
  2. Marx.  It’s hard for me to look at Marx and not see Adam Smith: look how close they are!  The truth is that Marx is best thought of as one branch of the first major split in classical liberalism.  I’ll come back to that split in (4), but for now just think of socialism as classical liberalism with a heavy emphasis on the well-being of the working class.  This strain of liberalism has a complicated relationship with government (Marx himself is probably closest to the Left Anarchists).
  3. Laissez-Faire.  As with Marx, the Laissez-Faire branch of liberalism is one of the two major branches leading away from classical liberal thought.  In this case, though, there is a heavy emphasis on the well-being of capitalists and a complicated view of monopoly.  Unlike the libertarians out there, I tend to view this strain as all but dead as an intellectual force, but there are lines of influence from these ideas to more modern theories.
  4. Ricardo and the direct intellectual descendants of Smith.  Ricardo himself may have leaned right-ish a bit, but I think it’s fair to characterize his view as opposed to the landed aristocracy.  As such he embodies both socialist (pro-labor) and laissez-faire (pro-capital) views.  Ricardo may have leaned more towards capital politically, but his theory of comparative advantage was built assuming that labor is the only important input.

My takeaway from this is that the most important cleavage in liberal thought has to do with the factor of production that each strain identifies with.

Laissez-Faire thought identifies with the capitalists and quickly incorporates Nietzschean notions of “supermen” to justify the super-wages which the capitalists earn within this otherwise liberal tradition.  The incorporation of Nietzsche makes the Laissez-Faire strain the least classical, since Adam Smith himself is (I think, but I could be wrong) borrowing from Kant as his moral philosophy guide.

Socialist thought identifies with labor, but quickly recognizes that labor needs to band together in some way (recognizing a power disparity between owner and worker).  Thus, along this strain there is a mighty back and forth over the role of government where the importance of the debate is often obscured by the fact that all these strains agree on the goal of an empowered workforce.

Then, there’s the neoliberal tradition.  Direct descendants of Smith, this is basically the mainstream of the economics profession.  What separates this tradition is precisely its indifference between labor and capital, which it handles interchangeably.  I don’t include every branch here, but I’d place Polanyi along with the institutionalists.

Hayek, obviously, is not here along the neoliberal branch.  Instead, it’s hard for me to view the Austrian school as anything other than the last surviving branch of the laissez-faire strain.

Now maybe all this is wrong.  Although all I’m really doing here is classifying schools of thought using a standard clade-like (i.e. evolutionary) approach: the strains of thought that are most alike are treated as most closely related.  But if I am wrong, I’d like to know why.  Rearrange the family tree and show me.

Noah Smith catches the Demand-Denialist Bug

I like Noah Smith, but his scientific-skepticism meme-immune system appears to be very weak.  The latest case in point is Noah’s post defending the use of Search Theory in macroeconomics against John Quiggin, who is rightly pointing out that Search Theory is incapable of explaining cyclical unemployment.  I’m not really going to add to what Quiggin wrote; instead, I’m only interested in Noah’s response.  Before I go on, I should link to Noah’s excellent critique of Kartik Athreya’s Big Ideas in Macroeconomics, to which Quiggin is responding.

Conceding that Search Theory doesn’t explain all the employment patterns, Noah goes on to criticize “demand” explanations:

This is a simple answer… Economists are used to thinking in terms of supply and demand, so the AD-AS model comes naturally to mind… so we look at the economy and say “it’s a demand problem”.

But on a deeper level, that’s unsatisfying – to me, at least.

…what causes aggregate demand curves to shift?…how does aggregate demand affect unemployment? The usual explanation for this is downward-sticky nominal wages. But why are nominal wages downward-sticky? There are a number of explanations, and again, these differences will have practical consequences.

… is an AD-AS model really a good way of modeling the macroeconomy?… The idea of abandoning the friendly old X of supply-and-demand is scary, I know, but maybe it just isn’t the best descriptor of booms and recessions…

… I’m not really satisfied by the practice of putting “demand in the gaps”. If “demand” is not to be just another form of economic phlogiston, we need a consistent, predictive characterization of how it behaves…

Wow is that a lot of BS jammed into a short space.  Noah is a strong proponent of a more empirical and predictive macroeconomics, which I agree with!  But this post suggests that Noah doesn’t understand the other side of the problem: model selection and Occam’s razor.

How do you know which model is the correct one?  You can’t just say that it’s the model that survived empirical tests, because there are an infinite number of possible models at any given time which have survived those tests.  All the data you’ve collected tells you exactly nothing until you figure out which of that continuum of possible models you should treat as the preferred one.  Occam’s razor plays the keystone role in scientific methodology as the selection criterion.  (If you were a philosopher of science you’d spend a lot of time trying to justify Occam’s razor–Karl Popper believed it picks out the most easily falsified model among the alternatives–but as a practical scientist you can just accept it.)

Now that we’ve all accepted that Occam’s razor must be used to winnow our choice of models, we should spend some time thinking about how to use it in practice.  That would require a post in itself, so instead let me just mention one particular criterion I use: at any given time, who is the claimant?  In science, the burden of proof is always on the claimant, because the claimant’s model is almost always less simple than the accepted model given the field’s accepted information set.

As a heuristic, the claimant’s model generally does not pass the test of Occam’s razor until new information is found and added to the field’s information set.  It’s possible (and does happen) that a heretofore unknown or unnoticed model is simpler than the accepted one, but that’s rarer than you might think and not generally how science proceeds.
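
To make the razor concrete, here is a minimal sketch–entirely my own illustration with made-up data, not anything from Noah’s or Quiggin’s posts–of complexity-penalized model selection: two candidate models fit the same data about equally well, and the Bayesian information criterion (one practical stand-in for Occam’s razor) prefers the simpler one.

```python
# A minimal sketch of Occam's razor as a model-selection criterion (BIC).
# Hypothetical data; both models "survive" the fit, the complexity penalty breaks the tie.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.size)  # the "truth" here is linear plus noise

def bic(degree):
    """Fit a polynomial of the given degree and return its BIC (lower is better)."""
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    n, k = x.size, degree + 1
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

for degree in (1, 5):
    print(f"degree {degree}: BIC = {bic(degree):.1f}")
# Both polynomials explain the data; the simpler (degree-1) model gets the lower BIC.
```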

With all that out of the way, what’s my problem with Noah’s post?  Two things:

1)  Demand is not phlogiston

For those not in the know, phlogiston was a hypothetical substance supposed to be released during combustion.  The theory was rendered obsolete by the discovery of oxygen and the modern theory of combustion.

Basically, what Noah is saying here is that maybe demand, like phlogiston, is a hypothetical piece of a theory, and that piece may be unnecessary.  Now, science certainly does produce phlogiston-like theories from time to time, but these tend to be the result of trying to tweak systemic models: you have a theory of elements (at the time of phlogiston, a sort of proto-elemental atomic theory) and a substance (fire) which you can’t explain, so you add an element to your model to explain the substance.

The first thing to point out is that demand is a reductionist phenomenon in the strictest sense.  The smallest unit of a macroeconomy (the atom, if you will) is the transaction, and a single transaction has a well-defined demand: how much the buyer is willing to trade for the item being transacted.  So the neoclassicals are the claimants here: they’re saying that there is an emergent phenomenon in which demand becomes irrelevant for the macroeconomy.  They are using an updated version of Say’s Law to argue that demand goes away, not that it never existed–that would be crazy.
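
To illustrate the reductionist point, here is a toy sketch–the reservation prices are made-up numbers of my own, purely for illustration–that builds a market demand curve from nothing but individual buyers’ willingness to pay, the “atoms” of the story:

```python
# Toy sketch: market demand built up from individual, transaction-level willingness to pay.
# The reservation prices below are hypothetical numbers, for illustration only.
reservation_prices = [9.0, 7.5, 7.0, 5.0, 4.5, 3.0, 2.5, 1.0]

def quantity_demanded(price):
    """Units bought at a given price: every buyer whose willingness to pay covers it."""
    return sum(1 for wtp in reservation_prices if wtp >= price)

for price in (8.0, 6.0, 4.0, 2.0):
    print(f"price {price:>4}: quantity demanded = {quantity_demanded(price)}")
# Aggregate demand here is nothing more than the sum of well-defined individual demands.
```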

Show me the evidence that it doesn’t exist, then we can talk.   Yes, that’s hard.   Tough… you’re the one making an outlandish claim, now live with it.

The second thing to notice is that phlogiston isn’t even phlogiston as Noah means it… rather, phlogiston was a perfectly reasonable and testable scientific hypothesis, the study of which led to our understanding of oxidation.

2)  You don’t need sticky prices to get demand curves

You don’t need sticky prices to get aggregate demand; rather, sticky prices are the simplest (in modeling terms) way to get rid of Say’s Law while otherwise keeping the market-clearing assumption intact.  Now, market clearing is not necessarily a good assumption, but it is an even more standard one than sticky prices.

Of course, no microeconomist worth half his or her salt would ever think market clearing is necessary, because market clearing doesn’t always happen in the real world (look around).  Store shelves are rarely bare, there are usually empty tables (or people waiting in line) at restaurants, and some people pay outlandish prices for tickets to sporting events from scalpers even as some seats go unfilled.  You can talk all you want about how sticky prices are a bad assumption, but the real problem here is that it’s silly that macroeconomists insist on market clearing.

This is a long-winded way of saying that anything which breaks Say’s Law can substitute for the sticky-price assumption: 1) nominal debt not indexed to inflation, 2) demand for financial assets, or 3) non-stationarity and Knightian uncertainties.  I’m sure I’m missing some other possibilities.

These are all “reductionist” explanations and once again, that’s my point.   It is the neoclassicist demand-deniers who are flipping the script here and insisting on a systemic explanation for why demand should disappear in the aggregate.

I can go on, but this post is already getting too long.  For my take on AS/AD in particular, see this.  I think that answers Noah’s implicit objection.

Scientific Welfare Theory

May 30, 2014

Steve Waldman has a good post up on welfare economics.  That’s a topic I wrote about recently, and I agree with almost everything he writes; in fact, I’ve dived into these specific issues in previous posts.  I do have two complaints to make, though.

First, I can’t agree with this paragraph:

There is nothing inherently more scientific about using an ordinal rather than a cardinal quantity to describe an economic construct. Chemists’ free energy, or the force required to maintain the pressure differential of a vacuum, are cardinal measures of constructs as invisible as utility and with a much stronger claim to validity as “science”.

So long as we understand utility as a ranking of actions, not as a measure of welfare, the only reasonably scientific approach is to use ordinal utility.  Intensity absolutely is not a property of utility–nor should it be!–but it absolutely is related to welfare.  In other words, this is Waldman making the mistake he’s accusing others of making.  There are several reasons for this:

  1. Science is only interested in things which are measurable, and “intensity of preference” is not measurable (advances in neuroeconomics aside)…
  2. At least it is not measurable in the absence of a smoothly divisible alternative such as money; i.e. I might be able to measure “willingness to pay” in terms of money if I can set up an experiment where participants are forced to reveal how much money they’d be willing to trade.  Then, you just measure utility in terms of money.  That’s a consistent theory of utility intensity.
  3. The problem is that the theory in (2) can be duplicated as an ordinal theory just by recognizing that the bundle which the agent is bidding on is x=(1,m-p), where m is the money she started with, p is the revealed willingness to pay and ‘1’ represents getting the good.  So the bid, p, satisfies u(1,m-p)>u(0,m).  With that, I can order these two vectors (see the sketch just after this list).
  4. More succinctly, the economist can expect to observe “buy” or “don’t buy” at p, which is a binary relationship and intensity is at best bounded by p.
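
Here is a minimal sketch of point (3)–the utility function and numbers are hypothetical, chosen only for illustration–showing how a “cardinal” willingness-to-pay measurement is re-expressed as a purely ordinal comparison of the two bundles x=(1,m-p) and (0,m):

```python
# Sketch of the ordinal re-description of a willingness-to-pay experiment.
# The utility function and the numbers are hypothetical, for illustration only.
import math

def u(good, money):
    """Utility over (good in {0,1}, money left); any increasing u tells the same story."""
    return 2.0 * good + math.log(money)

m = 100.0      # money the agent starts with
p_bid = 30.0   # an observed (revealed) bid for the good

# "Cardinal" reading: p_bid measures intensity of preference in money terms.
# Ordinal reading: all the bid reveals is a ranking of two bundles.
buys = u(1, m - p_bid) > u(0, m)
print("bundle (1, m-p) ranked above (0, m):", buys)
# The economist only ever observes "buy" or "don't buy" at p -- a binary, ordinal fact.
```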

On the other hand, attributing some value to the psychological satisfaction of consuming something–such as attributing an intensity to preference–is the very goal of welfare economics.   Yes, that’s hard, but it’s hard precisely because that psychological satisfaction isn’t observable to the economist/scientist.

My second issue with the post is this:

But before we part, let’s think a bit about what it would mean if we find that we have little basis for interpersonal welfare comparisons. Or more precisely, let’s think about what it does not mean. To claim that we have little basis for judging whether taking a slice of bread from one person and giving it to another “improves aggregate welfare” is very different from claiming that it can not or does not improve aggregate welfare. The latter claim is as “unscientific” as the former. One can try to dress a confession of ignorance in normative garb and advocate some kind of a precautionary principle, primum non nocere in the face of an absence of evidence. But strict precautionary principles are not followed even in medicine, and are celebrated much less in economics. They are defensible only in the rare and special circumstance where the costs of an error are so catastrophic that near perfect certainty is required before plausibly beneficial actions are attempted. If the best “scientific” economics can do is say nothing about interpersonal welfare comparison, that is neither evidence for nor evidence against policies which, like all nontrivial policies, benefit some and harm others, including policies of outright redistribution.

Let’s ignore the deep issues with ordinal vs. cardinal utility for the moment (I’ll return to that issue in a moment) and start with the social planner (SP)–remember from my previous post that the social planner is a thought experiment!–who wants to maximize welfare.  The social planner’s objective function, of course, depends on the values we assign to the social planner (as an aside, I have research which, I think, cleverly gets around this problem).  That is, welfare can take any form depending only on the biases of the researcher and what he/she can get away with.   Let’s think about that…

Suppose the SP’s welfare maximization problem takes the basic form W(u1,u2,…,uN)–for simplicity let’s assume that I’ve ordered the population so that wealth is ordered w1>w2>…>wN–and let’s only consider small perturbations in allocation space (e.g. person 1 gives a slice of bread to person N, slightly decreasing u1 and increasing uN).  The question for the moment is whether such a transfer is consistent with welfare maximization.  The answer is that it almost certainly is.

Why?  Because for a small disturbance W(.) is approximately linear–all smooth functions are approximately linear with respect to small disturbances.  So W(.) looks, approximately, like a weighted sum W = a1.u1 + a2.u2 + … + aN.uN around a feasible allocation x = (x1, x2, …, xN), for changes small enough not to affect potential complications like incentives or aggregate production.  I claim that moving that epsilon from x1 to xN must increase W as long as a1 is not too much larger than aN.  This is just because the utilities ui are concave, so a small decrease in the rich person’s allocation doesn’t decrease W much, while a small increase in the poor person’s allocation increases W a lot (to see this, just write the new W as W’ = W – a1.du1 + aN.duN, where du1 is the decrease in u1 and duN is the increase in uN).  Remember that this is in complete generality.
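
As a purely illustrative check–my own toy numbers, not anything in Waldman’s post–take concave (log) utilities and move a small amount of wealth from the richest person to the poorest; any weighted sum W whose weights aren’t too lopsided goes up:

```python
# Toy check of the small-transfer argument with concave (log) utilities.
# Wealth levels and welfare weights are hypothetical, for illustration only.
import math

wealth  = [100.0, 50.0, 10.0]   # w1 > w2 > wN
weights = [1.2, 1.0, 1.0]       # a1 a bit larger than aN, but not too much
eps = 0.5                       # the "slice of bread"

def W(w):
    """Weighted sum of concave utilities -- the local approximation of any welfare function."""
    return sum(a * math.log(x) for a, x in zip(weights, w))

before = W(wealth)
after  = W([wealth[0] - eps, wealth[1], wealth[2] + eps])
print(f"W before transfer: {before:.4f}")
print(f"W after transfer:  {after:.4f}")
# The transfer raises W: the poor person's marginal utility dwarfs the rich person's.
```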

Oh!   I hear you saying… you’re forgetting about the ordinal vs cardinal objection!   Unscientific!

Not quite.   I do have to make some modifications, though.  First, recall that the SP is a thought experiment and it’s perfectly reasonable for a thought experiment to have a telescope directed into the souls of agents–so long as we make our assumptions clear.   Second, I can get around this problem in practice as well.   Instead of u1(x1), u2(x2)…uN(xN), suppose the SP takes as given u1(w(x1; p)), u2(w(x2;p)),…uN(w(xN;p)), where w(x;p) is a function which imputes wealth from the market value, p, of the consumption bundle x.

Now, the only function of utility in the SP’s decision problem is to account for the marginal utility of wealth.  That can be accounted for simply by making W(w1,w2,….,wN) concave in all its arguments.  But then the same argument as I made above with utility goes through!  In other words, as long as we remember to keep track of concavity, wealth is a proxy for utility as far as a social planner is concerned.  Using wealth as a proxy for utility this way gets me around the ordinal/cardinal objection.
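
Spelled out in my own notation–just restating the setup above, nothing new–the small-transfer argument with W defined over wealth is a comparison of marginal social values:

```latex
% Transfer a small amount \epsilon from person 1 (richest) to person N (poorest):
\[
dW \;=\; -\frac{\partial W}{\partial w_1}\,\epsilon \;+\; \frac{\partial W}{\partial w_N}\,\epsilon
\;=\; \left(\frac{\partial W}{\partial w_N} - \frac{\partial W}{\partial w_1}\right)\epsilon .
\]
% If W is concave and treats people symmetrically, so that \partial W / \partial w
% is decreasing in w, then w_1 > w_N implies dW > 0: the transfer raises welfare.
% If W is linear with equal weights, the two terms cancel and dW = 0: indifference.
```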

Now, as soon as I specify the form W takes, I move back out of the realm of science and back into the realm of values, but the point is that there are conclusions that we can draw without any other assumptions.

  1. There is a strong bias towards redistribution from the rich to the poor in any social welfare problem, ordinal utility or no.
  2. How much redistribution, though, is basically never a scientific problem… it depends on the form W takes, which depends on the values assigned to the SP.
  3. The only concave welfare function which does not support redistribution is the linear welfare function: i.e. W = b1.w1 + b2.w2 + …. + bN.wN.   But notice that this is indifferent to distribution!

In effect, this linear welfare function is the one which most conservative economists are implicitly using.  For example, it is literally the only welfare function in which the opening up of a Harberger triangle would be a decisive issue in public policy.  Yet–you might have noticed if you followed my discussion above–the linear social welfare function, even for small changes, can only be a true representation of the SP’s problem if the marginal utility of wealth is constant, when it should be diminishing.

That’s the problem.

Realistic Assumptions and Gödel

March 29, 2014

A good post from Peter Dorman about the dangers of unrealistic assumptions.   This, however, I disagree with:

 I take it as axiomatic that the economic world is far too complex and variegated to be comprehended or forecasted by any single model.  Sometimes one set of factors is paramount, and particular model captures its dynamic, and then another set takes over, and if you continue to follow the first model you’re toast.  There are complicated times when you need a bunch of models all at once to make sense of what’s going on, even when they disagree with each another in certain respects. [emphasis mine]

That right there is the problem.  You can’t take it as an assumption that no finite set of assumptions can describe the world.  You need to do better than that.   I’ve been dying for someone to try and make this argument since this chameleon paper came out.

Setup

Let’s think about this carefully.  Let’s suppose, for simplicity, that the world can be described by the unit interval, [0,1].   For the moment don’t worry too much about what I mean by this, it’ll become clear later.

I want to describe this world–what happens in it, given some initial condition x in [0,1]–and so I develop a model with some set of assumptions, which together imply that the domain of validity of that model is some subset, S, of [0,1] (for simplicity, also assume that S is connected).  That is, since I have made assumptions with my model, not every initial condition can be mapped to a prediction in a well-defined way.

Cantor’s Description of [0,1]

OK.  So here’s my question: given a finite vocabulary to describe the set of initial conditions, can I exactly specify the boundary of S?  The answer is no.

To understand why, consider describing S with the following vocabulary: start with the entire interval [0,1]; if a point is in the left half of the interval, assign it a “0”, otherwise assign it a “1”; repeat this process for each half-interval, appending each new digit to the left of the string you’ve already produced.  Basically, I’m describing [0,1] as a 2-adic number, so the point 0 in [0,1] is written as …000000, which you might read as the instruction “to find point 0, always take the left interval”.
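
Here is a minimal sketch of that encoding–my own illustration of the construction just described, nothing more: each point of [0,1] becomes a left-growing string of bits, one per halving of the interval.

```python
# Sketch of the 2-adic-style encoding: halve the interval repeatedly and
# prepend "0" (point is in the left half) or "1" (right half) to the string.
def encode(x, depth=8):
    """Return the first `depth` digits of the left-growing binary description of x in [0,1]."""
    lo, hi, digits = 0.0, 1.0, ""
    for _ in range(depth):
        mid = (lo + hi) / 2
        if x < mid:                 # left half -> "0"
            digits = "0" + digits
            hi = mid
        else:                       # right half -> "1"
            digits = "1" + digits
            lo = mid
    return digits

for x in (0.0, 0.5, 1.0 / 3.0):
    print(f"x = {x:.4f} -> ...{encode(x)}")
# The point 0 comes out as ...00000000 ("always take the left interval"), while a
# point like 1/3 never settles: pinning it down exactly takes infinitely many digits.
```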

Could you describe the boundaries of S using this method?

Assumptions as p-adic numbers

You might be tempted to look at this set-up and ask yourself why I’m describing [0,1] in such an inefficient way.   The reason is that my description of [0,1] using 2-adic numbers is isomorphic to the use of axioms.

Think of it this way.  An axiom is a string of letters/symbols.   There are a finite number of letters/symbols and I string a finite number of them together to make arbitrarily long statements.  That means that I can assign every possible axiom a natural number.   Let’s do that.

So, suppose I’m using only axiom 2 and no others.  I can write that as the set of all 2-adic numbers with a “1” in the second position; i.e. {…010,…011,…110,…111}.  This is a Cantor-like set, and it is also the domain of validity of assumption 2 (as I have set things up).  There’s no way that a finite set of these things could describe the boundary of S.
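
Continuing the sketch–again with my own hypothetical numbers, just to illustrate the claim–finitely many axioms fix finitely many digits, so they can only carve [0,1] at dyadic points k/2^n; a boundary of S that isn’t dyadic (say 1/3) can be approximated but never hit exactly.

```python
# Finitely many "axioms" fix finitely many digits of the description, which can
# only cut [0,1] at dyadic rationals k / 2^n. A non-dyadic boundary is never reached.
from fractions import Fraction

boundary_of_S = Fraction(1, 3)   # a hypothetical, non-dyadic boundary of the model's domain

for n_digits in (2, 4, 8, 16):
    step = Fraction(1, 2 ** n_digits)
    nearest_cut = round(boundary_of_S / step) * step   # best dyadic cut with n_digits fixed
    error = abs(nearest_cut - boundary_of_S)
    print(f"{n_digits:>2} digits fixed: nearest describable cut = {nearest_cut}, error = {error}")
# The error shrinks but never reaches zero: no finite description lands exactly on 1/3.
```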

What’s the Take Away?

I started off with a model with a nice, well-behaved domain of validity, S.  Then I tried to describe that domain of validity using a finite language and found that the two don’t match.

What does that mean?  I don’t have to assume that no finite set of axioms can describe our economic system, and it’s not just that the economic system is complex (although it undoubtedly is).  Every model is an approximation by its very nature, because our ability to describe that model is an approximation.

This argument isn’t a proof–I’d need to clean things up a bit–but the intuition here is basically the same as Gödel’s incompleteness theorem (as an aside, I use the same basic reasoning as the basis for my moral theory, but that’ll have to wait until I write a post about my philosophy, which I plan to do someday…).  The only real insight is that a scientific model is a kind of Turing machine, and so every model has an approximation problem.

 

Keeping the Macro wars out of Micro

July 22, 2012

Paul Krugman has a post in which he joins Peter Dorman in blaming Micro for Macro’s failures.  Given my last post, you might expect me to be a little miffed.  Ironically, though I often write about things I don’t know much about–the blog is supposed to be “practice” writing for me–this is a topic about which I really do know a great deal.  It involves my thesis.  Yet I don’t really have time to deal with the subject as it deserves.

Also, it’s ironic because I’m on Krugman’s and Dorman’s side on most political issues.

What’s really interesting is that Krugman cites this paper to make  his case.   At the risk of violating PKIAR (“Paul Krugman is always right”), I’m going to have to point out that there are at least three problems with this.

That’s Science

Even if we take this research as Krugman has, it is not a sign of the weakness of micro theory but a sign of strength.  You’re supposed to constantly test and re-test a theory to establish its empirical credibility.  Explanatory power is the key: if the theory works in most circumstances, then it is useful and scientific.  Truth is unimportant–let the philosophers worry about that.  Possible violations, on the other hand–whether they bear fruit or not–lead to newer, better theories or deeper understanding.

The best possible place for a field to be in is to have an empirically successful theory with broad–not necessarily universal!–explanatory power AND known violations.  High-energy physics has been in a bit of a bind for several decades because the Standard Model of particle physics was TOO successful.  It is this ideal point–a successful theory with possible violations–that micro theory now occupies.

As for microfoundations: well, yeah, if you assume that standard micro theory as it is applied in macro models is some kind of Deep-Truth-of-the-Universe, then you’ll find that “micro theory” has led you astray.  Except it hasn’t: you’ve led yourselves astray by taking as Truth a theory which has no claim to the title.

In other fields reductionism is used to build the microfoundations for the theory and not the other way around.

That’s Standard

The Hastings/Shapiro claim that this is a violation of standard utility theory is, needless to say, dubious at best.  I’m a fan of both guys and I could be convinced, but here’s the problem: this consumer behavior (keeping expenditure on a single good constant) is one of the easiest forms of behavior to reconcile with standard–even Econ 101 level–theory.  Constant expenditure is actually a common property of standard demand systems; Cobb-Douglas preferences, for example, imply exactly that.

It turns out that the absence of direct or indirect preference reversals is both necessary and sufficient for maximizing behavior–a property called SARP (the “Strong Axiom of Revealed Preference”).  What Hastings/Shapiro are trying to show is that there may in fact be a violation here, or at least that maintaining SARP would imply an implausibly large income effect.  It wouldn’t be the first claimed violation of SARP, if true, but every attempt so far runs up against a hidden-variable problem–you could say the ceteris is not paribus.
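
For concreteness, here is a minimal sketch of how a revealed-preference check works–the prices and choices are hypothetical, not anything from the Hastings/Shapiro data: bundle i is revealed preferred to bundle j when j was affordable at the prices at which i was chosen, and a preference reversal is a cycle in those revelations.

```python
# Minimal revealed-preference check on hypothetical choice data.
# prices[t] and choices[t] are what the consumer faced and bought in situation t.
import itertools

prices  = [(1.0, 2.0), (2.0, 1.0)]   # hypothetical price vectors for two goods
choices = [(4.0, 1.0), (1.0, 4.0)]   # hypothetical chosen bundles

def cost(p, x):
    return sum(pi * xi for pi, xi in zip(p, x))

# Direct revelation: i R j if bundle j was affordable when bundle i was chosen.
revealed = {(i, j)
            for i, j in itertools.permutations(range(len(choices)), 2)
            if cost(prices[i], choices[j]) <= cost(prices[i], choices[i])}

# A two-observation cycle (i R j and j R i, with distinct bundles) is a preference reversal.
reversal = any((j, i) in revealed for (i, j) in revealed)
print("revealed-preferred pairs:", revealed)
print("preference reversal found:", reversal)
# Here neither alternative bundle was affordable when the other was chosen, so no
# revelation (and no reversal) appears: the data are consistent with maximization.
```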

What Hastings/Shapiro seem, to me, to be doing is embracing the hidden variable.

Mental Accounting

So above I have a (toy) model of what they are thinking.  Our usual notion of what is “affordable” is not correct because it ignores the “cost” of “reoptimizing”.  If there is a shock to prices, then consumers’ habits are thrown off–the old habits are not affordable any longer.  The consumer would like to do better, but she doesn’t know how, exactly–her plans have been upset and it will require effortful thinking to formulate new ones.  What is she to do?

Well, she does as well as she can given her new money budget and her mental budget.  There is a simple point that satisfies both–on just the edge!–which I marked ‘Z’.  It is right at the “kink” where her budget constraints meet.  Getting away from the silly picture, what could we say about the likely properties of ‘Z’ as opposed to other possibilities like ‘X’ and ‘Y’?

For one thing, Z would be at the point where expenditure on gas is unchanged, or as unchanged as possible, keeping other habits constant.  Why?  Because anything else would require her to re-optimize her demand for all other goods, which is a computationally expensive task.  The point ‘Y’, on the other hand, is in effect her setting aside more money than she strictly needs in order to be certain that she can afford the rest of her “all other goods” basket–certainly a possibility, which would likely manifest in her gas expenditure falling.  Finally, if she chose a point like ‘X’, there would be no observable consequences of her mental effort–my argument is that Hastings/Shapiro are unlikely to have eliminated the ‘X’-like possibility.

Given well-behaved utility, she would then have a strong tendency to prefer point ‘Z’.  Hastings/Shapiro are saying that ‘Z’ explains their data better than a pure money budget and a point like ‘X’–in effect, the mental budget “binds” her choice.
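
Here is a minimal sketch of the toy model as I read it–the utility function, prices, income and habit level are all hypothetical: the consumer maximizes utility subject to both the ordinary money budget and a “mental budget” that caps gas spending at its habitual level, and the solution lands at the kink Z where observed gas expenditure is unchanged.

```python
# Toy sketch of the mental-accounting story. After a gas price shock, the consumer
# maximizes utility subject to the money budget AND a mental budget that caps gas
# spending at its old, habitual level. All numbers and the utility form are hypothetical.

m = 100.0                   # income
p_old, p_new = 2.0, 3.0     # price of gas before and after the shock

def u(gas, other):
    # Poor substitutability on purpose: the *unconstrained* response to the price
    # rise would be to spend more on gas, so the mental budget actually binds.
    return -1.0 / gas - 1.0 / other

def best_gas_spend(price, spend_cap=None):
    """Grid-search the optimal expenditure on gas, optionally capped by a mental budget."""
    best_value, best_spend = None, None
    for k in range(1, int(m * 10)):
        s = k / 10.0
        if s >= m or (spend_cap is not None and s > spend_cap):
            continue
        value = u(s / price, m - s)
        if best_value is None or value > best_value:
            best_value, best_spend = value, s
    return best_spend

habit = best_gas_spend(p_old)                      # gas spending at the old price
unconstrained = best_gas_spend(p_new)              # money budget only
at_kink = best_gas_spend(p_new, spend_cap=habit)   # money budget + mental budget
print(f"habitual gas spending:           {habit:.1f}")
print(f"after shock, money budget only:  {unconstrained:.1f}")
print(f"after shock, with mental budget: {at_kink:.1f}")
# The mental budget binds exactly at the kink Z: observed gas spending stays at its
# old level, which is the kind of pattern Hastings/Shapiro report.
```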

 

You wouldn’t like me when I’m angry blogging: Peter Dorman Edition

July 21, 2012

Following another long microfoundations-in-macro-are-crap debate (see here, here and here), Peter Dorman says some things that really annoy me–he criticizes not just macro, but micro.  As a micro theorist working on some of these issues, I have to say not only is Peter wrong on some of this, but he’s wrong in a way that seriously disrespects the ongoing state-of-the-art work that people are actually doing.  I’m going to deal with his post in reverse order, from most in agreement to least.

3. Path dependence.  Microfoundations means general equilibrium theory, but the flavor it uses is from the mid-1950s.  The Sonnenschein-Debreu-Mantel demonstration (update to the 1970s)  that initial conditions and out-of-equilibrium trades alter the equilibrium itself (they assume away problem #2) has turned GET upside down.

Notice that I haven’t mentioned the standard heterodox criticisms of representative agents and ergodicity.  You can add those if you want.

This is certainly true, although it is not as though people don’t think about this sort of thing: in fact, I wrote a post not too long ago which touched on the topic.  “Equilibrium” requires a disequilibrium–but equilibrating–process to maintain itself: the informational requirements of equilibrium are enormous, even in a simple 2-by-2 game, and so agents will need to observe “off-path punishments” to understand the “proper” equilibrium course of action.  As I like to say, signals require realizations.

But again, people think about this sort of thing.  There’s a whole literature about self-confirming equilibria and its application to macro–for example, things like path dependence or mutually inconsistent beliefs.  It’s not the fault of the micro people that the macro people ignore that work.  Peter would know this if he gave the micro people credit for not being idiots and did a quick search on Google Scholar.

Next up:

2. Mono-equilibrium assumptions.  There are no interaction effects to generate multiple equilibria in the microfoundations macro theorists use.  Every individual, firm and product is an isolated atom, floating uninterrupted through space until it bumps into another such atom in the marketplace.  Social psychology, ecology, nonconvex production and consumption spaces?  Forget about it.  In evolutionary biology, by contrast, fitness surfaces are assumed nonconvex from the get-go; it’s central to the discipline.  Failure to recognize the interactive character of economic life leads economists to ask fundamentally wrong questions, like “what’s the equilibrium?” and “what’s the optimum?”  If this isn’t obvious to you already, you can get a longer version of the argument here.  (Note for those who are wondering: no, nonconvexity stemming from interaction effects has nothing to do with market failure.  The existence of externalities is neither necessary  nor sufficient for these effects.  See for yourself.)

This isn’t even an assumption the macroeconomists are making anymore.  Peter can be forgiven for his point (3), since the macro people do seem to ignore it, but there is less of an excuse here.  Hasn’t he read anything from Farmer or Basu or DeGrauwe?  All of them deal with multiple equilibria in macro.  There’s plenty to criticize there, but not considering multiple equilibria is not among those criticisms.  Hell, this is considered a hot topic in macro right now.

Finally:

1. Utility theory.  Andrew Gelman calls this “folk psychology”; that may be generous.  It is rife with anomalies (see “behavioral economics”), and, most important, it is oblivious to the last several decades of work in psychology, evolutionary biology, neuropsychology, organization theory—all the disciplines where people study behavior in a scientific way.

This is a major peeve of mine.   I already have a couple posts about it.   This is just about as wrong as you can possibly be.   To overturn utility theory, you need one thing and only one thing can possibly do it: preference reversals.   There has never been a preference reversal observed ceteris paribus.   Trust me on that.

The frontier in behavioral theory is not “utility theory doesn’t work, let’s find a replacement”.  Rather, it’s clear at this point that the apparent violations of utility theory, i.e. preference reversals, are not on the “utility” side of the problem but on the budget side.  There are hidden variables that act as constraints.

Utility seems wrong to people–especially people who haven’t thought too deeply about it–but when it comes down to it, utility is a trivial, almost circular concept.  It is not that people have a utility function in their heads which they maximize; rather, it is that if people’s behavior can be predicted, then you can always write down a utility function once you’ve properly accounted for constraints.  It is “as if” people have a utility function, which is enough to make it a useful formulation–and then it’s just up to the psychologists and the behaviorists to tell us something about the form it takes (i.e. not time-separable, not purely hedonistic, etc.).

Why does this annoy me so much?  Well, it’s true that, as an almost circular concept, “utility theory” is almost impossible to disprove… meaning that you shouldn’t really call it a “theory” at all.  Not worth defending, right?  Well, what rubs me the wrong way is the “folk psychology” argument.  Utility is at least a marginally successful attempt to do economics without an appeal to psychology–and should be understood as an attempt to sidestep psychology.  We are interested in the problem of scarcity, not directly in behavior.

Behaviorism has taught us that we need the psychologists anyway, but  that doesn’t mean that we learn nothing from utility; we still learn quite a bit just comparing different candidate forms of utility to each other.

On a broader note, heterodox types like Peter always seem too quick to reject out of hand the real contributions of the standard theory.  Science is not about truth but about observable fact, and on that score the standard theory works surprisingly well–most of it, at least.  Even the parts that are no longer considered at the forefront–like expected utility, which has now been replaced by prospect theory–work far more often than they don’t.  If you want to contribute, then you have to take as your starting point what the theory does right before you reject the whole menu willy-nilly.

I can order a Large, an Extra Large and a Grande, but not a Small

Adam Ozimek has issued a challenge to paternalists everywhere.  If paternalists support Bloomberg’s attempt to ban oversized sodas, where does the slippery slope of intervention end?

OK, I’ll bite. Now I don’t consider myself to be a paternalist (although I’m sure that Adam would see me as such), and to be frank, I find the entire “paternalism” debate to be dumb and completely self-serving for libertarians.  Now, on to the argument (in list form)!

First, let me say that as far as specific policy is concerned, I’m ambivalent at best about the proposed ban.  I would prefer a system of graduated taxes on large sugary drinks–say, proportional to the square of the total sugar content and paid by the firm, so that the tax is incorporated into the posted price the customer sees.
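
As a back-of-the-envelope sketch of that schedule–the rate constant and the sugar amounts are made-up numbers, purely for illustration–a tax proportional to the square of sugar content means that doubling the sugar quadruples the tax:

```python
# Hypothetical graduated sugar tax: proportional to the square of total sugar content.
TAX_RATE = 0.0005   # dollars per gram-squared; a made-up constant for illustration

def sugar_tax(sugar_grams):
    return TAX_RATE * sugar_grams ** 2

for grams in (20, 40, 80):   # e.g. a small, a large, and a very large drink
    print(f"{grams:>3} g of sugar -> tax ${sugar_tax(grams):.2f}")
# 20 g -> $0.20, 40 g -> $0.80, 80 g -> $3.20: doubling the sugar quadruples the tax,
# which is what tilts the schedule against the largest drinks.
```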

  1. Adam makes a lot of the slippery slope of government intervention, but what about the slippery slope of the market without intervention?  What do I mean?  Well, read the title of the post.  Seriously, many if not all places I go these days don’t even offer a “small” size anymore.  Why?  Doesn’t that just make the large the new small?  Well, yes… and no.  I think what we are seeing is the interaction of two different effects.  First, (likely for evolutionary reasons) humans have a bias for larger portion sizes–often larger than maximizing ex-post satisfaction would imply (i.e. you feel over-full).  Should government intervene to prevent that?  Maybe yes, maybe no.  It depends on whether there is a compelling public policy issue involved (in this case there is).  The other effect is price discrimination.  Firms can extract a small amount of extra surplus from larger portion sizes.  Should government intervene to prevent that?  Yes–not always, but often it should.  The two effects interact in a nasty way: large portions lead to higher demand (because we keep wanting to over-stuff ourselves), which leads to more price discrimination and larger portions; that interaction strengthens the case for intervention.
  2. Is there a compelling social rationale?   Yes, there is.   You may be tempted, like Adam, to believe that obesity is a personal choice and nothing more.   That’s not true, though.   The point has been made many times by those with more expertise than I, but it can’t be reiterated enough: if you are obese, you increase healthcare costs to the rest of us.   Now, when you buy a car, you also increase the (car-buying) costs for the rest of us, but when you buy a car, you are buying a car–you are PAYING the cost that you are inflicting on the rest of society.   When you get fat, to be frank, you are NOT paying the full social cost of that choice.   This is not quite a classic externality–the problem is that society would (likely) not bear the policy choice of charging the obese more for healthcare.   In an abstract sense, it would be more efficient to just charge these people for the harm they cause, but that’s not the world we live in.   As a second best solution, then, society just… encourages… thinness.   That’s not an ideal solution, but we’re in the realm of the second best.
  3. OK, so what about that slippery slope of intervention?  Hogwash.  This is why you sound like an idiot (which Adam is not) when you make slippery slope arguments.  Why?  There are several reasons.  I’ll return to this, but the short of it is that (I can’t reiterate this enough) slippery slope arguments, as a class, are fallacious.  I also think they’re self-serving for libertarian types, since any “intervention” can be labeled a “slippery slope” and therefore “bad” on only the shoddiest of reasoning.  So, we should not intervene, right?  Well, no, but I’ll get to that.  Even when a “slippery slope” isn’t just sloppy analysis, there is never, ever a logical reason to abandon an intervention for the sole reason that there is a “slippery slope”.  As I say, I’ll return to this; for now I’m just making an assertion.
  4. Where to stop with the interventions?   How much is too much?   That’s easy.   You stop intervening when the political system says to stop.   That’s what the political system is there for, so use it.   Not every problem can be solved in the market.

So, that’s my basic response.  Bloomberg’s is not the best response (I’m guessing), but I’m willing to see how it goes if no one tries something more promising (like using some kind of Pigouvian tax).  And I think there’s nothing inherently wrong with at least trying to address obesity with public policy.

I would add that I find Adam’s position at least as “paternalistic” as mine.  After all, when he advocates for “free markets” (whatever that means exactly… but it’s amazing how often “free” means “what is best for rich people”), he’s implicitly saying that he knows what is best for the rest of us.  What I really want, for example, is to buy food that I know is safe–and that may not be possible if we deregulate the food industry.  My buying unsafe food does not mean, ipso facto, that I prefer unsafe food if I do not have an alternative.

I would argue, instead, that “paternalism” is just another word for policy advocacy.  I am an advocate.  Adam is an advocate.  So we are both “paternalists”, though neither of us would use that word to describe ourselves.  We advocate for different things, each of which we believe is in the best interests of the rest of the nation.  That’s the way it’s supposed to work.  By calling the people he doesn’t agree with names (i.e. “paternalistic”), he is debasing the conversation–attempting a kind of preemptive ad hominem.

At any rate, I promised I would detail why the slippery slope is always a dumb argument.  First, a point of terminology: when discussing the “slippery slope” I logically separate the problem into the “slope” (i.e. the proposed policy change) and the “slip” (the asserted tendency for the proposed change to propagate itself).  Logically speaking, you need both the slip and the slope to make a “slippery slope”, although in practice most people ignore the latter.

  1. Gradualism:  Even if the “slippery slope” analysis of a policy is sound, we should never use this fact, in and of itself, as a reason to reject that policy.  Nearly all good policy is developed through a process of gradual improvement.  You tweak existing institutions; you do not create institutions whole from out of the aether.  To reject a policy because it admits further reform would eliminate this option.  Instead, all policy changes would need to be revolutionary and, more important, those changes would need to be right the first time–there is no prospect of feedback.  This is an impossible bar to clear; for libertarians, that’s the point.
  2. Democracy:  Suppose a policy is implemented and, once it is, it becomes generally accepted by the public.  The public wishes for more improvements.  This, incidentally, summarizes the example that Adam cites.  I say “the policy admits reform”; Adam says “slippery slope!”  Here’s the thing: the slip in this slope is the fact that people like the policy and want more.  Adam says that the existence of this slip is reason to avoid the slope; doesn’t anybody else notice the anti-democratic implications of that?  If Adam concedes the slip, then he is conceding that the policy will be popular–policies propagate through the political, i.e. democratic, process, not through asexual reproduction.
  3. Experimentation:  Not enough of a reason to make policy changes on its own, but combined with points (1) and (2) it’s reasonable, I think, to imagine that policy improvements are possible and that something is learned from each reform.  If we are gradual enough about continued reforms, there is no reason to think that the costs (i.e. reforms that don’t quite work) will outweigh the potential benefits, and because we learn something from each reform, it is reasonable to think that society should on average be willing to attempt reforms which it expects on average to fail (what we learn has option value).  What Adam calls a slippery slope, I call exploring the policy parameter space–and you can never be sure about the peaks and valleys of that parameter space until you explore them.
  4. Induction:  So far, I’ve argued practical issues.  Now I want to argue pure logic.  The basic structure of a slippery slope (properly used) is this: A->B->B’->B”->…->C, through an induction-like process where each step is shown to be inevitable.  As used in the real world, however, the invocation of a slippery slope has the structure “A->B, and so C” (where C is presumably bad).  This is pure non sequitur.  The sequence of events between B and C, in short, is missing.  You, Mr. Slippery Slope, need to spell out not only the slope (B and C) but the slip (why the sequence leading to C must continue all the way to C).  The slope may be wrong (A leads to outcome D, not C), and the slip may terminate long before C.

I’m sick of hearing about slippery slopes.  Please stop, or at least do your due diligence and show your work through its logical steps.