To Microfound or Not to Microfound?

For the past week or so, there’s been a lively internet debate about the foundations of macroeconomics: is it necessary for models of the macroeconomy to be rooted in the micro-behavior of individual agents (the approach of RBC and DSGE models, including New Keynesian models)?  These are the “micro-founded” models.  Or would it be more useful to base macroeconomics on the historical behavior of large-scale macroeconomic aggregates (as in IS-LM)?

My beef with all this is that some of these macroeconomists have started questioning the foundations of microeconomics. This is a big mistake.

It started with Krugman, but others joined in.   The argument is that standard micro analysis assumes “rational” agents, who first calculate their utility function and then maximize it to arrive at their “best” option.   Sounds pretty ridiculous, right?  Well, wrong.   Not only does it make more sense than it sounds, I argue that utility theory is the most robust possible description of human behavior.

My claim: if choice is predictable, then there is a utility function whose maximum would predict it.   What’s more, this is a completely trivial (almost tautological) statement.

That Rationality Assumption

What does rationality mean in an economic context?   Different researchers use slightly different definitions, but the most expansive and widely used definition of rational choice comes straight out of Mas-Colell (“Microeconomic Theory”):   individual preferences over “bundles” (where a “bundle” can be any assortment of “things” or actions or combinations of the two) are rational iff

  1. They are complete; that is, any two bundles can be compared: the agent is either indifferent between the two or prefers exactly one.   Now, think about what it means if completeness is violated.   If the agent neither prefers one of the two bundles nor is indifferent between them, then I can’t predict which bundle they will choose, even in principle.
  2. They are transitive; that is, if bundle x is preferred to y and y is preferred to z, then x must be preferred to z.   Again, think about what it would mean if this were violated: suppose instead that z is preferred to x.   An agent holding z will trade z for y, then trade y for x; but since she prefers z to x, she’ll then trade x back for z, and so on forever.   In other words, her behavior could not be predicted even in theory… will she stop trading when she holds x, y, or z?
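The two conditions above are mechanical enough to check directly on a finite set of bundles. Here is a minimal sketch (all preference data is hypothetical and made up for illustration): a ranking induced by a utility assignment passes both tests, while a cyclic “money pump” preference passes completeness but fails transitivity.

```python
from itertools import combinations, permutations

def is_complete(bundles, weakly_prefers):
    """Completeness: every pair is comparable in at least one direction
    (x weakly preferred to y, or y weakly preferred to x)."""
    return all(weakly_prefers(x, y) or weakly_prefers(y, x)
               for x, y in combinations(bundles, 2))

def is_transitive(bundles, weakly_prefers):
    """Transitivity: whenever x >= y and y >= z, we must have x >= z."""
    return all(weakly_prefers(x, z)
               for x, y, z in permutations(bundles, 3)
               if weakly_prefers(x, y) and weakly_prefers(y, z))

bundles = ["x", "y", "z"]

# Rational preferences, induced by a hypothetical utility assignment.
u = {"x": 3, "y": 2, "z": 1}
rational = lambda a, b: u[a] >= u[b]

# A cyclic preference: x beats y, y beats z, z beats x (the money pump).
cycle = {("x", "y"), ("y", "z"), ("z", "x")}
cyclic = lambda a, b: (a, b) in cycle or a == b

print(is_complete(bundles, rational), is_transitive(bundles, rational))  # True True
print(is_complete(bundles, cyclic), is_transitive(bundles, cyclic))      # True False
```

The cyclic relation is complete (every pair is ranked one way or the other) yet no utility function could generate it, which is exactly why the trading never stops.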

This is what economists mean when we say someone is “rational”; but these are also (at least some of) the conditions you would need to describe agents who are predictable.   Assuming “rationality” in particular does not mean that you are doing choice theory for cyborgs or for Spock.   If someone likes to pee on their money before lighting it on fire, then that person is entirely rational.   If another person didn’t pee on money and light it on fire before today, but does so today regardless (and for no reason that I can identify), then that person may not be rational.

And what “rationality” buys me is that rational choices can be represented by a utility function.

This is a Totally Trivial Observation

Understood in this way, utility theory reveals itself to be not just nigh-universal, but also totally trivial in the sense that it is entirely descriptive.

It’s not that everyone is a utility maximizer, in other words; it’s that every pattern of choice can be represented by a utility function.   If I can predict your actions (even if only on average), then there is a utility function (not necessarily unique) whose maximum will return those actions, and if a utility function represents your behavior then I can predict your actions by maximizing it.   This is for the same reason that, for any x on the real line, I can find a function which is maximized at x.
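For a finite set of bundles, the representation claim above can be made concrete in a few lines: given any complete, transitive ranking revealed by choices, simply number the bundles from worst to best and that numbering *is* a utility function whose maximum reproduces the choices. The ranking here is hypothetical.

```python
# Hypothetical observed ranking, worst to best, as revealed by choices.
observed_ranking = ["z", "y", "x"]

# Construct a utility function by numbering bundles in rank order.
utility = {bundle: rank for rank, bundle in enumerate(observed_ranking)}

def choose(menu):
    """Predict the choice from any menu by maximizing the constructed utility."""
    return max(menu, key=utility.__getitem__)

print(choose(["x", "y"]))       # x
print(choose(["y", "z"]))       # y
print(choose(["x", "y", "z"]))  # x
```

This is the sense in which the claim is nearly tautological: the construction works for *any* ranking satisfying the two rationality conditions, with no psychology anywhere in sight.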

Of course, the problem is the “there exists”… I don’t know what your utility function is unless you tell me.    For that matter, you don’t know what your own utility function is either.   Utility theory is not “pop-psychology”; it is the absence of psychology.

This is both a strength and a weakness.   The weakness is that writing down a utility function is, by itself, trivial and meaningless.   The strength is that whatever behavior I’m trying to model, I know utility theory can be applied.   It is almost entirely robust, and I don’t really need to understand people to use it, although if I don’t understand the people I model, my predictions may not mean much (it depends on how sensitive the answer is to the choice of utility function).

Uniquely Bad Critique

Of all the fishy things economists do, utility theory (and by extension the rationality assumption) is by far the worst thing to criticize.   Any successful theory of behavior will have a utility representation (although the utility representation may not be the most useful approach to solving anything).

On the other hand, think of all the other assumptions economists (especially macroeconomists) routinely make, for example:

  1. Independence of Complex Gambles; well known to be violated, this is the basis of expected utility theory.
  2. Time Consistency; Also well-known to be violated, this is how we typically deal with choice over time.
  3. Self-Interested Choice; Ditto.  I like to refer to this one as the “greed motive”, to keep it clear in my mind what is being assumed.

These examples have at least two things in common; they imply specific forms for the utility representation and they fail empirical tests.   Yet, it’s trivially simple to write utility functions which violate these assumptions, so (trivially) I can still use utility functions representing behavior which violates them.

Back to Microfoundations

The takeaway, IMHO, is that utility maximization in itself is not the problem, except that it gives the researcher a bit of a false sense of precision.   Many utility functions represent the same behavior, but the subset of utility functions representing a given behavior is a sparse set in the function space.
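One source of that false precision is ordinal invariance: any strictly increasing transformation of a utility function predicts exactly the same choices, so behavior alone never pins down “the” utility function. A minimal sketch, with a made-up utility assignment:

```python
import math

# A hypothetical utility assignment and a strictly increasing transform of it.
u = {"x": 3.0, "y": 2.0, "z": 1.0}
v = {bundle: math.exp(10 * val) for bundle, val in u.items()}

# Both functions select the same bundle from every menu.
menus = [["x", "y"], ["y", "z"], ["x", "z"], ["x", "y", "z"]]
for menu in menus:
    assert max(menu, key=u.get) == max(menu, key=v.get)
print("u and v predict identical choices")
```

The numbers 3.0 and e^30 look wildly different, but as descriptions of choice they carry identical information.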

I suppose my view of microfoundations is closest to Richard Serlin’s view (here and here); microfounded models and aggregate models are not substitutes in a well-planned research portfolio–they are complements.   The world is too complex to derive from first principles, even with the most powerful supercomputers at your disposal.   This is why chemists and biologists don’t try to start from string theory when doing research, and we all know it, frankly.   On the other hand, in any science, microfoundations are good for at least a few things:

  1. Identifying possible unstable situations in your aggregate models.   The classic example here would be the stability of the Phillips curve (the historical relationship between employment and inflation).
  2. Uncovering new aggregate behaviors.   I can’t think of a good example in economics, but in physics a good example would be superfluidity.
  3. Doing something that sounds impressive and rigorous, rather than “ad hoc”.

#3, though, I tend to think is the main reason for the dominance of microfounded models in journals.   After all, when you get right down to it, microfounded models are no less ad hoc than aggregate models.   As Serlin rightly points out, you need to make all sorts of simplifying assumptions to make one of these models tractable–even assuming an economy with a single agent (who has time-separable and time-consistent preferences and is an expected utility maximizer…’nuff said).

So let’s try to use the two approaches in concert, rather than pretending that there is some sort of competition between aggregate and microfounded models.   A physicist learns to use thermodynamics (aggregate) and statistical mechanics (microfounded) together.


  1. nemi
    June 26, 2012 at 11:18 pm

    “if choice is predictable, then there is a utility function whose maximum would predict it.”

    First, well, choice isn’t predictable in any strong sense. Even in experiments where people face something as limited as only ten goods (and are faced with real incentives) they tend to commit a lot of rationality violations. There is a rather big literature on cognitive biases – see e.g. http://en.wikipedia.org/wiki/Cognitive_biases – so I think it is fair to say that utility maximization is a pretty crude approximation.

    Second, as you say, even if it were true – you could not write it down (not even your own). You could, however, find variables that affect people’s decisions on average – but that is just a function showing some correlation between an increase in a variable and people’s decisions on average. To take that function, call it a representative individual, and say that you are calculating some utility maximization is a big approximation (if even that).

    Finally, isn’t the biggest complaint that “micro-founded” models are without microfoundations, unless we are talking about a Robinson Crusoe economy?
    Those who complain about “micro-founded” models seldom complain about theoretical models with a rich ecology of heterogeneous agents – as far as I can see they applaud these models – even though they might find limited applied use for them.

    • BSEconomist
      June 27, 2012 at 11:27 am

      “choice isn’t predictable in any strong sense.”–absolutely not! I should be more careful to say ‘predictable on average’ which is not the same thing as you point out. So you’re right; I’m just being a little sloppy in my writing, though. That’s really important and one of the themes that I would like to push; for example, in game theory many equilibria are not robust to ‘mistakes’, the fact that people are only predictable on average means that many equilibria (even ignoring out-of-equilibrium dynamics that might be needed to enforce “equilibrium”) will not be observed in practice–an observation which kills representative agent models in theory if not necessarily in practice. This pushes us to concepts like “trembling hand perfection” or “cognitive hierarchies” and many more ideas. This sort of thing is actually what my research is about.

      “even if it where true – you could not write it down”

      This is true as well. I wouldn’t be too discouraged by this fact, though. Science is about description, categorization and explanation. For each of these three, utility is useful even when it is not completely “true” in any philosophical sense. What matters is the economy of our description–how accurate or precise it can be given its complexity. You want to maximize explanatory power for a given level of complexity. For example, I deal with bounded rationality–there is no phenomenon which can be described by bounded rationality that couldn’t have been understood by the more traditional notion of economic rationality. But bounded rationality is simpler than those more traditional models for explaining certain phenomena. Another example is the expected utility model–a specific form for the utility function under conditions of uncertainty–which we know doesn’t “work” in every circumstance, but it usually works well enough to be useful most of the time.

      “Isn’t the biggest complaint that “micro-founded” models are without micro foundations”

      I would agree with this. Of course, the majority of “micro” also lacks “microfoundations”. “Pure” reductionism really never ends; in physics, say, you go from objects to atoms to subatomic particles and then force particles and then (many think) strings… and then what?–I don’t know exactly, but on the day that everyone were to accept strings they would start to ask what precisely they are made of.

