
You wouldn’t like me when I’m angry: Andrew Gelman edition

First yesterday, now today… if I keep this up maybe I’ll get the hang of this blogging gig after all. I really should be working–I’m supposed to meet my advisor in a few hours–but this morning, via Mark Thoma, I see that Andrew Gelman has taken the “microfoundations might be a problem” position and concluded that “microeconomics is the problem”. I don’t know if I’m really ready for an audience, but now my inner hulk is struggling to get out. I think I might have to start a fight.

I think this summarizes his view:

Bruni and Sugden explain it all very clearly. To put it another way, basing economics on a science of rational choice, independent of psychology, makes about as much sense as basing chemistry on a science of chemical bonding, independent of physics.

He then goes on to argue that the “rational choice” position is some kind of conservative plot. I don’t want to go into that, because when it comes to microfoundations, as opposed to utility theory, I would actually agree (not so much that it is a plot, exactly, just that microfoundations are a convenient argument for conservatives, a point made here also). So if we limit ourselves to the discussion of microfoundations, which prompted this discussion of decision theory more broadly, then we would be in agreement (more or less).

Choices

The problem I have is his attack on decision theory as it currently stands. As I tried to explain yesterday (not that anyone listens to me), that “rational choice” assumption is really misnamed, and it is this misnaming more than anything else which leads to confusion. The assumptions which underlie utility theory would be better described as “predictability”. If I can predict your actions, then (trivially) I can write down a utility function (it won’t be unique) whose constrained maximum is precisely at that point.

Maybe it would help to give an example. Suppose I can predict your actions. You choose x1 in state s1, x2 in state s2, and so on. This is entirely general: if you tell me what state it is, I’ll tell you what you will do in that state before you do it. In general, though, x1 (or x2, etc.) may be complex objects, so it would be nice if I could just describe them with numbers. Of course, this is the utility function: there is a space of possible choices, call it X, and in each state s of the set of all states, S, some of those options are allowed (call them X(s)).
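To make the notation concrete, here is a hypothetical toy dataset of exactly this form (the states, options, and choices are invented purely for illustration; any finite example would do):

```python
# Hypothetical predicted-choice data (invented for illustration).
# For each state s we know the considered set X(s) and the prediction
# of what the agent will choose from it.
observed = {
    "s1": {"options": {"apple", "banana", "cash"}, "choice": "cash"},
    "s2": {"options": {"banana", "cash"}, "choice": "cash"},
    "s3": {"options": {"apple", "banana"}, "choice": "apple"},
}
```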

Attention

This is slightly unusual notation: traditionally I would call X(s) the “budget set” and I might lump the budget set together with the notion of what is affordable. I’m not going to do that; instead I’ll let X(s) be any subset of X. Given this, I claim that I can find a U(.) (actually any number of possible U’s) such that x1 = argmax U(x) s.t. x \in X(s1), x2 = argmax U(x) s.t. x \in X(s2), and so on. I do this for each s simply by putting the biggest number on the actual choice, x, in the set X(s). If x is “peeing on money and then lighting it on fire”, that is totally fine. Normally I would need the weak axiom of revealed preference for this to work, but suppose I relax my assumptions about the “budget set”: maybe it doesn’t represent a budget at all, but includes informational or computational limitations as well. Then I can always find this U(.), at the cost that I might have to put some conditions on the X(s)’s. This is the basic idea of one of my favorite papers.
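Here is a minimal sketch of that construction in code, using the toy data above and assuming a finite option space (this illustrates the argmax-assignment idea, not anyone’s published algorithm): treat each observed choice as revealed-preferred to everything else in its X(s); if that relation has no cycles (roughly, if SARP holds), a topological order hands you the utility numbers.

```python
from graphlib import TopologicalSorter, CycleError  # Python 3.9+

def rationalize(observed):
    """Try to build a U(.) whose max over each X(s) is the observed choice.

    Each choice is treated as strictly revealed-preferred to every unchosen
    option in its set. If that relation is acyclic (roughly, SARP), any
    topological order yields valid utility numbers; a cycle means no single
    U(.) works for these X(s)'s as given.
    """
    beats = {}      # beats[x] = options that x was chosen over
    options = set()
    for data in observed.values():
        x = data["choice"]
        options |= data["options"]
        beats.setdefault(x, set()).update(data["options"] - {x})
    # graphlib wants node -> predecessors; everything x beats comes before
    # x, so appearing later in the order means "more preferred".
    graph = {x: beats.get(x, set()) for x in options}
    try:
        order = list(TopologicalSorter(graph).static_order())
    except CycleError:
        return None  # can't be rationalized with these X(s)'s
    return {x: rank for rank, x in enumerate(order)}

U = rationalize(observed)
# Toy data: banana gets the smallest number, cash the biggest, and each
# observed choice is the argmax of U over its X(s). Note the construction
# never asks whether choices are sensible, only whether they're consistent.
```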

If this is what you’re doing, then U(.) isn’t really doing the hard work anymore. But more importantly, this construction helps to illuminate the point that U(.) was never really doing the heavy lifting in the theory: the budget set is doing most of the work. Notice that I didn’t even reference what you might prefer, but based my argument on a foundation of pure predictable choice. I can observe choice, but I can’t observe preference and I can’t observe attention (the X(s)’s). However, I can at least partially deconvolve preferences and attention from the choices that are actually made.
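To see the deconvolution problem in miniature (a brute-force sketch of the general idea only, emphatically not the identification strategy of the paper linked above): take choices that cycle over the full sets, and search for smaller attention sets, each still containing its observed choice, under which some U(.) exists.

```python
from itertools import combinations

def subsets_containing(full, choice):
    """Yield all subsets of `full` that contain `choice`, smallest first."""
    rest = sorted(full - {choice})
    for r in range(len(rest) + 1):
        for combo in combinations(rest, r):
            yield set(combo) | {choice}

def rationalize_with_attention(observed):
    """Search for attention sets inside the full sets that admit a U(.).

    Exhaustive search, only sane for toy-sized data; reuses rationalize()
    from the sketch above. Returns (U, attention_sets) or None.
    """
    states = list(observed)

    def search(i, shrunk):
        if i == len(states):
            U = rationalize(shrunk)
            return (U, dict(shrunk)) if U is not None else None
        s = states[i]
        for Xs in subsets_containing(observed[s]["options"],
                                     observed[s]["choice"]):
            shrunk[s] = {"options": Xs, "choice": observed[s]["choice"]}
            found = search(i + 1, shrunk)
            if found:
                return found
        return None

    return search(0, {})

# Choices that cycle over the full sets: x over y, y over z, z over x.
cyclic = {
    "s1": {"options": {"x", "y"}, "choice": "x"},
    "s2": {"options": {"y", "z"}, "choice": "y"},
    "s3": {"options": {"x", "z"}, "choice": "z"},
}
print(rationalize_with_attention(cyclic))
# Succeeds immediately with singleton attention sets: the agent "never saw"
# the unchosen options, so any U(.) works. That degenerate answer is exactly
# why unrestricted X(s)'s make U(.) vacuous, and why conditions on the
# X(s)'s are needed before choices tell you anything about preferences.
```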

Conclusion

The point is that “utility theory” is more a descriptive formulation than it is a theory at all. It is not necessarily the most useful description… there may be more useful ways to break down the problem of choice, but any behavior can be described in this way. “Rational” as it is usually meant (by non-economists, or apparently some economists) doesn’t enter into the discussion at all. This is not a theory of Spock.

This is also not a description in competition with behavioral approaches, psychological research, or even (especially) neurological research. Utility theory is complementary to all of these. You need to know what form U(.) should take, and you need to know how attention changes, in order to make utility theory work at all as a predictor of behavior.

Call me a micro snob if you want, but the problem is not in my field… we’ve been doing experiments (experiments!!! not the historical studies you macro and poli-sci people do) to sort this out for literally decades now. Don’t blame us if you misuse and misunderstand what we’re doing.

  1. March 19, 2012 at 6:09 pm

    Hi BS Economist,

    There’s a part of this that seems a bit strange to me, but I’m (admittedly) somewhat out of my area, so perhaps we can work through this together. You write,

    “If I can predict your actions, then (trivially) I can write down a (it won’t be unique) utility function whose constrained maximum is precisely at that point.”

    I assume here you’re referencing something quite close to the Von Neumann/Morgenstern representation theorem. As a more behaviorally-oriented grad student, I’m inclined to think that behavior is (at least in some sense) predictable, even if the VN/M axioms aren’t quite right.

    Take (for example) the asymmetric dominance effect discussed by Huber, Payne, and Puto– this demonstrates the (predictable) violation of the independence of irrelevant alternatives axiom.

    In my eyes, representation of choice using appeal to utilities usually requires an assumption that we know doesn’t always hold. This, though, does not mean that behavior is entirely chaotic and unpredictable– it just means we need to build models that begin with something other than unwavering commitment to utility theory, at least as it is developed by fellows like VN/M and Savage.

    Please do let me know what you think, though– It’s great to find other burgeoning researchers interested in choice!

    Mark

    • BSEconomist
      March 19, 2012 at 7:55 pm

      I actually agree with you, more or less. The first thing I would say is that “rationality” is not precisely “predictability”, just that it’s much closer than most people think. I’m actually working on something which most certainly fails several of the usual axioms, but which I would nevertheless call predictable.

      At the same time, IIA is not a necessary axiom for decision theory, although it is certainly sufficient. I haven’t read the Huber, Payne and Puto paper, but there are many known violations of IIA, and IIA is necessary for the foundations of expected utility theory and probably some other things. I use the broadest meaning of rationality here, though: preferences that are complete and transitive (see Mas-Colell).

      Lastly, in this post I’m reversing the problem, as would be done in choice theory a la Samuelson. Given a choice correspondence, can I write down preferences which represent those choices and are still rational? The answer is generally “yes” only so long as the strong axiom of revealed preference (SARP) holds, though the weak axiom is almost good enough (necessary, but not sufficient). But SARP is exactly the condition you need to guarantee that, given any budget set, you can assign the highest number to the choice that is actually made, and do so consistently across all choices. Now, there are experimental violations of WARP and SARP, but additional restrictions on the budget set (such as limited attention or cognitive limitations) can always get you around those issues. In theory. That’s what I was referring to.

      At the same time, in 90% of problems it’s enough to have a marginal rate of substitution (MRS) which represents the trade-offs individuals will generally accept, plus knowledge of the budget set. As a behaviorist, I can think of violations either in terms of an MRS which is not stable, or in terms of decision makers responding to budget sets other than what was assumed. As a theorist, I would say that either approach should work, so throwing out utility theory seems to me likely to be counterproductive.

  2. March 19, 2012 at 10:27 pm

    Very interesting.

    To be perfectly honest, I’m not very familiar with inferences based on revealed preferences. A central idea in our paradigm (judgment and decision making à la Kahneman and Tversky) is that there is a wide variety of contextual effects which impact our judgments and choices; the presence of these effects means that claims like ‘chose x in the presence of y’ should really take the conditional form ‘chose x in the presence of y | a tremendous array of contextual features.’ Given the inchoate nature of our understanding of these contextual effects, it seems really difficult to say what even counts as the same situation. Does this make sense?

    One vaguely related question– (and this one is a bit more concrete):

    Christopher Hsee has several papers which develop the distinction between choices made under a ‘joint’ vs. ‘separate’ evaluation mode. For example, let’s say I divide an experimental pool of subjects into three groups; the first two groups will be my ‘separate’ evaluation groups (they’ll each report a willingness to pay for one item), and the third group will be my ‘joint’ evaluation group (they’ll report a willingness to pay for both items). Let’s suppose we ask participants to evaluate dictionaries:

    dictionary A has 10,000 entries, and has no defects– it’s like new.
    dictionary B has 20,000 entries, but the cover is torn.

    To restate, group 1 will report a willingness to pay for dictionary A; group 2 will report a willingness to pay for dictionary B, and group 3 will report willingness to pay for both (the first two groups ONLY get information about the dictionary they’re evaluating).

    Hsee observes a preference reversal: when evaluated in isolation (ie, in ‘separate’ mode), it’s difficult to judge how important a given number of entries is for a dictionary, but it’s easy to judge the quality of a cover. As we’d expect, people are willing to pay more for dictionary A than for dictionary B in separate evaluation. However, when people consider both alternatives at once, it’s much easier to appreciate the large number of entries offered by B: individuals in the third group are willing to pay much more for B than for A.

    To me, this sort of behavior (which we see in a variety of decision contexts) means that a language of preference “for an item itself” gets to be a bit tricky: the way we conceive of items at all depends on the context we’re in, which includes the other options available. My biggest beef with revealed preference models (at least as I understand them) is that they assume some underlying ‘contextless’ utility for items, though this may not be what you’re asserting.

  3. BSEconomist
    March 20, 2012 at 5:52 pm

    I couldn’t agree with you more. Preference reversals and cyclical choice behaviors show pretty clearly that something more interesting is going on than you would have expected in the traditional paradigm. Strictly speaking, you can generically account for these sorts of effects by varying the effective budget set (the attention set) or by making the utility state dependent. I assume that’s what you mean by not being ‘contextless’: you can add context through the budget set or through the utility function, and context definitely matters.

    The dictionary example is an interesting case. I have a model which might have predicted such an effect–but that project is only about a week old and it may be my job-market paper, so I don’t want to say too much. It is certainly the case that this sort of preference reversal is difficult to describe using a single context-free utility, though.
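    Just to illustrate the generic state-dependent move from the previous paragraph (the numbers and functional form below are invented, and this has nothing to do with the model I just mentioned): let the weight on the hard-to-evaluate attribute depend on the evaluation mode, and the reversal falls out mechanically.

    ```python
    # Invented illustration of state-dependent utility for Hsee's reversal.
    # "Entries" are hard to evaluate in isolation, so their weight depends
    # on the evaluation mode; a torn cover is easy to judge either way.
    dictionaries = {"A": {"entries": 10_000, "torn": 0},
                    "B": {"entries": 20_000, "torn": 1}}

    def wtp(d, mode):
        w_entries = 0.0005 if mode == "separate" else 0.002  # invented weights
        w_torn = 15.0
        return w_entries * d["entries"] - w_torn * d["torn"]

    for mode in ("separate", "joint"):
        print(mode, {k: round(wtp(v, mode), 2) for k, v in dictionaries.items()})
    # separate: {'A': 5.0, 'B': -5.0}  -> A fetches more in isolation
    # joint:    {'A': 20.0, 'B': 25.0} -> B fetches more side by side
    ```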

  4. March 20, 2012 at 11:28 pm

    Respect. This is precisely what I’m talking about (wrt ‘contextless’). An interesting related question, in my mind, is whether adding this architecture onto the descriptive models chops a leg out from under the welfare analysis (I’ve never really been sold on welfare arguments based on revealed preference).

    Do you have a twitter account? I’d love to broadcast your posts!

  5. BSEconomist
    March 21, 2012 at 4:41 pm

    I wouldn’t really argue the point; I would have to think about it. I *think* that all context does to equivalent and compensating variation (EV and CV) is change their interpretations. Certainly arguments based on EV and CV are more robust than arguments based on consumer and producer surplus (CS and PS). And it is EV and CV which are better thought of as based on revealed preference, because you are compensating for the changing budget set. In fairness, it is well known that enlarging the budget set weakly increases utility, even as self-reported happiness declines. I’d have to think about it.
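    For reference, the standard definitions (straight out of Mas-Colell et al., nothing new here): with expenditure function e(p, u), wealth w, and a price change from p^0 to p^1 that moves the consumer’s utility from u^0 to u^1,

    ```latex
    % Standard money-metric welfare measures (Mas-Colell et al., ch. 3).
    % e(p,u) is the expenditure function; e(p^0,u^0) = e(p^1,u^1) = w.
    \mathrm{EV}(p^0,p^1,w) = e(p^0,u^1) - e(p^0,u^0) = e(p^0,u^1) - w,
    \qquad
    \mathrm{CV}(p^0,p^1,w) = e(p^1,u^1) - e(p^1,u^0) = w - e(p^1,u^0).
    ```

    On the reading above, context would enter through whatever U(.) and attention sets stand behind e(p, u), so the formulas survive even as what they measure shifts.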

    I don’t have a twitter account, though. To be honest I wouldn’t really want you to broadcast my posts anyway–I don’t think I’m quite ready for an audience, yet. At least, not a big one. I appreciate the offer, but for now this is strictly practice.
