Hello Again Everyone,
I have just received my copy of A Model Discipline: Political Science and the Logic of Representations by Clarke and Primo. I have not yet begun reading it; however, the emphasis political scientists place on testing (formal) models, and the way that practice may distort how we view models, has been an interesting recent discussion in political science. I am sure I will have more to say about the topic as I read through the book. Nevertheless, the topic of theories and the empirical evaluation of theories has inspired this post.
I have recently begun putting the finishing touches on the appendix for the first chapter of my dissertation, which I will be presenting at this year’s Midwest Political Science Association (MPSA) conference. As I work through my comparative statics (I hope to present some initial findings here soon), I am also beginning to design the experiment I will use to test the predictions from my formal model. Going through the process of developing a formal model and thinking about how to test it, I could not help but think about the reasons why a “formal model” is preferable to a non-formal model.
I have heard many scholars argue that theory isn’t as important as the subsequent test. Some have even argued that theory is everywhere and easy to develop (yes, I have heard people say this). They argue that the best way to advance the discipline is with empirical investigation, which will reveal whether the evidence corroborates our theory or whether our theory has been falsified and should be discarded. I am sure you have heard someone at some time say, “let the data speak”. If you have read previous posts of mine, you probably know that this way of thinking can have severe consequences, especially when our observable data do not take strategic behavior into account, a problem that can be thought of as a form of omitted variable bias.
In this post I want to talk about what Morton and Williams have called the Theoretical Consistency Assumption (TCA) and how it relates to the way we test our theoretical predictions when causal inference is our intention. I have already spoken about the Neyman-Rubin Causal Model (RCM) here, but essentially RCM researchers, when testing non-formal models (and also formal models under most conditions), must assume TCA: that the “assumptions underlying the causal predictions evaluated are consistent with the assumptions that underlie the methods used by the researcher to infer causal relationships” (Morton and Williams, p. 198).
In general, this means that the assumptions of the empirical model (statistical or experimental) used to test the predictions of one’s theoretical model must be consistent with the assumptions that underlie those predictions. This also requires that the assumptions of our theory be as explicit as possible. For example, if one uses matching to test the predictions of one’s theory, one is assuming ignorability of treatment assignment: the method of collecting data, the nature of the missing data, and the nature of unobservables are not systematically dependent on one another; in other words, conditional on the observed data, the observations collected are independent of missing data and unobservables. The important take-away is that for TCA to hold, the assumptions of the empirical test (functional form, Gauss–Markov, etc.) must be consistent with the theoretical model. How often do we believe that the assumptions in our empirical investigations are consistent with our theoretical predictions? Naturally, one can see why formal models and experimental analysis have an advantage in this regard.
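To make the ignorability point concrete, here is a minimal simulation sketch of my own (not from Morton and Williams): treatment assignment depends only on an observed covariate, so ignorability holds by construction, and nearest-neighbour matching on that covariate recovers the treatment effect where a naive difference in means does not. The model, numbers, and function names are all hypothetical illustrations.

```python
# Toy illustration of matching under ignorability: assignment to treatment
# depends only on the OBSERVED covariate x, so conditioning on x (here, by
# matching on it) recovers the true treatment effect.
import random

random.seed(42)
TRUE_EFFECT = 2.0
n = 2000

units = []
for _ in range(n):
    x = random.uniform(0, 1)
    # Assignment depends only on observed x -> ignorability holds
    t = 1 if random.random() < 0.2 + 0.6 * x else 0
    y = TRUE_EFFECT * t + 3.0 * x + random.gauss(0, 0.5)
    units.append((x, t, y))

treated = [(x, y) for x, t, y in units if t == 1]
controls = [(x, y) for x, t, y in units if t == 0]

def att_by_matching(treated, controls):
    """Match each treated unit to the control nearest on x; average the gaps."""
    diffs = []
    for x_t, y_t in treated:
        x_c, y_c = min(controls, key=lambda c: abs(c[0] - x_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

naive = (sum(y for _, y in treated) / len(treated)
         - sum(y for _, y in controls) / len(controls))
matched = att_by_matching(treated, controls)
print(f"naive difference in means: {naive:.2f}")   # biased upward by x
print(f"matched estimate:          {matched:.2f}")  # close to TRUE_EFFECT
```

If assignment instead depended on something unobserved, the same matching estimator would silently return the wrong answer, which is exactly the consistency-of-assumptions worry raised above.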
Of course I am about to advocate what Clarke and Primo’s new book urges us to reconsider. That aside, researchers using formal models can begin with a precise set of assumptions about the data generating process (DGP) (in symbolic mathematical terms) which are solved to derive predictions about the DGP (Morton and Williams, 2010; Morton 1999). Formal models allow for precise predictions about the variables in the model when it is in equilibrium. We can then draw relationship predictions about how two variables in the model are related and causal relationship predictions about how the change in one variable “causes” the changes in the other variables (via comparative statics).
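As a hedged sketch of what a comparative-static prediction looks like in practice, consider a standard toy model that is not from this post: a two-player Tullock contest with payoff u_i = V·x_i/(x_i + x_j) − x_i, whose symmetric equilibrium effort is x* = V/4. The causal prediction from the comparative static dx*/dV = 1/4 is that raising the prize V raises equilibrium effort, and we can verify it numerically:

```python
# Toy Tullock contest: find the symmetric equilibrium effort by iterating
# best responses, then check the comparative-static prediction dx*/dV > 0.

def best_response(x_j, V, grid=4000):
    """Grid-search the payoff-maximising effort against opponent effort x_j."""
    best_x, best_u = 0.0, 0.0
    for k in range(1, grid + 1):
        x = V * k / grid
        u = V * x / (x + x_j) - x
        if u > best_u:
            best_x, best_u = x, u
    return best_x

def equilibrium_effort(V, iters=25):
    """Iterate best responses to the symmetric fixed point (x* = V/4)."""
    x = V / 2
    for _ in range(iters):
        x = best_response(x, V)
    return x

for V in (4.0, 8.0, 16.0):
    print(f"V = {V:5.1f}  ->  equilibrium effort ~ {equilibrium_effort(V):.3f}")
```

The point is that the prediction is derived from explicit assumptions (the contest success function, the cost of effort), so an empirical test knows exactly which assumptions it must hold fixed.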
When using observational data, some scholars, such as Signorino, have argued that researchers should derive statistical estimators directly from their formal models. Nevertheless, if one has the ability to conduct an experiment, one can come as close to satisfying TCA as possible: we can design an experiment whose empirical assumptions are as equivalent as possible to those of the formal model underlying the predictions. Moreover, one can relax those assumptions in a controlled manner in order to investigate the consequences of doing so.
This gives us the ability to conduct what Morton and Williams (2010) call “theory tests” and “stress tests”. Theory tests hold all the assumptions in the empirical study as close as possible to those of our formal model. A stress test relaxes some of the assumptions and allows them to vary in order to see what happens under those conditions (much like a robustness check). Now, I have read much of what Clarke and Primo have already written on models, and I am anxious to read their new book. I, too, believe that models have more than one use (in fact, I have used models for “Foundational”, “Generative”, and other purposes). Nevertheless, I am compelled by the use of models for drawing predictions which are then tested (using experiments or otherwise) with a strong focus on causal inference.
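The theory-test/stress-test distinction can be sketched in a few lines. This is my own hypothetical illustration, not Morton and Williams’ design: take a toy game whose equilibrium effort is V/4, generate data exactly under the model’s assumptions (the theory test), then relax one assumption by letting some subjects choose randomly rather than best-respond (the stress test), and ask whether the comparative-static prediction, effort rising in V, survives.

```python
# Toy theory test vs. stress test for a game whose equilibrium effort is V/4.
# "rational_share" is the relaxed assumption: the fraction of subjects who
# actually play the equilibrium; the rest pick an effort at random on [0, V].
import random

random.seed(7)

def simulated_effort(V, rational_share=1.0, n_subjects=500):
    """Mean observed effort for a given share of equilibrium players."""
    efforts = []
    for _ in range(n_subjects):
        if random.random() < rational_share:
            efforts.append(V / 4)                 # equilibrium play
        else:
            efforts.append(random.uniform(0, V))  # assumption relaxed
    return sum(efforts) / len(efforts)

for label, q in (("theory test (q=1.0)", 1.0), ("stress test (q=0.6)", 0.6)):
    low, high = simulated_effort(4.0, q), simulated_effort(16.0, q)
    print(f"{label}: effort at V=4 -> {low:.2f}, at V=16 -> {high:.2f}")
```

Here the qualitative prediction survives the relaxation, but the point levels move, which is exactly the kind of thing a stress test is designed to reveal.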
Most importantly, I think that the Theoretical Consistency Assumption is important and often overlooked, especially by those who “let the data speak”. Getting the wrong answer is rather easy with observational data when consistency between theoretical and empirical assumptions is neglected. Of course, adhering to TCA is not always possible and can be cumbersome under many conditions, and I am not completely dogmatic about it (I will not dismiss every claim that is not first derived from a formal model and then tested with an empirical model whose assumptions are consistent with the theoretical predictions), but I do think it is an important consideration, at least for studies with causal intent (wait… isn’t that a lot of studies… or at least I have read a lot of studies that claim causation…?).
Have a great weekend! GO GIANTS!
Nicholas P. Nicoletti
Morton, Rebecca B., and Kenneth C. Williams. (2010). Experimental Political Science and the Study of Causality: From Nature to Lab. New York, NY: Cambridge University Press.