Causal Inference and Experimental Design
One of the primary advantages of experiments in the social sciences is their ability to help us isolate causes. According to the Rubin Causal Model (RCM), also known as the Neyman-Rubin Causal Model (named after the researchers who developed and formalized its logic), we are essentially looking for the difference between two states of the world, one where the treatment Ti = 1 and one where Ti = 0, or δi = Yi1 – Yi0, where δi is the causal effect, Yi1 is the outcome when unit i experiences the treatment, and Yi0 is the outcome when it does not (Morton and Williams, 2010). Of course, there is a problem here: in observed data (experimental or otherwise) an individual can only be in one of the two states of the world at any given time (although I will discuss attempts to remedy this shortly). What we actually observe is Yi = TiYi1 + (1 – Ti)Yi0 (Morton and Williams, 2010). Accordingly, the RCM assumes a world of hypothetical or theoretical counterfactuals, which we never actually observe in the Data Generating Process (DGP).
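The notation above can be made concrete with a small simulation, a sketch in Python with made-up numbers (the constant effect of 2 and the sample size are my assumptions for illustration). Because we generate both potential outcomes ourselves, we can compute the true δi for every unit, which is exactly what the real DGP never lets us do:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical potential outcomes for each unit i: Yi0 (untreated) and Yi1 (treated).
y0 = rng.normal(loc=0.0, scale=1.0, size=n)
y1 = y0 + 2.0  # assume a constant unit-level causal effect delta_i = 2

# Random treatment assignment Ti in {0, 1}.
t = rng.integers(0, 2, size=n)

# The fundamental problem: we only ever observe Yi = Ti*Yi1 + (1 - Ti)*Yi0.
y_obs = t * y1 + (1 - t) * y0

# The true ATE is computable only because we simulated both states of the world.
true_ate = np.mean(y1 - y0)
est_ate = y_obs[t == 1].mean() - y_obs[t == 0].mean()
print(true_ate, est_ate)
```

With random assignment, the difference in observed group means lands close to the true effect even though no single unit's δi is ever observed.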
With observational data generated by nature, researchers do not have the benefit of random assignment, and the problem becomes more severe because of potentially confounding unobservables. These unobservables exist in the experimenter’s world as well; however, they are tempered by random assignment of treatments and manipulations – systematic biases will be distributed randomly across subjects.
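To make the confounding point concrete, here is a toy simulation of my own construction (the selection rule and effect size of 1 are invented for illustration): an unobservable u drives both treatment selection and the outcome, so the naive observational comparison is inflated, while a coin-flip assignment recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

u = rng.normal(size=n)            # unobservable that raises the outcome
y0 = u + rng.normal(size=n)       # untreated potential outcome
y1 = y0 + 1.0                     # true causal effect of 1

# Observational world: units with high u select into treatment.
t_obs = (u + rng.normal(size=n) > 0).astype(int)
y_seen = t_obs * y1 + (1 - t_obs) * y0
naive = y_seen[t_obs == 1].mean() - y_seen[t_obs == 0].mean()

# Experimental world: treatment assigned by coin flip, independent of u.
t_rand = rng.integers(0, 2, size=n)
y_exp = t_rand * y1 + (1 - t_rand) * y0
randomized = y_exp[t_rand == 1].mean() - y_exp[t_rand == 0].mean()

# naive is inflated well above 1; randomized lands near the true effect
print(naive, randomized)
```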
“Within-Subject” Design as a Solution?
Nevertheless, there is a debate among experimental researchers as to how to “solve” the problem that one subject cannot simultaneously experience both states of the world. In a “between-subjects” design, one group is given the treatment and another is not. Researchers use the group that did not experience the treatment as the “baseline” and compare the treated group to it. In this case we must still think in the hypothetical world of counterfactuals. However, there are alternatives called “within-subjects” designs, in which subjects make choices in both states of the world (and thus can, in some cases, serve as their own baseline).
Morton and Williams (2010) describe several types of within-subject designs. One is the Multiple Choice Procedure, in which subjects make choices in multiple states of the world simultaneously. One procedure for doing this is the Strategy Method. Under this method, subjects take part in an experiment based on a formally derived theory (specifically game theory) and choose the strategy they will play in every possible instance of strategic interaction (i.e., conditioned on hypothetical choices made by the other players in the game). The experimenter then implements the strategies each player has chosen. In contrast to the Strategy Method is the Decision Method, where subjects evaluate situations in the game as they actually occur (still conditioned on other players’ actual actions at prior points in the game). One final approach (that I know of) is the Crossover Procedure, where subjects make choices in different states of the world sequentially.
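As a rough sketch of the contrast between the Strategy Method and the Decision Method, consider a toy ultimatum-style game (the offers, the acceptance cutoff, and all names here are my own invention for illustration, not from Morton and Williams):

```python
import random

random.seed(1)
offers = [1, 2, 3, 4, 5]   # possible proposer offers out of a 10-unit pie

# Strategy Method: the responder commits up front to a COMPLETE strategy --
# an accept/reject decision for every offer the proposer could make.
full_strategy = {offer: offer >= 3 for offer in offers}

# Decision Method: the responder reacts only to the offer that actually occurs.
realized_offer = random.choice(offers)
decision = realized_offer >= 3   # the same acceptance rule, applied once

# The experimenter implements the committed strategy at the realized offer;
# with identical preferences the two methods coincide on the realized path.
assert full_strategy[realized_offer] == decision
```

The worry discussed below is precisely that eliciting the full dictionary of contingent choices may itself change subjects’ behavior relative to the single realized decision.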
I am currently designing an experiment to test predictions from a game-theoretic model. I am interested in how the electorate (or individual voters, in the experimental case) will vote when given two distinct signals: one unbiased but potentially inaccurate (which I call the media), and one politically biased but informed of the true state of the world (which I call the opposition). In the experiment, I am going to assume that the “opposition” plays its optimal strategy given the different states of the world; what I really want to know is how the “electorate” votes. I am not going into detail about the model because it is not my focus here. The point is whether I should use one of the “within-subject” designs.
A major problem I see with these methods is that a “within-subject” design may itself act as a treatment: asking subjects to give us a complete strategy set over all possible alternatives may change how they would react relative to an experiment designed with the Decision Method. There is evidence that respondents behave similarly under both methods (Stanca, 2009). Nevertheless, it is not unreasonable to think that in some instances (especially the Crossover Procedure) subjects will learn something about the experiment that changes the way they behave in subsequent states of the world.
Another issue is that in many cases experimental samples are not large enough to invoke the law of large numbers. Thus, even with random assignment, our results may not actually measure the Average Treatment Effect (ATE), or E(δi). By using “within-subject” designs we can increase the number of observations that experience both Ti and ¬Ti. Of course, these designs only work for certain experiments.
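A quick simulation sketch shows why the within-subject approach can help with small samples (the numbers here – 30 subjects, large subject-level heterogeneity, a constant effect of 1 – are assumptions of mine): each subject contributes an observation in both states of the world, so baseline differences between subjects cancel out of the estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30          # a small sample, as in many lab experiments
reps = 2_000
delta = 1.0     # true treatment effect, constant across subjects

between, within = [], []
for _ in range(reps):
    # Subject-level baselines vary a lot; the treatment effect does not.
    base = rng.normal(0, 3, size=n)
    noise = rng.normal(0, 1, size=(n, 2))
    y0 = base + noise[:, 0]
    y1 = base + delta + noise[:, 1]

    # Between-subjects: half the subjects see only Ti = 1, half only Ti = 0.
    t = rng.permutation(np.array([1] * (n // 2) + [0] * (n - n // 2)))
    between.append(y1[t == 1].mean() - y0[t == 0].mean())

    # Within-subjects: every subject contributes both states of the world.
    within.append(np.mean(y1 - y0))

# Both estimators are unbiased, but the within-subject one varies far less.
print(np.std(between), np.std(within))
```

The caveat from the previous paragraph still applies: this variance gain is only worth having if experiencing both states does not itself change behavior.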
The major debate here is whether within-subject designs bias the DGP that the experimenter is trying to control. If we are using experiments to isolate “cause,” then it is very important to be sure that our designs are adequate. For example, behavioral economists have claimed to evaluate the “rationality” assumption (or really what we would call Homo Economicus) in experiments using the dictator game and the ultimatum game. They claim to have found evidence of fairness or reciprocity in these experiments; however, what they may actually be finding are choices conditioned on experimenter or observer effects – the idea that observation by researchers causes respondents to act in a particular way. This is why the design of an experiment is of the utmost importance; otherwise we may come to the wrong conclusions.
I would appreciate any discourse on between-subject vs. within-subject designs and whether they can bias the results of an experiment.
Nicholas P. Nicoletti
Morton, Rebecca B., and Kenneth C. Williams. (2010). Experimental Political Science and the Study of Causality: From Nature to Lab. New York, NY: Cambridge University Press.