I am still reading Experimental Political Science and the Study of Causality by Morton and Williams, and in Chapter 5 the authors discuss how randomization in experimental design works like an “ideal” instrumental variable (IV) in terms of controlling for confounding variables.
Essentially there are three conditions for an ideal IV: (1) statistical independence, where the manipulation (M), or IV, is independent of the unobservable factors affecting the outcome we are trying to explain; (2) M, or the IV, is a perfect substitute for who receives treatment (T); and (3) there is no missing data, or, in the experimental case, we observe the choices of all of the subjects with zero drop-off.
Under these conditions the IV does not correlate with the unobservables that confound T and is also a perfect substitute for T, giving us the true relationship. As we know, with observational data good (let alone ideal) IVs are difficult to find. With experiments, however, randomization of the treatment and random selection from the subject pool can be accomplished (at least in the lab; field experiments are a different story).
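To make this concrete, here is a small simulation sketch of my own (not from Morton and Williams): a treatment confounded by an unobservable, and a binary instrument that satisfies the ideal-IV conditions by construction. The variable names, the effect size of 2.0, and the data-generating process are all my own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

u = rng.normal(size=n)               # unobserved confounder
z = rng.integers(0, 2, size=n)       # instrument: randomized, so independent of u
# Treatment take-up depends on both the instrument and the confounder.
t = (z + u + rng.normal(size=n) > 0.5).astype(float)
y = 2.0 * t + u + rng.normal(size=n)  # true causal effect of t on y is 2.0

# Naive difference in means: biased upward, because u drives both t and y.
naive = y[t == 1].mean() - y[t == 0].mean()

# Wald (IV) estimator: reduced-form effect of z on y divided by the
# first-stage effect of z on t. Valid because z is independent of u.
wald = (y[z == 1].mean() - y[z == 0].mean()) / (
    t[z == 1].mean() - t[z == 0].mean()
)

print(f"naive: {naive:.2f}, IV: {wald:.2f}")
```

With a sample this size, the naive estimate lands well above 2.0 while the Wald estimate sits close to the true effect, which is the whole point of conditions (1) and (2) above.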
Randomization can act like an ideal IV when: (1) subjects are assigned simultaneously; (2) the manipulation is independent of the random assignments of manipulations of other treatment variables; (3) there is perfect compliance; and (4) we can observe all the choices of the subjects.
Interestingly, these conditions only hold when subjects are
“recruited at the same time for all possible manipulations, random assignment of the manipulations occurs simultaneously and independently of assignments to other manipulations of other treatment variables, when there are no cross-effects between subjects, and when all subjects comply as instructed (none exit the experiment before it has ended and all follow all directions during the experiment)…” (Morton and Williams, 2010, p. 144).
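As a toy check (again my own sketch, with assumed variable names and effect size, not the authors' example): when assignment is randomized and compliance is perfect, assignment and treatment coincide exactly, so the raw difference in means recovers the true effect even though the unobservable still drives the outcome.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

u = rng.normal(size=n)            # would-be confounder; now irrelevant to assignment
t = rng.integers(0, 2, size=n)    # randomized treatment; T equals M with perfect compliance
y = 2.0 * t + u + rng.normal(size=n)  # true causal effect is 2.0

# Simple difference in means identifies the effect under randomization.
diff = y[t == 1].mean() - y[t == 0].mean()
print(f"difference in means: {diff:.2f}")
```

Here randomization is doing exactly the work an ideal IV would: because t is independent of u by design, no instrument beyond the assignment itself is needed.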
It occurs to me that the condition most often violated (or hardest to satisfy under budget and space constraints) is probably the first one, that all subjects are recruited at the same time for all possible manipulations. One way around this problem is to recruit subjects from the same candidate pool when conducting treatments at different times. This is a strong case for using undergraduate student subjects, because they all come from the same pool and are likely to be homogeneous with respect to possible confounding variables over time. Of course, using student subjects as voters seems reasonable, but using them as, say, elite decision makers (i.e., presidents or heads of state) may bias the results in other ways.
Anyway, while reading about randomization as an ideal IV, I recalled a recent paper by Arena and Joyce that examines causal inference under conditions where strategic behavior conditioned on partially unobservable variables can confound the analysis. They find that the instrumental variables approach does in fact come closest to recovering the correct relationship. Nevertheless, “ideal” IVs are extremely difficult to find, especially if we go by the strict conditions set out above.
On another topic, I know that my posts seem rather mechanical right now; I hope to get better as I go along. At the moment I am trying to sort out how to use experiments properly. What I am finding is that they are not a “magic bullet” for the problems with observational data; in fact, a poorly designed experiment can give terribly biased results. Nevertheless, I am convinced that well-designed experiments (when possible) are a powerful alternative to statistical models fit to observational data. Have a great weekend!