Thursday, October 12, 2006


Iraqi Death Survey Part II

In my first post, I mentioned three items that, from my own experience conducting surveys, seemed surprising at best and downright unbelievable at worst regarding the apparent success and efficiency of the survey teams. Those questions did not address the methodology of the survey; they simply cast aspersions on the integrity of the survey teams. Quite simply, I find it particularly hard to believe that you can achieve a 98%+ response rate under conditions as difficult as those in Iraq (I've never gotten that kind of result in Minnesota), or that you can conduct the careful, thorough, sensitive interviews the article implies were conducted in what must average well under half an hour per survey (at 40 per day, it probably adds up to less than 15 minutes of actual survey time per household).
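The arithmetic behind that last claim can be sketched quickly. The 40 households per team-day comes from the study's cluster size; the 8-hour field day and the 50% overhead for travel, locating streets, and introductions are my own assumptions, not figures from the report:

```python
# Back-of-envelope check on interview time per household.
FIELD_DAY_MIN = 8 * 60   # assumed length of a field day, in minutes
HOUSEHOLDS = 40          # households per cluster, per the report
OVERHEAD = 0.5           # assumed share of the day lost to non-interview work

# Ceiling: every minute of the day spent interviewing (clearly impossible).
ceiling = FIELD_DAY_MIN / HOUSEHOLDS
# More realistic: half the day lost to travel and logistics.
realistic = FIELD_DAY_MIN * (1 - OVERHEAD) / HOUSEHOLDS

print(f"absolute ceiling:  {ceiling:.0f} min/household")   # 12
print(f"with 50% overhead: {realistic:.0f} min/household") # 6
```

Even the impossible zero-overhead ceiling is 12 minutes per household; any plausible overhead pushes the actual interview time well below that.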

But those are just points of skepticism. In this post, I will bring up three problems that I find with the sampling methodology. In future posts I will look at some difficulties I find in the methods of extrapolation in the conclusions.

1. The second stage of the sampling is troubling:

“At the second stage of sampling, the Governorate's constituent administrative units were listed by population or estimated population, and location(s) were selected randomly proportionate to population size.”

Why should this be troubling? If a few hundred selections were made per Governorate, it wouldn't be. But given that in all but two governorates three or fewer clusters were selected (in many cases only one), the chance that any smaller town (or administrative unit) anywhere in Iraq would be selected is vanishingly small. It would be interesting to compare the number of small-town or rural household clusters selected with the overall population of rural Iraqis to find out whether this underrepresentation is substantial, as I suspect it is. I don't have access to the raw data, so I can't tell.
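A toy simulation makes the point concrete. The numbers below are invented for illustration (they are not Iraq's actual administrative data): one large city of 900,000 and twenty small towns of 5,000 each, with locations drawn proportional to population. Note this sketch draws with replacement, a simplification of true probability-proportional-to-size sampling:

```python
import random

# Hypothetical governorate: one dominant city plus many small towns.
units = {"provincial capital": 900_000}
units.update({f"small town {i}": 5_000 for i in range(20)})  # 100,000 rural total

def pps_draw(units, n_clusters, trials=100_000, seed=1):
    """Estimate the chance that at least one small town is selected
    when n_clusters locations are drawn proportional to population."""
    rng = random.Random(seed)
    names = list(units)
    weights = list(units.values())
    hits = 0
    for _ in range(trials):
        picks = rng.choices(names, weights=weights, k=n_clusters)
        if any(p.startswith("small town") for p in picks):
            hits += 1
    return hits / trials

print(f"P(any small town picked), 1 cluster:  {pps_draw(units, 1):.2f}")
print(f"P(any small town picked), 3 clusters: {pps_draw(units, 3):.2f}")
```

With one cluster, the twenty small towns collectively have only about a 10% chance of contributing any households at all; even with three clusters it is barely one in four. Across many governorates like this, rural Iraq is almost entirely absent from the sample.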

2. The sampling method used carries substantial inherent bias:

“The third stage consisted of random selection of a main street within the administrative unit from a list of all main streets. A residential street was then randomly selected from a list of residential streets crossing the main street. On the residential street, houses were numbered and a start household was randomly selected. From this start household, the team proceeded to the adjacent residence until 40 households were surveyed.”

This may seem fine, and it would be, in the suburban U.S. In Iraq, however, with hundreds of thousands of internally displaced persons, and the chaos of many others living in temporary housing and the squalor of newly constructed unofficial slums, none of these people had any chance of being surveyed. Neither did anyone living on an unnamed or unrecognized street. And again, it is quite possible that many smaller towns have no officially named main street at all, so they wouldn't appear on the list. What effect would this have? There's no way to know, but it is clearly a built-in bias, systematically selecting against some rather large demographic groups.
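The effect of excluding a group from the sampling frame is easy to illustrate with a toy calculation. All numbers here are hypothetical (mine, not the study's): suppose 10% of the population is unreachable by the street-based frame and suffers a higher mortality rate than the covered population:

```python
# Toy frame-coverage calculation: a frame-based survey can only estimate
# the mortality rate of the population it can reach.
def frame_estimate(rate_covered, rate_excluded, excluded_fraction):
    """Return (what the survey sees, the true population-wide rate),
    both in deaths per 1,000. All inputs are illustrative assumptions."""
    true_rate = (rate_covered * (1 - excluded_fraction)
                 + rate_excluded * excluded_fraction)
    return rate_covered, true_rate

est, true = frame_estimate(rate_covered=10.0,   # assumed rate on named streets
                           rate_excluded=25.0,  # assumed rate in camps/slums
                           excluded_fraction=0.10)
print(f"frame estimate {est} vs true {true} deaths per 1,000")
```

Under these made-up numbers the survey would report 10 deaths per 1,000 when the true figure is 11.5; flip the assumption (a safer excluded group) and the bias reverses. The direction depends entirely on who is missing from the frame, which is exactly the problem.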

3. In addition, it is clear that at the whim of the interview team, these pre-selected sites could be changed:

“Decisions on sampling sites were made by the field manager. The interview team were given the responsibility and authority to change to an alternate location if they perceived the level of insecurity or risk to be unacceptable.”

This is a clear admission of selection bias in the sampling. Given the sectarian tensions in Iraq, even granting the alleged professionalism of the canvassing teams, it is impossible to tell the impact of these biases, but their existence is unquestionable. The implication in the report is clearly that more deadly areas were underrepresented, but were more distant (possibly safe) areas also selected against because of the level of risk required to reach them?

To summarize, then, the sampling methods reported systematically select against three groups (that I can think of): A) rural Iraqis (both in the method of selecting administrative units and in the method of selecting particular streets); B) internally displaced refugees (since many live in camps, not on named streets); and C) urban slum dwellers, who may often also not live in organized households on named streets.

The fact that the previous survey, done via GPS location, mirrors the present one in the 2003-04 results merely indicates that, to the extent the present method may be less representative than the previous one, the biases did not affect the death-toll results for 2004; it says nothing about whether these biases remain unimportant. After all, the nature of the situation in Iraq has changed significantly; that's one of the main conclusions of the report. If the recent upsurge in violence has been predominantly urban rather than rural, these sampling methods might tend to overestimate the results. If it has been predominantly in the urban slums or refugee camps, they might underestimate them. There is simply no way to tell beyond educated guesses, which take us completely out of the realm of statistical science. In any event, these obvious biases are, in my view, important enough to raise serious doubts about the validity of the conclusions.

More troubling, and carrying us way beyond anything that can be "fixed" by more statistical analysis, is the third point; namely, that in spite of the authors' best intentions to randomize the clusters, what they ended up with was, in point of fact, a sample with a high risk of personal selection bias. As understandable as concerns for safety are, statistics has no compassion. If you modify your selection based on personal considerations, you lose your claim to statistical validity.
