Sampling and Census 2000

November 16, 1998

Having considered the proposed use of sampling in Census 2000, especially the special sample survey to be conducted after the census proper, one group of statisticians concludes that current survey techniques cannot provide the degree of accuracy that would be needed to improve the census data.

Morris L. Eaton, David A. Freedman, Stephen P. Klein, Richard A. Olshen, Kenneth W. Wachter, and Donald Ylvisaker

Our object is to discuss the Census Bureau's plans for the use of sampling in Census 2000. Sampling comes into the design in two main ways:

1. Non-response follow-up in the census was done in the past on a 100% basis. In 2000, however, the Bureau plans to follow up only a sample of non-respondents. As indicated below, sampling for non-response may not be especially helpful in the census context.

2. The more salient, and more problematic, use of sampling in Census 2000 is Integrated Coverage Measurement, or ICM. Despite the name, the ICM program is not in our view an integral part of the census operation. Instead, it is an effort to adjust the census counts with the hope of reducing coverage errors. The Bureau plans to release only one set of numbers, based on the census and sampling for non-response follow-up, as corrected by ICM. That concept is called the "one-number census."

Data for ICM will be collected in a special sample survey done after the census. ICM is in many ways like programs that were offered for adjusting the census in 1980 and in 1990. Based on experience with these programs, we believe there is a substantial risk that ICM will degrade rather than improve the quality of census data. ICM is a complex, multistep process, intended to fix relatively small errors in the census proper; the many somewhat arbitrary technical decisions involved could turn out to have large effects.

The ICM process depends on a number of statistical assumptions that have been tested and found wanting. Moreover, there are many opportunities for error in ICM fieldwork, and there may be substantial difficulty in detecting or correcting such errors. As a result, ICM may add more error than it removes. The 1990 analog of ICM was the Post Enumeration Survey, or PES. On July 15, 1991, the PES estimated a census undercount of 5 million people. However, 50% to 80% of this estimated undercount resulted from errors in the PES itself rather than from errors in the census; that is, most of the estimated undercount reflected errors in the estimation process. Details can be found in the remainder of this article, and a list of references is provided at the end.

The major weak points in current census adjustment techniques can be summarized as follows:

* Many somewhat arbitrary technical decisions will have to be made. Some of them may have a substantial influence on the results.

* Many of the statistical assumptions are rather shaky.

* There is ample opportunity for error in the fieldwork and in the data.

* The errors are hard to detect.

One example serves to illustrate the last point. Some time after the Bureau recommended adjusting the 1990 census, it discovered a mistake in its analysis of the PES data; this error added about a million people to the estimated undercount and had the effect (along with some relatively minor errors) of shifting a congressional seat from Pennsylvania to Arizona.

The ICM program seems to be just as vulnerable to error as the PES. Of course, the situation for 2000 will remain somewhat unclear until the data have been collected. If experience is any guide, however, it will be nearly impossible to demonstrate, to any reasonable degree of certainty, that ICM improves on the census for purposes of apportionment or redistricting. The basic issue, of course, is that the errors in the census are relatively small: In 1980 and 1990, net undercounts were in the range of 1% to 2%. To improve on the census, ICM would need to achieve error rates well below 1%. That sort of accuracy does not seem to be within the realm of current survey techniques.

In the balance of this article, we discuss ICM, sampling for non-response, and the interaction of these two programs, mentioning recent legal cases and briefly stating our conclusions. We provide some sources for further reading on census adjustment, including publications summarized in the present article. The Bureau is still refining its plans for 2000; descriptions of the design are provisional for that reason, among others.

Integrated Coverage Measurement
The ICM program is based on a cluster sample of 60,000 blocks containing 750,000 housing units and 1.7 million people. A listing is made of the housing units in the sample blocks, and persons in these units are interviewed after the census is complete. ICM records are then matched against the census.

In most cases, a match validates both the census record and the ICM record. An ICM record that does not match to the census may correspond to a "gross omission," that is, a person who should have been counted in the census but was missed. Conversely, a census record that does not match to the ICM may correspond to an "erroneous enumeration," that is, a person who was counted in the census in error. An erroneous enumeration may be a person who was counted twice in the census, perhaps because he sent in two forms. Another person may be counted correctly but assigned to the wrong unit of geography: She would be a gross omission in one place and an erroneous enumeration in the other. Of course, some persons are missed both by the census and by the ICM; their number is estimated by statistical modeling. However, these models are systematically in error, due to "correlation bias." The impact of correlation bias seems to vary from place to place in the country.
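The modeling of people missed by both systems can be illustrated with the classic dual-system (capture-recapture) estimator. This is a minimal sketch with hypothetical numbers, not the Bureau's actual model; the independence assumption built into the formula is exactly what "correlation bias" violates.

```python
# Dual-system (Petersen) estimate for one post stratum.
# All counts are hypothetical. The formula assumes that being counted by the
# census and being counted by the ICM are independent events; failure of that
# assumption is the "correlation bias" mentioned above.

census_correct = 9_500  # census enumerations confirmed as correct
icm_count = 9_400       # persons found by the ICM survey
matches = 9_000         # persons found by both systems

# Fraction of ICM respondents that the census also caught,
# taken as an estimate of the census capture rate:
capture_rate = matches / icm_count

# Estimated true count, including people missed by both systems:
estimated_true = census_correct / capture_rate
print(round(estimated_true))  # 9922
```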

Fieldwork may be done to resolve the "status" of unmatched cases, that is, to decide whether the error should be charged against the census or against ICM. Even after fieldwork is complete, some cases remain unresolved. Such cases are handled by statistical models that "impute" (or estimate) the missing data. The number of unresolved cases may be relatively small, but it is likely to be large enough to have an appreciable influence on the final results. Statistical models used to make these sorts of imputations have many somewhat arbitrary elements and should therefore be scrutinized with great care; experience from the past is not encouraging.

Movers (people who change address between the time of the census and the time of the ICM interviews) represent a major complication for ICM. Unless persons are correctly identified as movers or non-movers, they cannot be matched; the identification depends on getting accurate information from respondents on where they were living at census time. The number of movers is relatively small, but they are a large factor in the adjustment equation. More generally, matching records between ICM and the census becomes problematic if respondents give inaccurate information to the census, to ICM, or to both. Thus, even cases that are resolved through ICM fieldwork may be resolved incorrectly.

We turn now to estimation. The Bureau divides the population into "post strata" defined by demographic and geographic characteristics. One post stratum might be Hispanic male renters age 30-49 in California. Persons in the ICM sample are assigned to post strata on the basis of the fieldwork. Moreover, each person in the ICM sample is assigned a "sample weight." If the Bureau sampled 1 person in 100, each sample person would stand for 100 in the population and would have a sample weight of 100. The actual sampling plan is more complex, with the result that different people have different weights.

To estimate the total number of gross omissions in a post stratum, one simply adds the weights of all ICM respondents identified as being (1) gross omissions and (2) in the relevant post stratum. To a first approximation, the estimated undercount in a post stratum is the difference between the estimated numbers of gross omissions and erroneous enumerations. Next comes an "adjustment factor" for each post stratum: the ratio of the estimated true count to the census count, so that multiplying the census count by the factor gives the estimated true count. Typically, adjustment factors exceed 1; most post strata are estimated to have undercounts. However, many adjustment factors are less than 1; these post strata are estimated to have been overcounted by the census.
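To make the arithmetic concrete, here is a minimal sketch of the weighted estimates and the resulting adjustment factor for one post stratum; all counts and weights are hypothetical.

```python
# Estimating the adjustment factor for one post stratum (hypothetical data).

census_count = 100_000  # census count for the post stratum

# Sample weights of ICM respondents identified as gross omissions, and of
# census records identified as erroneous enumerations, in this post stratum:
gross_omission_weights = [250.0] * 20  # 20 sample cases, each standing for 250 people
erroneous_enum_weights = [250.0] * 8

est_omissions = sum(gross_omission_weights)  # 5,000 estimated gross omissions
est_erroneous = sum(erroneous_enum_weights)  # 2,000 estimated erroneous enumerations

# To a first approximation, the estimated true count and the adjustment factor:
est_true = census_count + est_omissions - est_erroneous
factor = est_true / census_count
print(factor)  # 1.03
```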

We now consider the process for adjusting small areas, such as blocks, cities, or states. Take any particular block by way of example. Each post stratum has some number of persons counted by the census as living in the block. (The number may be 0.) The census number is multiplied by the adjustment factor for that post stratum; the process is repeated for all post strata, and the adjusted count for the block is obtained by adding the products. Finally, the count for any larger area is obtained by adding the counts for the blocks within the area.

The adjustment process assumes that undercount rates are constant within each post stratum across all geographical units. This "homogeneity assumption" is quite implausible and was strongly contradicted by data from the 1990 census. Ordinarily, samples are used to extrapolate upward, from the part to the whole. In census adjustment, samples are used to extrapolate sideways, from 25,000 sample blocks to each and every one of 5 million inhabited blocks in the United States. That is where the homogeneity assumption comes into play.
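The block-level computation can be sketched in a few lines; the post strata and adjustment factors below are hypothetical. Note that the same national factor is applied in every block, which is the homogeneity assumption at work.

```python
# Adjusting one block's count (hypothetical post strata and factors).

census_by_stratum = {  # persons the census counted in this block, by post stratum
    "stratum_A": 40,
    "stratum_B": 25,
    "stratum_C": 0,    # a post stratum may be empty in a given block
}
factor = {             # adjustment factors estimated from the ICM sample
    "stratum_A": 1.02,
    "stratum_B": 0.98,
    "stratum_C": 1.05,
}

# Multiply each census count by its post-stratum factor and sum the products:
adjusted_block = sum(n * factor[s] for s, n in census_by_stratum.items())
print(round(adjusted_block, 1))  # 65.3
```

Counts for larger areas are then obtained by adding the adjusted counts for the blocks they contain.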

Sampling for Non-response
Non-response follow-up in the census has been done to date on a 100% basis. In the bulk of the country, forms are mailed out to all identified housing units. If there is no response from a housing unit, interviewers come knocking on the door. In 2000, however, the Bureau plans to follow up only a sample of non-respondents.

Non-respondents will be sampled within each tract. If, for instance, a tract has 2000 housing units and 1200 return their census forms by mail, there are 800 non-responding units. The Bureau would then sample 600 of these 800 units and send interviewers only to the sample units. Statistical models based on the sample responses would be used to impute additional housing units into the census. Block-level data, in particular, will be somewhat problematic in Census 2000: counts for individual blocks will then depend in part on imputation rather than direct enumeration.
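Using the rates in the example above, the weighting works roughly as follows; the figure of 2.5 persons per unit is a hypothetical assumption for illustration, not a Bureau figure.

```python
# Sampling for non-response in one tract, with the rates from the text:
# 800 non-responding housing units, of which 600 are visited.

import random

non_responders = list(range(800))             # stand-ins for addresses
visited = random.sample(non_responders, 600)  # the units interviewers visit

weight = 800 / 600  # each visited unit stands for 4/3 units

# Suppose the visits find 2.5 persons per unit on average (hypothetical):
persons_found = 2.5 * len(visited)   # 1,500 persons actually interviewed
est_persons = persons_found * weight # weighted up to cover all 800 units
print(round(est_persons))  # 2000
```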

There is some body of opinion that sampling improves accuracy, since interviewers can be better trained and supervised. Given the proposed sampling rates, this advantage cannot be substantial. Sampling seems inherently more complex than a census; experience shows that sample surveys have worse coverage than the census-and worse differential coverage. Of course, sampling for non-response in Census 2000 may not be directly comparable to past surveys.

How Does Sampling for Non-response Interact with ICM?
The essential task of the ICM program is to match records against the census. That conflicts with sampling for non-response because ICM respondents may be in households that did not return census forms and were not selected for follow-up. To solve this problem, the Bureau proposes to do 100% follow-up in the ICM sample blocks. That involves at least two kinds of sampling:

* sampling for non-response follow-up, and

* the ICM sample of blocks;

and three kinds of fieldwork:

* non-response follow-up on a sample basis outside the ICM blocks,

* 100% non-response follow-up within the ICM blocks, and

* the ICM interviews themselves.
Other assumptions must be made here: (1) census coverage will be the same whether non-response follow-up is done on a sample basis or a 100% basis, and (2) residents of the ICM sample blocks do not change their behavior as a result of being interviewed more than once. Failure of these assumptions can be termed "contamination error." The magnitude of contamination error is unknown.

The Lawsuits
In 1980 and 1990, New York and other plaintiffs sued to compel adjustment of the census. They lost. In 1998, the Speaker of the House of Representatives sued to preclude use of sampling in the census, at least for purposes of apportionment (allocating congressional seats to states). He won. The court ruled that the "plain meaning" of the Census Act bars sampling. A parallel case was filed by the Southeastern Legal Foundation, with similar results. The legal position may shift yet again if the Supreme Court overturns the decision or if Congress passes new laws. The Supreme Court will review the cases on November 30, 1998.

Summary and Conclusion
We see the following crucial difficulties in the ICM: (1) Many somewhat arbitrary technical decisions will be made. Some of them may have a substantial influence on the results. (2) Many of the statistical assumptions are rather shaky. The homogeneity assumption is an example. The assumptions behind the models used to impute missing data can also be mentioned. (3) There is ample opportunity for error, especially when respondents give incomplete or inaccurate data to the census or ICM. (4) The errors are hard to detect. Any effort to demonstrate that ICM improves on the census must reckon with the sources of error discussed here.

Of course, the census has errors of its own. However, comparing the magnitudes of census errors and ICM errors is fraught with difficulty. The ICM is quite a complicated operation, and there is a strong likelihood of significant but undetected error. In July 1991, proposed adjustments were predicated on the idea that the census missed (net) 5 million people out of 250 million. The figure of 5 million is only an estimate, derived from the PES. Later research showed that the bulk of this estimate derived from errors in the PES rather than errors in the census. If ICM, like the PES before it, puts in more error than it takes out, Census 2000 will be at considerable risk.

Further Reading
L. Breiman, T.R. Belin, J.E. Rolph, D. Freedman, K. Wachter, et al., Three papers on census adjustment, Stat. Sci., 9 (1994), 458-537. These papers review arguments for and against adjusting the 1990 census; discuss error rates in the PES; and include comments both from those who supported adjustment and from those who were opposed.

R.E. Fay, J.H. Thompson, and R.E. Thompson, The 1990 post enumeration survey: Statistical lessons, in hindsight, Proceedings of the Bureau of the Census Annual Research Conference, 1993. Written by senior Census Bureau personnel; acknowledges many of the difficulties in adjusting the 1990 census.

D. Freedman and K. Wachter, Planning for the census in the year 2000, Evaluation Rev., 20 (1996), 355-77. Discusses plans for Census 2000; reviews evidence from 1990; summarizes the Supreme Court decision in New York v. Commerce.

D. Kaye, editor, Papers on census adjustment, Jurimetrics, 34 (1993), 59-115. Arguments for and against adjusting the 1990 census.

N. Schenker, editor, Special section on the 1990 undercount, J. Amer. Stat. Assoc., 88 (1993), 1044-1166. Generally supports adjustment.

M.P. Singh, editor, Special section on census undercount measurement methods and issues, Survey Methodology, 18 (1992), 1-154. The focus is on 1980, but there is some discussion of 1990; both sides are represented.

D.L. Steffey and N.M. Bradburn, editors, Counting People in the Information Age, National Academy Press, Washington, DC, 1994. The case for adjusting Census 2000.

A.A. White and K.F. Rust, editors, Preparing for the 2000 Census, National Academy Press, Washington, DC, 1997. Updates Steffey and Bradburn.

This article is based on Technical Report No. 537, Department of Statistics, University of California, Berkeley, by M.L. Eaton, Department of Theoretical Statistics, University of Minnesota, Minneapolis; D.A. Freedman and K. Wachter, Department of Statistics, UC Berkeley; S.P. Klein, RAND Corporation, Santa Monica; R.A. Olshen, Division of Biostatistics, Stanford University; and D. Ylvisaker, Department of Statistics, UCLA.
