Providing Incentives to Survey Respondents

Final Report

Submitted to the Regulatory Information Service Center, General Services Administration, under Contract Number GS0092AEM0914, by the Council of Professional Associations on Federal Statistics, September 22, 1993

Table of Contents

Executive Summary

Summary Report

Introduction

The Problem

Defining Incentives

The Different Nature of Surveys of Institutions

Special Issues for Federal Surveys

When Incentives Might be Effective

A Spectrum of Potential OMB Policies

An Agenda for Further Research

Attachment 1: Symposium Participants

Attachment 2: Questions Addressed in Pre-Symposium Papers

Attachment 3: Pre-Symposium Papers

The Paperwork Reduction Act of 1980 (PRA) assigns responsibility to the Office of Management and Budget (OMB) to review and approve plans sponsored by Federal agencies for the collection of information for administrative, statistical, and regulatory purposes. The Regulatory Information Service Center, an office of the General Services Administration, provides the President and his Executive Office, the Congress, agency managers, and the public with essential information and services that support this mission. To reflect the extension of its review responsibilities into the regulatory sphere, OMB promulgated a formal regulation to implement the PRA. One feature of that rule is a set of guidelines that applies uniform standards to information collections for regulatory, administrative, and statistical purposes. The guidelines require that data collections using statistical methods meet the test of generalization, which in turn has placed a premium on more rigorous methods and high levels of response. The same guidelines generally have prohibited the use of payments to respondents.

Increasingly, concerns about response rates and the quality of information provided have led some Federal agencies to consider or propose the use of incentives to encourage response. While proposals that include cash incentives are rare among statistical agencies, they have become more common in data collections in support of drug enforcement, health policy and regulation, and tax regulation. Views on whether such incentives are consistent with the objectives of the PRA, on when and where they may be appropriate, and on the practical consequences of providing incentives vary widely both within and outside the Federal statistical community.

To assist the Office of Management and Budget in developing applicable principles and decision rules concerning the use of incentives by Federal agencies, the Council of Professional Associations on Federal Statistics planned and organized a symposium to consider:

whether there should be further development of guidelines for providing incentives to respondents;

what the nature, content, and applicability of such guidelines might be (e.g., should certain classes of surveys be included/excluded -- surveys of individuals and households vs. surveys of business firms, descriptive surveys vs. tests/assessments, one-time surveys vs. longitudinal surveys, voluntary surveys vs. mandatory surveys);

what evidence exists concerning negative and positive impacts of incentives on survey response and bias, and what can be done in designing surveys to minimize negative effects while preserving positive effects; and

whether further research is needed on questions such as the impact of providing incentives on overall response rates, differential response rates, willingness of respondents to provide certain types of information, and the accuracy of actual responses.

The symposium took place at the John F. Kennedy School of Government, Harvard University, October 1-3, 1992. Moderators for the symposium were two members of the Kennedy School faculty, Frederick Schauer and Julie Boatright Wilson. Participants included approximately 20 invited representatives of government, business, academic, and research organizations.

This final report includes a summary of symposium findings and recommendations, pre-symposium papers prepared by participants, an overview of research on providing incentives to survey respondents, and a transcript of the symposium proceedings. The Council of Professional Associations on Federal Statistics would like to express its sincere appreciation to the symposium moderators, Frederick Schauer and Julie Boatright Wilson, and to Timothy Atkin of the U.S. Coast Guard and Peter Bounpane of the U.S. Bureau of the Census, who helped facilitate the symposium in various ways. Finally, we wish to thank all of the participants, each of whom contributed significantly to the success of the symposium.

Summary Report

The growing use of objective statistical information to guide decisions that affect the lives of most citizens has imposed high standards for the accuracy and completeness of data. Concerns about response rates and about the quality of the data provided have increasingly led Federal agencies to propose the use of incentives in government-sponsored surveys.

Under the authority of the Paperwork Reduction Act of 1980, the Office of Management and Budget (OMB) is responsible for approving federally sponsored plans for the collection of information. Historically, OMB has not had an explicit, detailed policy on the use of incentives to encourage survey response. Rather, the review guidelines have called upon agencies to demonstrate "substantial need" if incentives are proposed. Generally, OMB has been reluctant to approve the use of incentives in surveys sponsored by government agencies, but there is a divergence of opinion on the matter. In fact, views on whether the provision of incentives in Federal surveys is philosophically correct, assessments of whether the use of incentives is cost-beneficial, and opinions about the practical consequences of providing incentives vary widely, not only within OMB but also across the agencies of the Federal statistical system.

To provide a forum for focused exploration of these views, OMB hosted an invited symposium in the fall of 1992. While OMB did not expect that the symposium would produce definitive answers to all of the questions surrounding the use of incentives, there were two goals. First, OMB expected to obtain information that would help in preparing guidelines to foster greater consistency in reviewing future requests by Federal agencies to use incentives when conducting surveys. Second, OMB hoped to articulate a research agenda, or a list of key issues and questions that need further investigation.

The Symposium on Providing Incentives to Survey Respondents was planned and organized by the Council of Professional Associations on Federal Statistics (COPAFS). In consultation with OMB and other members of the statistical community, COPAFS developed the symposium agenda and a list of key organizations to invite. Participation was structured to provide representation of both the Federal and private sectors. (Attachment 1 provides a list of the participants.)

To maximize the use of time at the symposium, each participant was asked to prepare a paper in advance. In addition, one participant, Richard Kulka, prepared an overview paper on the state of research on the use of incentives. The advance preparation of papers enabled participants to become familiar with the perspectives others would bring to the symposium. To foster general consistency in the papers, a set of questions was posed to each participant. Attachment 2 provides the list of questions, and Attachment 3 provides the pre-symposium papers.

The symposium took place on October 1-3, 1992, at the John F. Kennedy School of Government. A general session was held on the evening of October 1 to explain the goals and process for the symposium. There was a full day of discussion on October 2 and a half day of discussion on October 3. The deliberations focused on several key areas: a description of the problem including the perceived goals of incentives; an elaboration of issues surrounding the use of incentives in Federal surveys; a listing of survey situations in which incentives might be effective; a spectrum of potential OMB policies on incentives; and an agenda for further research. The discussions are summarized in the following sections of this report.

The Problem

The symposium began with a general discussion of the "problem" participants had been asked to address. At the simplest level, the problem could be stated: "Do monetary incentives work to improve response rates to surveys?" Almost immediately, however, participants acknowledged that this definition was much too narrow.

In the first instance, participants indicated that monetary incentives are only one kind of incentive. For example, there are gift or in-kind incentives. Moreover, it was noted that survey practitioners already use other types of incentives to encourage response (e.g., appeals to civic duty, assurances that the data will eventually be used to help people, or promises that only a few minutes will be required). Many participants felt that monetary or in-kind incentives are simply an extension of the types of incentives already in use.

There followed a lengthy discussion about whether response rates are, in fact, declining. Although some important exceptions were noted, participants generally felt that they are. Where rates are not declining, most participants felt that other, often expensive, methods are being used to keep response rates high. If monetary incentives work, their use might actually lower total costs. In a time of declining budgets for survey activities, this could be especially important.

At a more fundamental level, participants pointed out that concentrating on overall response rates could be misleading. Increasingly, survey results are needed for subpopulations, and response rates in some of these subpopulations are quite low--low enough to raise serious questions about the quality of the survey data for those groups. There was also considerable discussion of the fact that increasing response does not necessarily increase the eventual quality of survey data. Participants expressed concern about potential bias introduced by offering incentives, especially in repeated applications, and about the possibility that an apparent gain in household response rates could be offset by a loss in quality due to biases introduced by the incentive, particularly item nonresponse.

Participants pointed out that surveys are also becoming more complex. Complexity can mean longer interviews or keeping the respondent in a sample for several rounds of interviewing. Or the interview may not be "traditional" at all: the respondent might be asked to keep a diary, take a test, save supermarket receipts, or attach a special metering box to the television. Finally, there was a discussion about the "more intrusive" nature of questions on surveys.

In addition to all of these concerns, potential effects of societal changes were noted. A variety of factors appear to be resulting in greater reluctance of respondents to participate. Changes in living patterns (e.g., more single parent households, more single person households, busy schedules, multiple residences) make it more difficult to obtain response. Concerns for security and privacy have introduced many "gatekeepers." These take such forms as telephone answering machines, executive assistants, locked gates, security systems at apartment houses, and legal staffs in large companies. All of these "gatekeepers" make it quite difficult to make contact with certain segments of the population. Even if there are not response problems now, these societal changes could amplify problems with response rates in the very near future.

Survey practitioners have faced these problems and developed many improvements to meet these challenges. Incentives are only one of these potential improvements. This discussion led to general agreement that incentives should not be considered in isolation but as "another tool in the toolbox" available to survey takers. There was also general agreement that in most cases incentives should be considered as a tool only after other potential methods to improve response (e.g., better interviewer training, better questionnaires, "creative" call-backs) have been exhausted.

Two broad restatements of the "problem" that emerged from this discussion were generally used throughout the remainder of the symposium.

Do incentives offer a way, not available by other means, to improve the quality of survey data? This posing of the question incorporates the notion of incentives as another tool, their last-resort nature, and the need to look at overall quality, not simply response rates.

In more technical terms, the problem was restated as follows: How should a survey designer allocate the total available resources to the various components of the survey (including potential payment of incentives) in order to minimize the mean square error of the key statistics of interest to the survey sponsor?
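In symbols, this restatement amounts to a constrained optimization. The formalization below is a minimal sketch added for clarity, not notation from the symposium record: theta-hat denotes the estimator of the key statistic, x_k the resources devoted to survey component k (interviewer training, call-backs, incentives, and so on), c_k the unit cost of that component, and B the total available budget.

```latex
% A minimal sketch (assumes amsmath); symbols are illustrative, not from the report.
% x_k = resources devoted to survey component k (training, call-backs, incentives, ...)
% c_k = unit cost of component k;  B = total available budget.
\[
  \min_{x_1,\dots,x_K}\;
    \operatorname{MSE}(\hat{\theta})
      \;=\; \operatorname{Var}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta})^{2}
  \qquad \text{subject to} \qquad
  \sum_{k=1}^{K} c_k x_k \;\le\; B .
\]
```

The decomposition makes the symposium's point explicit: an incentive that raises response may shrink the bias term even as it consumes budget that could otherwise reduce the variance term, so the two must be weighed jointly.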

Defining Incentives

While there was no intent for the symposium participants to develop a definition of incentives, it became clear in the discussion that the term "incentive" could have very different meanings, depending on such factors as survey purpose, target population, and methodology. Among the variables that participants suggested could considerably alter the meaning of incentives and their potential benefit are the following:

*Whether the incentive is monetary or in-kind.

*The level of the incentive (increasing amounts of money; or an American flag versus a desk clock).

*Whether the incentive is on-the-spot or, for example, a chance to win a large prize in a lottery.

*Who the respondent is (e.g., an individual, the entire family, only children under a certain age).

*Whether the respondent is an institution rather than a person. (This distinction was further refined to differentiate between profit and nonprofit institutions, large vs. small businesses, etc.)

*Whether it is a one-time interview or the respondent is going to be in a panel over time.

*The mode of interview--e.g., mail, personal interview, telephone interview, taking a test, undergoing a medical exam, serving in a focus group.

*The length of the interview.

*Where the interview takes place--e.g., respondent's abode, a central location, at a hospital.

*The type of questions asked--e.g., simple facts, opinions, detailed financial data, potentially embarrassing questions.

*Whether the response offers any risk to the respondent. (For example, if the respondent is asked about illegal behavior, can law officials obtain the data? Or, if a corporation responds, can its competitors obtain the data?)

*Who the survey sponsor is--e.g., the Federal Government, the Federal Government acting through a third party, State or local government, a private survey organization, a large corporation, or a university.

*How the survey results will be used--e.g., public good, market research, information on current health crisis, political opinion poll.

*Whether the survey is mandatory.

*Whether all respondents will be paid the same incentive or differential payments will be offered.

*Whether the respondent has to bear any out-of-pocket costs.

*Whether the survey is full-scale or a pilot test or a focus group.

*Whether the respondent represents a special segment of the population--e.g., movie star, Member of Congress, doctor, CEO, priest, one of the largest 100 corporations in the U.S.

The Different Nature of Surveys of Institutions

Because the experience of most participants in the symposium was in the area of demographic surveys, the discussion centered principally on the use of incentives for surveys of persons. Nonetheless, from time to time there was discussion of institutional surveys. The following points were offered with respect to surveys of institutions:

*The situation with regard to profit versus nonprofit institutions differs.

*For the domain of private sector businesses, incentives will have varying effects depending on the size of the company, the type of business (e.g., farmers are very different from manufacturers, which in turn differ from retailers), the amount of effort required to respond, and the type of information requested.

*For Federal Government surveys of the private sector, the mandatory nature of response (when applicable) seems to be enough of an incentive to obtain acceptable levels of response.

*For larger companies, in-kind incentives may be more promising than cash incentives. The incentive may go to the person in the company who does the reporting rather than to the company as an institution.

*Some type of incentive may be necessary if the domain of the survey is top executives.

*Increasingly, companies are using legal staffs to screen and determine if requests for data should be answered. Also, several companies have policies to inhibit response to surveys for fear of losing competitive information. These trends make obtaining responses very difficult in many instances, especially if the survey response is voluntary.

*For nonprofit institutions, the respondent is often a volunteer or someone who doesn't have time during usual working hours to obtain the requested information. An incentive is often viewed as necessary to compensate the institution for the extra effort to obtain the requested information. The incentive could be as little as the cost of copy reproduction or it might be the hourly wage of the person who works overtime to obtain the information for the survey. Because of the experience of several participants, there was considerable discussion concerning the use of incentives to obtain data from available records at hospitals. In those cases, incentives which are simply repayment for the extra effort involved have proven to be at times necessary and quite effective.

*Research on the use of incentives for institutions, particularly private sector businesses, is very limited.

Special Issues for Federal Surveys

Generally, the discussion at the symposium concerned the use of incentives in surveys without regard to sponsorship. Because the ultimate aim of the symposium was to assist OMB in formulating guidelines for the review of requests by agencies to provide incentives to respondents to Federal Government surveys, portions of the discussion were directed to the special nature of government surveys. Participants chose the concept "social contract" to represent the relationship between the Federal Government and the respondents to its surveys.

All agreed on the need to collect information to aid in policy development, program planning, and other activities. At the same time, participants understood the need to review requests for data collection by Federal agencies, although some may have disagreed on the extent of this review. For its part of the "social contract," the Federal Government was viewed as obligated to review all requests for surveys to ensure that there was no redundancy, that the information requested was really for the public good, and that the "burden" on the respondent was minimized. As for respondents, their part of the "social contract" was viewed as the responsibility to provide the information requested, since it would eventually yield summary statistics for the public good.

The current OMB policy on providing incentives to survey respondents was outlined in the symposium background paper prepared by Jerry Coffey. Basically, the OMB policy has been not to approve the use of incentives unless there are "exceptional circumstances." The definition of exceptional circumstances, however, has been left up to the appropriate agency desk officer within OMB. Variations among desk officers and the lack of a definition of "exceptional circumstances" have led to inconsistency in OMB action on requests to use incentives.

A lengthy discussion followed, with many participants expressing frustration with the inconsistent application of policy and the inability to get approval for incentives they felt were justified. Many participants felt that OMB was unfairly singling out incentives for intense review and scrutiny. There was no resolution of these frustrations, but the defining of the OMB position was helpful to all.

Despite differences, there was general agreement that several aspects of Federal Government surveys could change the way one views the use of incentives relative to surveys in general. First, they are paid for with taxpayer money. Second, their goal is to produce information for the public good. And finally, government surveys can have higher visibility. These agreed-upon differences led the discussion once again to the "social contract."

Although not everyone may understand or define "social contract" the same way, participants generally felt that the government statistical system is working. In most cases, the Federal Government obtains relatively high response rates to its surveys. Because of that success, there is some reluctance to risk damaging the fragile "social contract."

This concern represents one of the reasons that OMB has been hesitant to agree to large-scale use of incentives in federally sponsored surveys. Another concern is cost-effectiveness. There was general agreement among participants, including the representatives of OMB, that there are circumstances where the use of incentives could reduce total survey costs. OMB indicated that often this cost savings is not well documented by the agency requesting the use of incentives. More important, in a practical sense, such cost savings might not be achieved.

This discussion was compounded by the question of whether it is philosophically acceptable to pay an incentive to respondents to government surveys. Part of the concern has to do with the "social contract" argument. There is an added concern about potential backlash if large amounts of taxpayer money are used to finance incentives.

Finally, there was discussion of what paying incentives today might mean for the future. This issue was often referred to as the "slippery slope" or the "give an inch, take a mile" problem. Paying an incentive is often much easier than employing other techniques to reduce response problems. There is a concern, however, that a "quick fix" today could lead to serious problems in the future. If incentives came into general use in government surveys, respondents could come to expect them. In that case, future surveys would have to use incentives, thereby raising total survey costs without necessarily alleviating response problems. Participants knew of no way to readily research this concern. Although participants generally felt the probability of this future problem was small, it was recognized as a potential difficulty that needs further thought.

While there was no immediate answer to the "slippery slope" issue, in general participants agreed with the summary offered by one of the participants. The risk of the "slippery slope" in and of itself should not be an absolute deterrent to the use of incentives in government surveys. In cases where OMB sees a significant risk, however, the requesting agency should have "higher hurdles to jump" to justify the use of incentives.

Participants then turned their attention to the question of which exceptional cases could merit the use of incentives. To help frame this discussion, participants first discussed guidelines and then listed a series of cases in which incentives might prove beneficial.

When Incentives Might be Effective

Symposium participants generally agreed that an implied policy of using incentives only in "exceptional circumstances" was too vague a rule. Most participants felt that the emphasis should not be on proving that there is an exceptional need for incentives but rather on demonstrating the substantial benefit of the incentive. In order to provide input to OMB, participants discussed the kinds of survey situations in which incentives have a high probability of being effective or necessary.

In line with the definition of incentives as a "tool" in survey methodology, participants felt it was imperative that agencies indicate what alternatives had been considered instead of incentives. If the agency could adequately document that the alternative methodologies would not work or were too expensive, that might suggest approval of the use of incentives.

There was also some discussion about a proposal that there be no OMB policy with regard to incentives, but that each agency make its own decision. OMB would, however, require that each use of incentives be accompanied by a research program and that the results be fed into a central location. While statisticians within Federal agencies might initially like the idea of no constraints, most participants realized this would be impossible. Therefore, participants turned to listing a set of circumstances in which they thought OMB should seriously consider an agency's request to use incentives. This list included:

*To encourage hard-core refusers to respond, especially in small subpopulations of interest. Using current nonresponse imputation models without adequate representation from the hard-core refusers could bias survey results enough to affect the quality of the eventual data.

*To compensate a respondent if there is risk in participating (e.g., asking questions about illegal activity).

*To engender good will when there is some evidence that cooperation is deteriorating.

*When there are unusual demands or intrusions on the respondent (e.g., lengthy interviews, keeping a diary, having a blood sample drawn, taking a test that could prove embarrassing).

*When sensitive questions are being asked.

*When there is a good likelihood a gatekeeper will prevent the respondent from ever receiving the questionnaire.

*If there is a special target population for whom encouragement will have little if any chance of working, particularly if other survey organizations pay the respondents in that group (e.g., prostitutes, the homeless).

*If there is a lengthy field period (e.g., a commitment over time for a panel survey).

*If the target population is a small group that is often surveyed, so that any particular respondent is liable to turn up in somebody's sample frequently (e.g., deans of universities, CEOs).

*If there is any out-of-pocket cost to the respondent (e.g., transportation to the interview site, babysitting costs).

*If other organizations routinely pay incentives to the target populations (e.g., doctors).

*If the population is a control group in an important (and perhaps expensive) study where it is imperative to keep most respondents in the control group sample or the result of the whole study could be vitiated.

*If the respondent is a small business or a nonprofit institution in a voluntary survey and the respondent perceives some cost and burden to participating.

In summary, most participants agreed with the general thesis that "incentives should be considered whenever the positive forces to cooperate are low."

A Spectrum of Potential OMB Policies

While almost everyone could agree that incentives should be considered whenever the positive forces to cooperate are low, putting that rule into practice posed greater difficulty. The next portion of the symposium was devoted to discussing a spectrum of potential policies or guidelines OMB might develop with respect to incentives. At one end of such a spectrum would be a policy of never paying incentives to respondents to government surveys. At the other end would be a policy of having no OMB policy on incentives, leaving the decision to each agency. Neither of those extremes seemed reasonable. Since the eventual OMB policy would be expected to lie between those extremes, five potential scenarios were suggested. Before discussing the potential policies, participants outlined a "standard case" for surveys where incentives generally would not be considered.

While no exact definition could be developed, the following components were viewed as comprising the standard interview case for a survey of persons:

*The population of interest is generally a cross section of everyone, such as in a national survey.

*The interview can reasonably be conducted in one session. Initially, there was a suggestion to define the standard interview as one that takes one hour or less, but after much discussion most participants felt that one session is a better definition.

*The interview takes place at a time and place of the respondent's convenience.

*The purpose of the survey is to produce data of general interest for the public good.

*The survey contains noncontroversial questions and is generally considered nonintrusive.

The participants recognized that terms were used in this definition that are not well defined themselves and could mean different things to different people. Nonetheless, this definition of the standard case was deemed sufficient for a discussion of potential OMB policies on incentives for the nonstandard case. Participants also recognized that a separate definition would have to be developed to define the standard case for a survey of institutions or businesses.

After accepting the above definition of the standard case, the participants discussed five suggested OMB incentive policies that could be used in nonstandard cases.

1. Incentives would be considered only if the respondent incurred an out-of-pocket cost.

There was considerable discussion about whether this policy really provided an incentive or simply offered an expected reimbursement. In general, participants did not want a strict accounting by respondents (i.e., unless you produce a taxi receipt, you do not get paid). Rather, participants viewed such an incentive as a lump sum payment when respondents were expected to incur a cost. How the money was used was up to the respondent.

2. Incentives would be considered if the respondent incurred an out-of-pocket cost or if the survey was too intrusive.

The discussion of this policy focused on defining "intrusive." Generally, an intrusive survey was viewed as one that makes unusual demands on the respondent, which could include:

a greater amount of time than the standard interview;

doing something painful or embarrassing;

doing something that requires some effort (e.g., taking a test);

having to go somewhere special to participate (e.g., a clinic); or

involving some risk to the respondent.

3. Incentives would be considered if the respondent incurred an out-of-pocket cost; or if the survey was too intrusive; or if the survey was aimed at a hard-to-reach population.

Participants felt that hard-to-reach really meant hard-to-interview. This category could include those who are hard to encourage to cooperate, and therefore initially refuse. In such cases, incentives might be effective. Participants felt incentives would not be effective for those who are hard to find.

Participants also included in the hard-to-interview category those who are difficult to reach by mail, those who must be kept in a sample (such as members of a control group), and those disenfranchised from society. For all except the hard-to-find, there is already a large cost involved in locating respondents for interviews, encouraging them to respond, or keeping them in the sample. Since the monetary outlay for these cases is so high already, incentives might be very cost-effective for these groups.

4. Incentives would be considered only if the sponsoring agency could show that their use would minimize the mean square error per unit cost.

Participants viewed policy four as a rule for how to evaluate policies one, two, and three. Participants felt it was important to point out that the mean square error needed to be viewed relative to the planned use of the data.
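To make "minimum mean square error per unit cost" concrete, the sketch below compares two hypothetical designs, one without and one with a prepaid incentive. Every number in it (respondent counts, bias, costs) is an invented assumption for illustration; none of these figures comes from the symposium.

```python
# Hypothetical comparison of two survey designs on MSE per unit cost.
# All inputs are illustrative assumptions, not figures from the report.

def mse_per_dollar(n_respondents, pop_sd, nonresponse_bias, total_cost):
    """MSE of a sample mean (variance + squared bias) per dollar spent; lower is better."""
    variance = (pop_sd ** 2) / n_respondents
    mse = variance + nonresponse_bias ** 2
    return mse / total_cost

# Design A: no incentive -- fewer respondents, larger assumed nonresponse bias.
a = mse_per_dollar(n_respondents=700, pop_sd=10.0,
                   nonresponse_bias=0.8, total_cost=50_000)

# Design B: $10 prepaid incentive -- more respondents, smaller assumed bias,
# but a higher total outlay.
b = mse_per_dollar(n_respondents=850, pop_sd=10.0,
                   nonresponse_bias=0.4, total_cost=60_000)

print(f"Design A (no incentive): {a:.3e} MSE per dollar")
print(f"Design B (incentive):    {b:.3e} MSE per dollar")
```

Under these invented numbers the incentive design wins despite its higher cost, because the reduction in squared bias dominates; with different assumptions the comparison could easily reverse, which is precisely why policy four puts the burden of documentation on the sponsoring agency.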

5. Incentives would be used to compensate respondents for their time and effort in participating. In this policy, response would not be viewed as something that is part of one's civic duty, but rather as an effort for which the incentive would compensate.

Generally, acceptance of this policy would imply that the "social contract" has broken down and that the government should pay for the opportunity cost of a respondent's time. While participants did not feel that this is the case in general, it could be the case for certain subpopulations (e.g., homeless, disenfranchised, prostitutes, drug dealers).

An Agenda for Further Research

Beyond the documentation provided in the background papers, the symposium was not expected to develop a catalog of all known research on the issue of incentives for survey respondents. Throughout the discussions, however, a number of conclusions from prior research on the topic were noted. These included the following:

*Incentives do work in increasing response rates, particularly in mail surveys. Most participants read the literature as showing that incentive effects are usually positive or neutral. The evidence on personal interview surveys is not so clear.

*Prepayment of an incentive is much more effective than promise of a payment, even when the promised postpayment is relatively large.

*Evidence from one private sector participant indicated that incentives work better in some lesser developed countries (Mexico, Italy) than in the U.S. and Western Europe.

*Incentives are effective when the respondent has to exert some special effort or incur some cost (e.g., take a test, collect records from the files).

*Increasingly, surveys use incentives of one kind or another, though not many Federal Government surveys do.

*Incentives have proven effective in getting past "gatekeepers" of certain professionals (e.g., doctors).

Although participants were not expected to produce a definitive list of questions to be answered, a number of key areas that need further research were identified. An overriding concern about this list of topics and the attendant research agenda should be noted. Specifically, participants expressed some reservations about the ability to design appropriate studies that would answer a number of these difficult questions with considerable confidence. Some of the issues may not be "researchable" at this point in time. A prime example is the potential effect of incentives on future attitudes of survey respondents.

The questions proposed for further research were as follows:

*Although incentives can improve response, what is their effect on data quality? Is there a problem with item nonresponse? Can incentives have an effect on the interviewer which can lead to survey bias? Is it possible that repeated use of incentives or the use of incentives in repeated interviews of the same respondent can lead to bias?

*Although incentives may increase response among initial refusals, can incentives do anything with populations that are difficult or impossible to interview?

*What is the effect of various levels of incentives? Is there a point where the incentive is so high it raises doubts in the respondent's mind about the sincerity of the survey?

*What really motivates a respondent to answer?

*Are the effects of incentives different for different population sub-groups (e.g., children, those over 80 years old, young Black males)?

*What is the public's reaction to using tax money to pay respondents to Federal Government surveys?

*What is the effect of paying some, but not all respondents?

*What kinds of incentives work for institutions, and how do those incentives vary by type of institution?

*Can Federally appropriated funds be used to pay incentives to respondents to Federal Government surveys?

*What are the long range effects of paying incentives? Could their isolated use now lead to a situation where they are so expected as to become mandatory in the future, thus raising the total cost of taking a survey?

Attachment 1: Symposium Participants

Symposium Moderators:

Julie Boatright Wilson, John F. Kennedy School of Government, Harvard University
Frederick Schauer, John F. Kennedy School of Government, Harvard University

Symposium Facilitators:

Katherine K. Wallman, Executive Director, Council of Professional Associations on Federal Statistics
Peter Bounpane, Assistant Director, US Bureau of the Census
Timothy Atkin, United States Coast Guard

Symposium Participants: Representatives from Business, Academic and Research Organizations

Sandra Berry, The Rand Corporation
Norman Bradburn, National Opinion Research Center
George Carcagno, Mathematica Policy Research
Robert Groves, University of Michigan
Gregory Hoerz, Manpower Demonstration Research Corporation
Richard Kulka, National Opinion Research Center
David Morganstein, Westat, Inc.
Paul Moore, Research Triangle Institute
Donald Rock, Educational Testing Service
Kirk Wolter, A.C. Nielsen Company

Symposium Participants: Representatives from Federal Agencies

Richard D. Allen, National Agricultural Statistics Service, US Department of Agriculture
Steven B. Cohen, Agency for Health Care Policy and Research, Department of Health & Human Services
Emerson J. Elliott, Commissioner, National Center for Education Statistics, Department of Education
Herman Habermann, Chief Statistician, Office of Management & Budget
Ronald C. Kelly, Regulatory Information Service Center
John Morrall, Office of Management & Budget
Audrey Pendleton, Planning & Evaluation, Department of Education
Clyde Tucker, Bureau of Labor Statistics
Charles A. Waite, Associate Director for Economic Programs, US Bureau of the Census
Andrew White, National Center for Health Statistics

Symposium Participants: Observers

Susan W. Ahmed, National Center for Education Statistics, Department of Education
Marcie L. Cynamon, National Center for Health Statistics
Geraldine Mooney, Mathematica Policy Research
Robert St. Pierre, Abt Associates
Robert D. Tortora, Associate Director for Statistical Design, Methodology and Standards, US Bureau of the Census
Daniel Walden, Agency for Health Care Policy & Research, Department of Health and Human Services
Edward Pat Ward, Westat, Inc.

Attachment 2: Questions Addressed in Pre-Symposium Papers

In preparation for the symposium, we ask that you furnish a brief paper outlining your perspective on the following questions. When asked to consider particular circumstances or classes of surveys, try to be as precise and specific as possible, bearing in mind such distinctions as those between household and business surveys.

1. What are the most important benefits that might be produced by cash or gift incentives(1) to respondents?

2. What are the most important risks that might be incurred through the use of such incentives?

3. In what circumstances are cash or gift incentives most likely to produce net benefits?

4. Are there particular classes of government surveys where cash or gift incentives should be considered?

5. What research should be performed to justify the use of incentives (or a particular level) in particular circumstances? (Alternatively, are there particular combinations of circumstances and incentive levels that should be regarded as relatively immune to adverse effects?)

6. What guidelines, limitations, or other principles would you recommend in selecting an appropriate incentive?

7. If you could choose no more than three research projects to address issues of cash or gift incentives in government surveys, what would those projects be?

8. In addition to the materials we reference in our pre-symposium bibliography, is there other literature that you are aware of that is relevant to the issues?

Attachment 3: Pre-Symposium Papers(2)

Allen, Rich. Providing Incentives to Survey Respondents: Perspective from USDA-NASS.

Berry, Sandra H. Providing Incentives to Survey Respondents: Answers to Pre-Symposium Questions.

Bradburn, Norman M. Thoughts on Providing Incentives to Survey Respondents.

Carcagno, George J. and Mooney, Geraldine M. Incentive Payments in Surveys: OMB Symposium (Revised).

Coffey, Jerry. Providing Incentives to Survey Respondents (Draft for Discussion Only).

Cohen, Steven B., Walden, Daniel C., and Ward, Edward P. Providing Payments to Survey Respondents: The NMES Experience.

Elliott, Emerson. The Use of Incentive in Government Surveys: Views of the National Center for Education Statistics.

Groves, Robert M. Respondent Incentives, Minimum Mean Square Error per Unit Cost, and the Resident's Responsibilities to the Polity.

Hoerz, Gregory. Perspectives on Respondent Incentives.

Kirsch, I., Rock, D., and Yamamoto, K. Providing Incentives to Survey Respondents: Responses to Pre-Symposium Questions.

Kulka, Richard A. A Brief Review of the Use of Monetary Incentives in Federal Surveys.

Moore, R. Paul. Providing Incentives to Survey Respondents.

Morganstein, David, and Waksberg, Joseph. Providing Incentives to Survey Respondents.

Morrall III, John F. An Economic Approach to the Use of Monetary Incentives in Federal Surveys.

Tortora, Robert, Dillman, Don A., and Bolstein, Richard. Considerations Related to How Incentives Influence Survey Response.

Tucker, Clyde. Providing Incentives to Survey Respondents in Government Surveys.

Waite, Charles A. Incentives to Survey Respondents: A Census Bureau Perspective.

White, Andrew A., Ezzati, Trena M., and Cynamon, Marcie L. Incentives in Government Sponsored Surveys: An NCHS Perspective.

Willimack, Diane K., Petrella, Margaret, Beebe, Tim, and Welk, Marcy. The Use of Incentives in Surveys: Annotated Bibliography.

Wolter, Kirk M. Providing Incentives to Survey Respondents.

1. The phrase "gift incentive" refers to any non-cash item or service provided to a respondent (e.g., a calculator, a special report, or a copy of survey results).

2. To obtain a copy of these papers, please contact Virginia de Wolf at OMB's Statistical Policy Office: 202-395-7314 or by email (