COPAFS
 

Survey Respondent Incentives: Research and Practice

March 10, 2008

Title: Incentives in Private Sector Surveys

Author: Barbara C. O’Hare, Arbitron Inc.

Surveys conducted by private sector organizations and commercial establishments face response rate challenges similar to those of surveys conducted by government agencies or academic institutions. There is a well-established body of literature discussing theory and experimental findings on reasons people respond to surveys. Theoretical perspectives include norms of reciprocity, social exchange, and the use of heuristics to evaluate the opportunity and cost of participating in a survey (Dillman, 1978; Groves, Cialdini and Couper, 1992). Respondent cooperation can be influenced by such attributes as topic salience, survey sponsorship, data collection mode, interviewer behavior, survey burden and incentives, among others (Heberlein and Baumgartner, 1978; Oksenberg, Coleman, Cannell, 1986; Fox, Crask, Kim, 1988; Nederhof, 1988; Church, 1993; James and Bolstein, 1990; Singer, Van Hoewyk, Maher, 2000).

Leverage-saliency theory postulates that the relative importance of survey attributes differs across respondents, and that the salience of an attribute in making the survey request will influence the participation decision (Groves, Singer, Corning, 2000). The value of incentives in the decision to participate is thus related both to other survey attributes and to the way in which these attributes are presented to the potential respondent. In a test of topic interest and saliency in the survey request, Groves, Presser and Dipko (2004) found some support for incentives leveraging the participation of respondents for whom topic interest was less important.

The survey attributes, some by definition, are closely associated with the survey organization itself; therefore, different types of survey organizations may vary in their opportunity to manipulate these attributes to effect higher response rates. Clearly, private sector survey organizations often lack the name recognition enjoyed by the “Federal Government” or the credibility of a university. As a result, commercial establishments do not have the same opportunity to emphasize sponsorship when introducing a survey.

Take for example “Your Response is Required by Law”, a message tested by Dillman, Singer and colleagues for the US Census Bureau (Dillman, Singer, Clark, and Treat, 1996). We’ve joked many times at Arbitron that we wish we could use this on the front of our pre-alert mailings.

When Arbitron measured its name recognition on a Roper omnibus survey, we found that only 27% of those surveyed said they knew of Arbitron, and when asked in an open-ended question what the company does, only 12% correctly identified us with radio ratings.

While Arbitron has tested and incorporated changes to numerous aspects of our survey appeal, including mail packaging, introductory scripts, interviewer assignments, and survey instrument design, we consistently have found that incentives are a survey attribute that we can control and that has predictable effects across broad segments of the survey population. As a result, we rely on incentives to increase survey participation rates. My knowledge of other commercial organizations indicates that many use incentives to improve response rates. Key factors that determine whether incentives will be used are likely similar to considerations in the other sectors – respondent burden, the subject matter, cooperation rates of subpopulations, and the incidence of the behaviors being measured.

The discussion I’m presenting in this paper draws primarily from my experience at Arbitron, a radio ratings company. I’ve tried to represent a range of practices in private sector survey organizations by contacting a number of different organizations, and I include that experience where I can. I would like to acknowledge Nielsen, Scarborough, Edison Media Research, and market research experts at Daston Communications and Directions In Research. [1] One of the challenges in gathering specific information from any commercial organization is the proprietary nature of their survey designs. Reliable, science-based survey organizations (including Arbitron) make a description of methodology publicly available. However, information such as exact incentive amounts is not readily available, and so the experience I share today is limited.

The use of incentives varies across organizations; incentives are one “tool in the toolbox” for obtaining respondent cooperation. Many private sector organizations have tested a wide variety of monetary and non-monetary incentives, likely because they face fewer constraints than government or academia on the type of incentives that can be offered. While money consistently outperforms non-monetary incentives, rising survey costs to maintain response rates have led to recent reconsideration and testing of non-monetary incentives. I will discuss both monetary and non-monetary incentives and when and how they are used in the commercial sector.

Decision Criteria for the Use of Incentives

First, as supported in the literature, Arbitron lives by the basic tenets on the use of incentives:

  • Prepaid incentives work better than promised incentives.

  • Monetary incentives improve response better than non-monetary incentives.

  • The effect of incremental increases in incentive amount on response rate differs across subpopulations.

Despite these common considerations regardless of organization type, the decision process of when and how to use incentives may differ from that of non-profit organizations. In particular:

  • What are the organizational research standards?

  • What are the client-driven data needs?

  • What are acceptable response rates?

  • What is the bottom line cost-benefit?

Addressing the first bullet: without government oversight, and given the wide range of purposes for private sector surveys, there is a wide range of practices across organizations. Most of the major private sector organizations, and I include Arbitron in this group, use probability samples and the science of survey methods in producing their estimates. In the case of the media ratings produced by Arbitron and Nielsen, the estimates are the currency for buying and selling advertising time and are used to set the exchange rate, or Cost Per Point. Other private sector surveys use non-probability samples and produce “quick and inexpensive” measures that quite often are adequate for their use. And still others are in-between, acknowledging their limitations but accepting them given the limited risk in the practical applications of their results.

The major private sector organizations are sensitive to abuse of incentives and respondent coercion. Most importantly, we are concerned about bias in the estimates. Decisions about the use of incentives are in the wider context of the survey design. Incentives are an effective tool that we can control. And, a reality of a private sector organization is that the bottom line expense of incentives keeps incentive amounts in check!

Within the major private sector survey organizations, any change in survey design, such as incentives, is experimentally tested before implementation. Concerns in testing include the effect on response rates, on demographic representation across respondents, and on the key estimates. The commitment to reliable estimates is critical for us.

Another aspect of the decision process to use incentives is the expectation of the clients and their data needs. Like any survey, the survey design in commercial surveys is driven by the purpose and use of the estimates. And, although millions of dollars can be at stake if radio and TV ratings are in error, or a poll estimate is in error, it is a different consequence than millions of dollars at stake in a healthcare program or federal funding decision. Just to give you a sense, in a major market, one point in the radio rating for a station (representing the percent of the population listening) can mean $1 million in advertising revenue for the station. Generally, marketing decisions are driven by trying to reach particular demographic groups. To the methodologist, representation across the population is a concern about bias; to the clients, it’s a matter of how well you’ve been able to measure their consumers. For many in the private sector, this has led to differential survey incentives. Arbitron has been targeting incentives based on the demographics of the household for decades. It is easy to understand that radio listening and station preference are demographically related and that different demographic groups respond to surveys at differential rates. Add to this that young adults, blacks and Hispanics are among the heaviest radio listeners and respond to surveys at lower rates, and you have the case for differential survey treatments.

Addressing the third bullet – acceptable response rates – we are definitely challenged in the private sector. There are a number of factors that contribute to what is acceptable. First, experience has shown that response to commercial surveys, especially among the general public, is generally lower than noncommercial surveys. As a result, what is acceptable has to be realistic. Second, quite often commercial surveys are on short-term deadlines limiting the field period and opportunity for multiple conversion attempts. For example, the Arbitron diary-based radio listening survey can be considered 48 one-week surveys with a field time of not much more than 6-8 weeks. Response rates in the 30s and 40s, and more recently even lower, are common. Incentives thus become a critical tool in being able to boost response rates when other survey design options are limited.

Last but not least is the contrast with other sectors in the cost-benefit decision on the use of incentives. Yes, the effect on profitability is a consideration, weighed in conjunction with the benefit to response rates and the quality of the estimates. Use of any incentive in a survey is first evaluated through a cost model. Using our current operational costs for interviewing, materials, incentives, postage, and processing, along with the estimated response rate gain (based on field test results), we decide whether the cost of the new incentive “pays for itself” by reducing the starting sample. Frankly, incentives have not been cost neutral over recent years. Rather, the decision is made considering not only operational costs but also the value to the quality of the estimates. Quite often, it is a discussion of the best way to spend our budget to meet our research standards and meet our client needs.
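To make the cost-model logic concrete, here is a minimal sketch in Python. The dollar figures, response rates, and function are hypothetical illustrations, not Arbitron’s actual cost parameters; the point is the break-even comparison between fielding a larger sample without the new incentive and a smaller sample with it.

    # Minimal sketch of an incentive cost model, with hypothetical numbers.
    # Question: does the response rate gain from a new incentive let us cut
    # the starting sample enough that the incentive "pays for itself"?

    def survey_cost(target_completes, response_rate, cost_per_contact, incentive_per_contact):
        """Total cost to field enough sample to reach a target number of completes.

        Assumes a prepaid incentive, so it is spent on every mailed piece,
        not only on completed surveys.
        """
        starting_sample = target_completes / response_rate
        return starting_sample * (cost_per_contact + incentive_per_contact)

    # Hypothetical baseline: $4 per contact for materials, postage, and
    # processing, no incentive, 30% response. Candidate design: add a $2
    # prepaid incentive, with a field test showing 35% response.
    baseline = survey_cost(1000, 0.30, cost_per_contact=4.00, incentive_per_contact=0.00)
    candidate = survey_cost(1000, 0.35, cost_per_contact=4.00, incentive_per_contact=2.00)

    print(f"Baseline:       ${baseline:,.0f}")   # ~$13,333
    print(f"With incentive: ${candidate:,.0f}")  # ~$17,143 -- does not pay for itself

In this made-up example the incentive does not pay for itself on operational cost alone, mirroring the experience described above; the decision then turns on the value of the response rate gain to the quality of the estimates.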

Oversight of Survey Practices

While government surveys have oversight through the Office of Management and Budget and university surveys have IRBs, media measurement companies are subject to the Media Ratings Council (MRC). Established in the early ’60s, the council provides oversight to assure execution of standards in media measurement. In order to receive accreditation as a service that meets the standards, companies subject their surveys to annual audits of the execution of their methodology. In addition, changes in methodology are shared with the MRC and taken into consideration when evaluating whether a measurement service meets the established standards.

Accreditation by the MRC adds value to the credibility of the ratings estimates. For the clients, accreditation assures that the ratings are the product of execution of a methodology that meets the MRC standards. For the survey organization, accreditation indicates that their procedures meet the standards, providing value to the estimates.

In summary, private sector organizations face a decision process similar to that of other survey organizations. The intent of the incentive is to engage the respondent. The dynamic between survey organization and respondent is similar, whether private or public. By establishing trust and invoking principles of social exchange, the incentive can help engage the respondent to cooperate with the survey request. Decisions to use incentives are driven by survey attributes such as respondent burden, topic, the incidence of target respondents in the population, and cooperation rates. Incentives are particularly effective in panel survey designs, where recruitment and retention are critical. It is important to build a relationship with the panelist. The incentive is a “tangible show of appreciation and intensifies the relationship” (as noted by Sandy Daston, Daston Communications).

Monetary Incentives: Current Practices and Experimental Studies

“Cash is King.” The practice of small, pre-paid monetary incentives has been found to work well in commercial surveys. This has been demonstrated in the three large media measurement surveys, through their ongoing testing of incentives and other survey design aspects. However, incentives are considered in conjunction with other survey attributes. The oft-cited study of James and Bolstein (1990) is a nice example of the trade-offs between incentives and field period in a mail survey of cable subscribers. The authors tested pre-paid incentives with values from $0.25 to $2 using the Dillman Total Design Method with four waves of mailing. They concluded it would be more cost effective to conduct a four-wave survey without an incentive than a two-wave survey with a $1.00 incentive. When follow-up mailings are limited, incentives can be an effective means of increasing response rates.

The Arbitron, Nielsen and Scarborough media and consumer measurement surveys are well established. Our survey designs are very similar, in that they are multi-stage, multi-mode surveys measuring radio, television and newspaper usage in markets comparable to MSAs across the U.S. All three companies use an RDD sample frame to conduct a first stage phone interview followed by a second-stage mail survey. In addition, Nielsen and Arbitron measure media through panel survey designs, where media measurement is through electronic meters. All three companies use incentives.

When a survey is multi-staged, there is the opportunity to deliver an incentive at different points in the survey process. Our experience and experimentation have shown that delivering small incentives along the way improves survey response rate.

The graphic below shows a contact timeline of the Arbitron diary survey. There are five mailings to the household, four of which now include a monetary incentive.

The Arbitron Incentive Structure

More than 30 years of testing have led to the current Arbitron incentive practices. Over time, we have added incentives to additional contacts during the survey process. When Arbitron first initiated the radio diary survey back in the ’60s, a prepaid incentive was built into the design and included with each mailed diary. We started with a $0.50 incentive with each diary. Over the years, things have changed.

All phone numbers to which we can append a mailing address receive pre-alert mailings. For many years, our pre-alert mailing before the initial phone stage of the survey was a postcard. After testing in the late ’90s, we began sending a letter with a $1 incentive in markets with low cooperation rates. The letter and $1 have consistently resulted in a 6 point gain in phone survey cooperation rates. As cooperation rates have fallen, we expanded its use, and as of the end of 2006, a letter and $1 incentive pre-notification is sent to every mailable household. A recent test of a $2 incentive with the pre-notification delivered mixed results.

Unlisted households, for which no mailing address can be appended, do not receive the pre-alert mailing. To counteract this potential response bias, Arbitron implemented a letter with a $1 incentive to all unlisted households, sent after the phone interview to households who agreed to participate and provided their name and address. The practice of sending this letter within 24 hours of the phone interview has increased diary return rates a few points in this segment (Arbitron internal research reports, 2003).

We have always sent a “small cash gift” with each diary. All diaries for the household are delivered in a box addressed to the adult who agreed over the phone. Each diary is in a separate sleeve within the box, along with its cash gift. As recently as the early ’90s, we still had $0.50 diary incentives. Minimum $1 diary incentives were instituted in the mid-’90s, and just this past year we raised the minimum incentive to $2. Households in low response demographic groups receive somewhat higher incentives.

In addition to the diary premium, we send a follow-up letter the first day of the diary week, with the majority of households receiving $1. Again, some households in low response demographic groups or markets receive slightly higher amounts.

The success of prepaid small monetary incentives has made them a reliable tool for addressing response rate declines in private sector surveys.

Dollar amounts of incentives are similar across organizations, and all are presented as a “token of appreciation.” Incentive amounts vary by market and demographics, and the following table is a generalization of practices within the three media measurement companies.

Table 1. Approximate Incentive Amounts for Three Media Measurement Surveys

 

                         Nielsen                Arbitron                    Scarborough
Basic incentive          $2/hh                  $2/person                   $2/person
Low response demos       $30/hh (householder)   $3-$10/person (HH member)   $5-$10/person
  (Age 18-34 HHs; African American, Hispanic)
Refusals/Noncontacts     ~ $10                  $2/hh                       $25-$35 promised

Nielsen conducted a controlled experiment in their mail survey, testing incentive amounts from $0 to $10 in increments of $1 (with the exception of a $9 group) (Trussell and Lavrakas, 2004). Further, the treatments were crossed by the outcome of prior phone contact with the household: those who had agreed to be sent diaries, those who had refused, and those with whom there was no contact. As found in the literature, a monetary incentive significantly increased survey returns versus no incentive, and the authors found increases in cooperation rates with each additional dollar. A valuable finding was the interaction between the outcome of the previous contact and the incremental change in cooperation rate. There was less cooperation benefit with each additional dollar for the group who had already indicated they were willing to participate in the mail survey. For households that had not previously committed to the mail survey, each additional dollar helped by a similar amount.
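As an aside, the crossed design just described is simple to enumerate. Here is a minimal sketch in Python, with the cell structure taken from the description above:

    # Enumerate the treatment cells of the crossed design described above:
    # incentive amounts $0-$10 in $1 increments (skipping the $9 group),
    # crossed with the outcome of the prior phone contact.
    from itertools import product

    amounts = [a for a in range(11) if a != 9]  # $0..$10, no $9 group
    contact_outcomes = ["agreed", "refused", "no contact"]

    cells = list(product(amounts, contact_outcomes))
    print(len(cells))  # 30 treatment cells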

Arbitron conducted a test of the effects of the amount and timing of incentives on diary return rate (O’Hare, 2007). We found a statistically significant gain of 8 points in return rate moving from a $1 to a $2 prepaid incentive, and a significant 5 point gain from $2 to $3. The $5 and $10 incentives resulted in smaller, non-significant increments in return per each additional dollar. Generally, we found that dollars spent earlier in the contact with the household had greater response rate benefit than dollars spent at later points.
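To see the diminishing-returns pattern as arithmetic, here is a small sketch. The $1-to-$2 and $2-to-$3 gains are the figures reported above; the gains for the higher amounts are hypothetical placeholders consistent with “smaller, non-significant increments,” since the study’s exact values are not given here.

    # Marginal return-rate gain per incremental prepaid dollar.
    # The first two entries are the gains reported above; the last two are
    # hypothetical placeholders illustrating the diminishing-returns pattern.
    gains = [
        (1, 2, 8.0),   # reported: +8 points, statistically significant
        (2, 3, 5.0),   # reported: +5 points, statistically significant
        (3, 5, 4.0),   # hypothetical total gain over two added dollars
        (5, 10, 5.0),  # hypothetical total gain over five added dollars
    ]

    for low, high, gain in gains:
        per_dollar = gain / (high - low)
        print(f"${low} -> ${high}: +{gain:.0f} points total, "
              f"{per_dollar:.1f} points per added dollar")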

In the area of market research, monetary incentives are not always used in one-time-only surveys. According to Ginger Blazier of Directions in Research, incentives are primarily used in business-to-business and medical surveys, where amounts can easily be $100 or more for these special populations. For short consumer surveys, whether conducted by phone or online, an incentive typically is not offered. However, more burdensome surveys, such as a 30-minute interview or keeping a diary, may result in an offer of $10 to $25, a gift card, or a charitable contribution.

Commercial panel surveys typically use a system of points to earn monetary or gift rewards. In the panel survey that Arbitron conducts, panelists earn points resulting in a monthly check for their participation. But even here, the amount typically is around $30 and is always described as a “thank you” or gift of appreciation.

Differential Incentives by Response Propensity

A practice used by all the media survey organizations is the targeting of incentive amounts by the likelihood of survey response, with higher incentives for lower responding population segments. When Arbitron moved from in-person placement to the current phone and mail methodology for Hispanic and black households in 1983, we also introduced “Differential Survey Treatments.” These include additional effort not only in incentives but also in additional phone calls, special mailing materials, and interviewer assignments.

We also extended these special treatments to households with men ages 18 to 34. Young adult cooperation in the commercial sector is critical. Marketers and advertising agencies want to target young adults in their campaigns. Whether the interest is media usage or consumption patterns, knowing young adults’ behaviors is important. As a result, survey organizations put a great deal of effort and money into gaining cooperation from this key segment. Arbitron, Nielsen and Scarborough all send higher incentives to low responding demographic groups, including young adults, Hispanics and African Americans.

Incentive levels that were only in the $3-$5 per person range just a few years ago have doubled. And, our experience has shown through qualitative research that it’s less about the money than lifestyles, competition for their time, and limited motivation from civic responsibility. Monetary incentives still work, but it is taking increasing dollar amounts to gain the attention and cooperation of young adults.

A further note about the differential use of incentives across population subgroups: this practice is a realistic and effective tool to address representation in the estimates. Low response rates among African Americans and Hispanics are a critical issue. A recent study by Arbitron tested different incentive combinations to evaluate the response rate benefit of dollar amount and timing of delivery. Among the findings was the consistent and considerable difference in response propensity by race/ethnicity, as seen in this graph:

In this graph, you can quickly see that survey return rates for black and Hispanic households are consistently 20 or more points lower than those for the non-black and non-Hispanic group. While, unfortunately, small sample sizes in the individual test groups by race/ethnicity resulted in few statistically significant differences, the overall pattern is telling. It suggests that to achieve parity in survey return rates by race/ethnicity, incentives for the low responding segments would need to be more than 4 to 5 times as high in total value.

Leverage-saliency theory suggests that incentives have their largest effects when response propensity is low (Singer, et al., 1999). These test results support this theory, as do the findings of Trussell and Lavrakas (2004), who saw increases in cooperation with each additional dollar among reluctant respondents, but not among the pre-committed respondents.

Promised Incentives

Because of the growing expense of incentives and evidence of diminishing returns with each additional prepaid dollar, Arbitron and Scarborough have both tested promised incentives in combination with small prepaid incentives in recent years. In 2001, in separate test groups, Arbitron offered a promised $5, a promised $10, and a $10 gift card (Arbitron AAPOR presentation, 2003). The $5 monetary incentive and the $10 gift card each increased overall return rates by a statistically significant 5 points, and the $10 monetary promised incentive produced a significant 11 point gain (p < .05). More importantly, when we examined diary returns by household demographics, we saw a minimum 10 point gain across all the low response groups. As a result, since 2002 we have offered a $10 promised incentive to Hispanic and black households in selected markets. Of course, this is a costly incentive: in addition to the incentive itself are the costs of fulfillment and administration, including abiding by escheatment laws.

We tested extending the promised incentive to the phone stage by offering a $5 promised incentive if the household agreed to receive the diaries. Phone numbers with addresses were sent a letter with $1 and a special notice indicating we would give them a $5 “thank you” if they agreed to participate in the survey when we called. After careful analysis, we discovered that the $5 promised incentive gave us no improvement in consent over the $1 prepaid incentive alone. We were pleased to find, however, that the $5 we sent to the household the day after they agreed to participate had a carry-through effect, boosting diary return rates by a statistically significant 6 points (p < .05). The results of this study led us to introduce a $5 “thank you” in the letter after the phone interview to boost diary return rates in selected markets and demographic groups.

Market research firms often present their incentives as promised incentives, conditional on completing the phone or online survey. This allows the organization to offer incentives of notable amounts, such as $10, in order to gain cooperation. For shorter surveys, incentives may not be offered at all. Edison Media Research uses a technique of providing a postcard with a survey website and a promised incentive for completion.

Non-monetary Incentives

As is found across survey organizations, commercial surveys have had limited success with non-monetary incentives. Yet, we continue to test non-monetary alternatives for a few reasons:

  • A non-monetary gift may be perceived by the recipient as having a value beyond its actual financial worth. This is sometimes referred to as “trophy value” because the gift is unexpected or is an item the recipient would not have treated themselves to otherwise.

  • Money has the risk of more likely being seen as “payment for time,” a perception that all survey organizations want to avoid.

  • Non-monetary incentives can save the survey organization money through volume discounts on the purchase of the incentive or savings from unredeemed gift cards. As a result, the perceived value to the recipient is greater than the cost to the organization.

For these reasons, testing of non-monetary incentives continues in the private sector. These have run the gamut from key chains and pens to gift cards and magazine offers. None have been found to work as well as money, or to be more cost-effective. Arbitron, Nielsen and Scarborough do not currently use non-monetary incentives in place of money in their surveys, but all are reviewing and testing the options.

Occasionally we hear the suggestion to offer music downloads or a chance to win a Bose radio. While this incentive, on the surface, might seem to engage the respondent in the topic of the survey, it also has the potential to change their radio listening behavior and bias the survey results.

With the rapid growth in their use in recent years, gift cards offer the most promise as an alternative to cash incentives. In 2006, Nielsen began testing non-monetary incentives in the form of bank checks (payable as “cash”) and bank gift cards as alternatives to money. These forms of “money” can be used for just about any financial transaction by the recipient and are possibly more cost effective than monetary incentives when checks or cards are not redeemed.

The cards function similarly to debit cards. One of the benefits of gift cards is that they can be activated after the recipient has taken an action, such as returning their survey. On the other hand, the incentive amount needs to be large enough to make it worthwhile to the respondent and to the organization. In Nielsen’s case, they tested a $40 check and a $40 gift card against $30 in bills. Their testing of gift cards continues.

Considerations in the use of gift cards include ease of use for the recipient and administrative factors, such as:

  • Is the card easy to use? (e.g., signature required? Easy to get balance?)

  • Is it readily accepted by nearly all retailers?

  • Can an expiration date be assigned to the card?

  • Do escheatment laws cover the unused cards or can the unused card values be recouped by the survey organization?

  • What is the administration cost per card? What are the payment terms with the card supplier? Upfront money? Payment upon use?

Another non-monetary incentive that Arbitron tested a few years ago was a magazine offer to encourage diary return among young men. Magazine subscriptions can be purchased in volume from brokers at deep discounts (as little as $1 to $3 per subscription), making them a cost-effective and attractive alternative to money. We tested an unconditional offer of a subscription, along with the same prepaid monetary incentive, in the diary package. The offer was presented in a separate flyer designed specifically to target men ages 18 to 34, and included a business reply card where they could indicate their choice among 4 different magazines. Results were disappointing: we saw no overall statistically significant increase in diary return among young men, and the magazine subscription was requested by only 12% of households. Clearly, this was a test of the entire package offering the subscription; the mailer and the choice of magazines would have affected the observed results. Further testing would reveal which elements of the offer, if any, are most effective.

Panel surveys typically use a system of points to earn rewards. Because of the long term relationship established with the respondent, points are an effective reward for participation without a direct dollar value, and the panelist has some control over their reward through their level of participation. For the survey organization, the number of points earned can differ depending on the behavior to be reinforced. To retain a panelist, the survey organization can offer bonuses at key points of attrition, such as 3 months. Providing the panelist a goal to work toward helps build and maintain the relationship. Often, points can be redeemed through a gift catalogue. The value of the items to be “purchased,” as well as the point values, differs across surveys. Lowest item values are typically $10 to $15, and prizes are still presented as a “token of appreciation for your time.” Catalogue items can be included to appeal to particular demographic segments (Sandy Daston, Daston Communications).
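For readers who think in terms of systems, the following is a minimal sketch of how such a points program might be structured. The class, point values, and 3-month bonus threshold are illustrative assumptions, not any vendor’s actual implementation.

    # Hypothetical sketch of a panel points ledger: points credited per
    # reinforced behavior, a retention bonus at a known attrition point
    # (e.g., month 3), and redemption against a gift catalogue.
    from dataclasses import dataclass, field

    @dataclass
    class Panelist:
        panelist_id: str
        months_active: int = 0
        points: int = 0
        ledger: list = field(default_factory=list)

        def award(self, behavior: str, points: int):
            """Credit points for a behavior the organization wants to reinforce."""
            self.points += points
            self.ledger.append((behavior, points))

        def month_tick(self, bonus_months=(3,), bonus=500):
            """Advance one month; pay a bonus at known attrition points."""
            self.months_active += 1
            if self.months_active in bonus_months:
                self.award("retention bonus", bonus)

        def redeem(self, item: str, cost: int) -> bool:
            """Redeem points against a catalogue item if the balance allows."""
            if self.points < cost:
                return False
            self.points -= cost
            self.ledger.append(("redeemed: " + item, -cost))
            return True

    p = Panelist("HH-001")
    p.award("weekly compliance", 100)
    for _ in range(3):
        p.month_tick()
    print(p.points)  # 600: 100 for compliance + 500 retention bonus

The design point is that the award schedule, not the panelist, determines which behaviors earn points, which is how the organization reinforces specific behaviors and times bonuses to known attrition points.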

Online surveys, such as those conducted by NFO or Harris Interactive, often use panels. These surveys typically also deliver their incentives online, such as payment in points for being on a panel. A one-time-only survey might use payment through PayPal or another reward system. A recent study examined the use of incentives in a web survey of a professional association on “trends and concerns in real estate” conducted in 2001 (Bosnjak and Tuten, AAPOR). A $2 prepaid incentive delivered through PayPal, a $2 promised incentive through PayPal, and a sweepstakes offering a chance to win $50 or one of two $25 prizes were tested as incentives. The authors found the sweepstakes offer resulted in a significantly higher number of respondents visiting the website. This finding held in survey completion rates (65% sweepstakes vs. 56% prepaid and 58% promised).

Sweepstakes are used at times in commercial surveys. In panel surveys, they may be used in addition to the point system, providing an opportunity to win the “big prize.” For one-time-only market research phone surveys, sweepstakes have been used for interviews of 10 to 15 minutes in length with a chance to win $100. Longer surveys that offer sweepstakes typically also offer a small incentive, such as a gift card for a product of interest to a wide audience (per communication with Sandy Daston, Daston Communications).

Non-monetary incentives continue to be an appealing alternative to monetary incentives for survey organizations in the private sector. With increasing monetary incentive amounts yielding diminishing gains for each additional dollar, and with volume discounts on the face value of non-monetary incentives potentially making them more cost effective, we continue to “scan the horizon” as new possibilities arise.

Summary and Conclusions

Many of the considerations that drive survey organizations to use incentives are the same, regardless of the survey sector. A credible, probability-based survey must address the issues of the reliability and validity of the estimates regardless of the behavior of interest. On the other hand, the private sector decision process reflects issues of client needs and profitability that differ from the noncommercial sector.

This paper has highlighted current practices across a variety of private sector survey organizations. In general, we can say:

  • Incentives are quite often used as one tool in the toolbox to increase response rates.

  • To improve representation of low response demographic groups, differential incentives are used by some organizations.

  • Probability samples and survey quality are critical to the value of the estimates. Incentives are used to help achieve these standards.

  • Cost models are used to estimate the benefit of incentives, but it is increasingly less common that incentives pay for themselves in response rate gain.

  • Promised incentives, in combination with small prepaid incentives, have been found to be effective in improving response rates.

  • Non-monetary incentives retain their allure of being more cost effective, but there is still limited success in finding a reliable replacement for money.

Thank you for this dialogue on the role of incentives across sectors of survey organizations. As we all know, the decision to participate in a survey is the result of a complex relationship between survey attributes and respondent characteristics. There is no one set of rules describing how and when to use incentives, but the opportunity to better understand incentives in the larger context of the survey decision is critical to advancing survey methods.

References

Brick, J. Michael, Jill Montaquila, Mary Collins Hagedorn, Shelley Brock Roth, and Christopher Chapman. 2005. “Implications for RDD Design from an Incentive Experiment.” Journal of Official Statistics. Forthcoming.

Church, Allan H. 1993. “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis.” Public Opinion Quarterly 57:62-79.

Council of Professional Associations on Federal Statistics (COPAFS). 1992. Providing Incentives to Survey Respondents: Final Report. Papers Presented at the Providing Incentives to Survey Respondents Symposium, Harvard University, Cambridge, MA.

Cantor, David, Barbara C. O’Hare and Kathleen D. O’Connor. 2007. “The Use of Monetary Incentives to Reduce Non-Response in Random Digit Dial Telephone Surveys.” In Advances in Telephone Survey Methodology, edited by J. Lepkowski, C. Tucker, J.M. Brick, E. de Leeuw, L. Japec, P. Lavrakas, M. Link, and R. Sangster. New York: Wiley.

Curtin, Richard, Stanley Presser, and Eleanor Singer. 2005. “Changes in Telephone Survey Nonresponse over the Past Quarter Century.” Public Opinion Quarterly 69:87-98.

Curtin, Richard, Eleanor Singer, and Stanley Presser. 2005. “Incentives in Telephone Surveys: A Replication and Extension.” Unpublished manuscript.

Dillman, Don A. 1978. Mail and Telephone Surveys: The Total Design Method. New York: Wiley.

Dillman, Don A. 2000. Mail and Internet Surveys: The Tailored Design Method. New York: Wiley.

Dillman, Don A., Eleanor Singer, Jon R. Clark and James B. Treat. 1996. “Effects of Benefits Appeals, Mandatory Appeals, and Variations in Statements of Confidentiality on Completion Rates for Census Questionnaires.” Public Opinion Quarterly 60:376-389.

Fox, Richard J., Melvin R. Crask and Jonghoon Kim. 1988. “Mail Survey Response Rate: A Meta-Analysis of Selected Techniques for Inducing Response.” Public Opinion Quarterly 52:467-491.

Groves, Robert M., Robert B. Cialdini, and Mick P. Couper. 1992. “Understanding the Decision to Participate in a Survey.” Public Opinion Quarterly 56:475-495.

Groves, Robert M., and Mick P. Couper. 1998. Nonresponse in Household Interview Surveys. New York: Wiley.

Groves, Robert M., Eleanor Singer, and Amy Corning. 2000. “Leverage-Saliency Theory of Survey Participation.” Public Opinion Quarterly 64:299-308.

Groves, Robert M., Stanley Presser, and Sarah Dipko. 2004. “The Role of Topic Interest in Survey Participation Decisions.” Public Opinion Quarterly 68:2-31.

Heberlein, T.A. and Robert M. Baumgartner. 1978. “Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Analysis of the Published Literature,” American Sociological Review 43(4): 447-462.

James, Jeannine M. and Richard Bolstein. 1990. “The Effect of Monetary Incentives and Follow-up Mailings on the Response Rate and Response Quality in Mail Surveys.” Public Opinion Quarterly 54:346-361.

O’Hare, Barbara C., Dan Singer, Anna Fleeman, and Robin Gentry. 2007. “Encouraging Respondent Cooperation: Experiment on the Timing and Amount of Incentives.” Paper presented at the Annual Meeting of the American Association for Public Opinion Research, Anaheim, CA.

Oksenberg, L., L. Coleman, and C.F. Cannell. 1986. “Interviewers’ Voices and Refusal Rates in Telephone Surveys.” Public Opinion Quarterly 50(1): 97-111.

Singer, Eleanor. 2002. “The Use of Incentives to Reduce Nonresponse in Household Surveys.” Pp. 163-177 in Groves, R., Dillman, D., Eltinge, J. and R. Little (eds), Survey Nonresponse. New York: Wiley.

Singer, Eleanor, Robert M. Groves, and Amy Corning. 1999. “Differential Incentives: Beliefs About Practices, Perceptions of Equity, and Effects on Survey Participation.” Public Opinion Quarterly 63:251-260.

Singer, Eleanor, John Van Hoewyk, and Mary P. Maher. 1998. “Does the Payment of Incentives Create Expectation Effects?” Public Opinion Quarterly 62:152-164.

Singer, Eleanor, John Van Hoewyk, and Mary P. Maher. 2000. “Experiments with Incentives in Telephone Surveys.” Public Opinion Quarterly 64:171-188.

Traub, Jane, Kathy Pilhuj, and Daniel T. Mallett. 2005. “You Don't Have to Accept Low Survey Response Rates: How We Achieved the Highest Survey Cooperation Rates in Company History.” Poster presented at the Annual Meeting of the American Association for Public Opinion Research, Miami Beach, FL.

Trussell, Norm and Paul J. Lavrakas. 2004. “The Influence of Incremental Increases in Token Monetary Incentives on Mail Survey Response.” Public Opinion Quarterly 68:349-367.

[1] The author would like to thank colleagues who contributed their comments on the general practices within their survey organizations: Jane Traub, Scarborough Research; Michael Link, Nielsen; Sandy Daston, Daston Communications; Ginger Blazier, Directions In Research; and Larry Rosin, Edison Media Research.