COPAFS
 

COPAFS Incentive Conference
Session II: Use of Incentives – Who, What, Where, When, Why, and How?

Introduction

The use of incentives has become a routine part of many survey research efforts, viewed by many as a necessity for obtaining respondent cooperation and ensuring proper sample representation. Yet, despite the growing body of published (and oftentimes unpublished) research on the topic, a number of fundamental questions remain unanswered. Moreover, the nature of the questions posed has changed over the sixteen years since the first incentives conference, moving away from the basic question of whether incentives should be used and toward an array of more complex queries: Even if incentives improve cooperation, do they really reduce bias in the estimates produced? How do respondents view incentives? Are researchers setting monetary expectations for survey participation, or is the need for incentives simply a reflection of a shift in societal norms away from a more altruistic view of survey participation and toward an economic information exchange model? While the use of differential incentives for different groups may prove successful, is the practice “fair”? Does it matter if it is or is not? At its core, is the use of incentives coercive or just an operational necessity?

This chapter provides a rich diversity of perspectives on these and related topics, pointing to several new paths of inquiry, and, in some instances, providing a few potential answers to these vexing issues. To ensure that we have captured the richness of the discussions from the COPAFS seminar, the chapter is structured a bit non-traditionally. First, we provide the opening remarks from four leading authorities in this area:

  • Richard A. (Dick) Kulka, Ph.D. has served on the staff of four major survey and social science research organizations, most recently as a Senior Vice President with Abt Associates Inc., where he leads the company’s Survey Methodology & Data Capture and Strategic Business Development activities. He was an active participant in the 1992 COPAFS/OMB Symposium on Providing Incentives. On several occasions since that time he has had the opportunity to address and comment on what has now become a burgeoning research literature and to highlight some of the major practical and research questions and issues raised by the increasing use of respondent incentives in a broad range of surveys.

  • Paul J. Lavrakas, Ph.D. is a research psychologist and methodological research consultant. He is editor of the Encyclopedia of Survey Research Methods, which is forthcoming from Sage in summer 2008. From 2000 to 2007, Paul was the chief research methodologist for The Nielsen Company. Prior to that, he was the founding faculty director of the Northwestern University Survey Laboratory (1982-1996) and the Center for Survey Research at Ohio State University (1996-2000).

  • Stanley Presser, Ph.D. is a faculty member of both the Sociology Department and the Joint Program in Survey Methodology at the University of Maryland. Among other topics, his current research focuses on the causes and consequences of survey nonresponse.

  • David Cantor, Ph.D. is an Associate Director for Survey Methodology at Westat and a Research Professor at the Joint Program in Survey Methodology at the University of Maryland. He has conducted research on incentives in a wide variety of household and establishment surveys, including RDD, mail, and in-person interviews. One of his current interests is developing methods to modify or replace RDD for purposes of generating general population estimates.

Second, an overview or distillation of the session is provided, noting some of the discussion highlights and key issues raised. Finally, a complete transcript of the session is included, with only minor edits to improve the flow. In this manner we hope to inform the discipline and stimulate additional work in this area to further our understanding of incentives in survey research. In addition to the four panelists, we would like to thank the session rapporteurs, Allison Ackermann (Abt Associates/SRBI) and Diane Willimack (Office of Management and Budget), for their detailed note taking and session summaries, without which this chapter would not be possible.

Michael W. Link, Ph.D. (The Nielsen Company)
Session Moderator

Opening Remarks by Panelists

Opening Remarks by Richard A. Kulka

As we have heard, a key driver for the current seminar was the 1992 Symposium on Providing Incentives to Survey Respondents hosted by COPAFS on behalf of OMB. As a means of stimulating and furthering our discussion here, I agreed in my opening remarks to provide a brief context for the issues emanating from that symposium and its aftermath that have driven a great deal of the dialogue that continues to this day, albeit with substantial additional knowledge and in a significantly different survey research environment. In their paper, Brian Harris-Kojetin and Diane Willimack have already mentioned some of the issues and conclusions that emerged from the discussions of survey professionals at that “consensus conference,” but I would like to highlight some of those, while adding a few others that relate in particular to the questions we wish to cover in this panel discussion.

First, there has been an evolving discussion over many years on defining the “standard case,” where respondent incentives should likely not be used and/or are expected not to be particularly effective. These efforts trace back as early as 1975 to a consensus-building discussion by survey methodologists at the first Health Survey Methods Conference, where attendees concluded that “when respondents are being asked to accept a moderate task, within the range of a standard one-time interview of about an hour, compensation does not have a significant effect on response rate. However, when the positive forces on respondents to cooperate are fairly low—as in a mail survey—or when a great deal is being asked of respondents, compensation appears to be helpful” (Cannell and Fowler, 1977, p. 16). As noted by Brian and Diane, the definition that emerged from the 1992 symposium of a “standard case,” where incentives would generally not be considered, more or less endorsed (and only slightly elaborated on) these basic principles:

  • The population of interest is generally a cross-section of everyone, such as a national survey;

  • The interview can reasonably be conducted in one session;

  • The interview takes place at a time and place of the respondent’s convenience;

  • The purpose of the survey is to produce data of general interest for the public good; and

  • The survey contains noncontroversial questions and is generally considered nonintrusive.

A key question today is the extent to which the current array of surveys in which respondent incentives are used continues to exclude such “standard cases.”

Second, once again from this consensus-building gathering, attendees agreed that the circumstances where the use of respondent incentives might be indicated, effective or required include:

  • To encourage hard-core refusers to respond, especially in small subpopulations of interest;

  • To compensate a respondent if there is risk in participating (e.g., asking questions about illegal activity);

  • To engender good will where there is some evidence that cooperation is deteriorating;

  • Where there are unusual demands or intrusions on the respondent (e.g., lengthy interviews, keeping a diary, having a blood sample drawn, taking a test that could prove embarrassing);

  • When sensitive questions are being asked;

  • When there is a good likelihood a gatekeeper will prevent the respondent from ever receiving the questionnaire;

  • If there is a special target population for whom encouragement alone will have little if any chance of working;

  • If there is a lengthy field period (e.g., a commitment of time for a panel survey);

  • If the target population is a small group that is often surveyed (e.g., deans of universities, CEOs);

  • If there is any out-of-pocket expense to the respondent (e.g., transportation or baby sitting costs);

  • If other organizations routinely pay incentives to the target population (e.g., doctors);

  • If the population is a control group in an important (and perhaps expensive) study where it is imperative to recruit and maintain such subjects in the study.

A key question is to what extent the surveys providing respondent incentives conducted since that time meet one or more of these general criteria, or whether there are other major factors, not on this list, driving at least the perceived need for the use of incentives.

Third, as noted first by Hermann Haberman, and then by Brian and Diane, there were some deep concerns expressed at the symposium about the potential over time of creating a “slippery slope,” whereby respondents who have previously participated in government surveys, at least, out of a sense of civic duty come to expect to be paid for providing any information. While there are obviously many respondents who continue to participate in surveys across a broad spectrum without remuneration, there are also some significant indications today that concerns about a slippery slope were not entirely unfounded.

More generally, it is clear that a lot has happened since the 1992 symposium, and that the use of respondent incentives, and the ways in which they are used, have proliferated rapidly over that period, not only in the private sector, where their use was already well established, but also in academic and government surveys, where the prevalence of their use has grown rapidly. In effect, the questions posed today with regard to the use of respondent incentives in surveys are far less about whether and where they should be used, and far more about how they can be used most effectively to increase survey response, reduce nonresponse bias, decrease costs, and avoid unintended consequences. Nevertheless, it is still important to note that, in spite of hundreds of applications and a now voluminous research literature compiled over nearly two decades, the emphasis and knowledge gained from this additional experience remain very prominently focused on increasing response rates, with far less emphasis on these other criteria or on the role of respondent incentives in the context of other factors and approaches to addressing nonresponse, nonresponse bias, and their effect on survey estimates.

Current Issues in the Use of Respondent Incentives

Finally, let me briefly mention some of the key questions or issues that have emerged over this period with regard to the use of respondent incentives, many of which have been answered at least in part by survey practice, and others which may take on new meaning in today’s survey research climate.

Direct impacts on Survey Statistics. First, despite mounting evidence that incentives can and do improve response rates, there has been some significant caution regarding their use because of their potential (and uncertain) impact on survey statistics. An incentive payment can obviously directly change survey statistics in two basic ways:

  1. An incentive can alter the sample composition of respondents (e.g., by converting reluctant or hard-to-reach sample members who otherwise would not have completed the survey); and

  2. An incentive can change survey estimates through its interaction with the actual responses provided by participants (including potential effects on item nonresponse or on how they answer survey questions).

And both can be either for better or worse.

Indirect impacts on Survey Statistics. Incentives can also have an indirect impact on survey statistics by, for example:

  1. Reducing data collection costs (e.g., interviewer effort);

  2. Influencing interviewer expectations (either positive or negative);

  3. Encouraging falsification among some interviewers.

Practical or Operational Issues

  1. Levels of incentive payments. How much is enough remains an important and largely unanswered question. The notion of a “token of appreciation” is still alive, but payment for time and services appears to be gradually becoming more common. The reviews we have just heard, especially for academic surveys, show a very broad range of incentive payment levels, and some are very large indeed.

  2. Timing of incentive payment. As suggested by Barbara O’Hare, the efficacy of prepaid versus promised incentives, once believed to be well established, needs to be revisited. And the use of incentives in panel studies has raised important issues regarding timing, levels, and their interaction.

  3. Form of incentive. Ways of providing cash or cash-equivalent incentives have evolved with the evolution of new survey modes, but the provision of non-monetary incentives is being revisited as well.

  4. Differential effects of incentives by target population. The protocol for many surveys assumes that incentives will be more effective for certain populations or subgroups, and the incentive protocols for several ongoing surveys have tested and operated under this assumption.

  5. Conditional use of respondent incentives. The conditional use of incentives has taken on increasing importance in recent years in two basic ways (both of which are increasingly common in everyday use):

    • Should all respondents be paid the same incentives, or should consideration be given to different levels or types of remuneration for different respondents?

    • Should consideration be given to paying some but not all respondents in a given survey? In particular, should incentives be used only for reluctant respondents and/or refusal conversion?

These are some of my opening thoughts, and I look forward to our discussion of these and other issues.

Opening Remarks by Paul J. Lavrakas

Thank you. Some of the things I'm going to say today, I presume many of you will not agree with, and that's the purpose, to be provocative. So if I see heads shaking this way, I won't be surprised.

For the last seven years, I had the opportunity to work with Nielsen Media Research, the largest and most profitable research company in the world. They invest a lot of money in studying ways to improve the data quality of their services. As a research psychologist, I had the opportunity to plan and interpret what I believe were the largest experimental designs that probably anyone on the planet was conducting. We're talking about tens of thousands of cases assigned randomly to different conditions, sometimes tens of thousands of cases in each condition. And a great thing about being with Nielsen was that there was no restriction on disseminating this information at conferences, so I think my department and I had more than 20 papers for the Joint Statistical Meetings and such. So nothing that I'm saying today, or in the comments I might make later, is proprietary information. It has all, in one way or another, been reported in public, either in these papers or in a publication we had in POQ.

I’d like to discuss what I believe are the top priorities and new research that we need in the area of incentives. Not just one study, but rather a set of research programs that need to be invested in. And we will see whether you agree or not as the day goes on.

Survey incentives have effects on various types of dependent variables. Obviously, response rates; obviously, data quality, although we have not heard much yet on data quality. Ultimately, our biggest concern is nonresponse bias, but we also have what we call sample representation effects. When you pay the same incentive across the board to everyone, there is a tendency to exacerbate the sample representation problem. You get what might be termed a “fan-spread” result: yes, everyone's response rate goes up, but the rates for the people most likely to respond without an incentive go up even higher, so a one-size-fits-all approach to incentives exacerbates the problems with your unweighted sample.

There are a lot of knowledge gaps, and that’s where I am asking us, or encouraging us, to do more experimental research. These are not necessarily listed in the order I think is most important; it's more a matter of where each fits within the survey process.

Using a Cash Incentive with an Advance Contact Mailing

Past research has shown that advance contact, used to gain cooperation from a respondent, can raise final response rates. This effect on final response rates is enhanced when a small cash incentive is used as part of the advance contact. As part of a large national experiment reported by Nielsen at AAPOR 2005, it was found that adding $2 to an advance mailing prior to reaching a household via telephone, and then mailing a diary to the household with an additional noncontingent cash incentive, did not raise final response rates beyond what occurred when a household was sent an advance contact without any incentive but received the same total amount of incentive along with the diary in the form of one noncontingent cash incentive.

New experimental research is needed to determine if there is a threshold in the amount of the cash incentive that is sent in an advance contact mailing that would raise final response rates beyond what would occur if the amount of that incentive were merely bundled along with whatever noncontingent incentive would be given at the time the household is contacted for data collection purposes. That is, it is not known now whether there is any cost-benefit effect in sending advance contact incentives in studies in which an incentive also is given at the time of data collection. In addition, essentially nothing is known about these effects on sample representation (i.e., the demographic and psychographic make-up of the unweighted responders) and possible nonresponse bias.

Using Large Contingent Incentives vs. Small Noncontingent Incentives

There is considerable past research that shows that a cash incentive of a given value is much more effective in raising final response rates if it is given as a noncontingent incentive than if it is given as a contingent (promised) incentive. This follows from Social Exchange theory as discussed by Dillman and others.[1] Other research has shown that lower value noncontingent incentives are able to stimulate higher response rates than somewhat higher value contingent incentives. In essentially all of this research, the incentives were given as “a token of appreciation” for the respondent’s cooperation in completing the survey task.

However, it is not known if there is a cost-effective threshold in the amount (value) of a promised incentive (given only to cooperating respondents) that would bring about a cost-beneficial effect on final response rates compared to a smaller value noncontingent incentive (given to all respondents). Furthermore, in deploying large valued contingent incentives, it is not known whether characterizing (framing) these as “payments for your time” (i.e., payment for services rendered, as in an economic exchange) rather than as merely “tokens of appreciation” (i.e., a courtesy “thank you,” as in a social exchange) might further raise final response rates, although Biner and Kidd (1994) reported suggestive results from an unconfounded experimental design to this effect.[2]

New experimental research is needed to determine if there is a threshold in the amount of a large cash contingent incentive, which every cooperating respondent would be eligible to receive (i.e., not an incentive in the form of a “sweepstakes” offer), that would raise final response rates in a cost-beneficial manner compared to the use of a lower value incentive given to all respondents regardless of whether they cooperated. Nor is it known what effect characterizing a large incentive as an economic exchange has on its deployment. In addition, essentially nothing is known about these effects on sample representation, data quality, and possible nonresponse bias.

Response Propensity Modeling and the Targeting of Differential Incentives

Trussell and Lavrakas (2004) reported on a large national experiment showing that the effects of incremental increases in noncontingent incentives were mediated by the nature of prior contact a survey organization had with the respondents.[3] In discussing their results, the authors suggested that there is no optimal level of incentive for all respondents and that a one-size-fits-all approach to allocating incentive treatments makes no theoretical sense.

Lavrakas and Burks reported at the 2004 and 2005 AAPOR conferences on a series of other large Nielsen studies.[4] These studies indicated that the likelihood of a given respondent cooperating when contacted by a survey organization can be predicted at better than chance levels using data known about the respondent in advance of making the request that they cooperate in the survey task.

A logical step that follows from these findings is that a survey organization can model the propensity that a sampled respondent will cooperate when contacted and can thereby use that information to determine the value and nature of the incentive treatment to use with a given respondent. Experimental research is needed to test this hypothesis in order to determine if overall response rates can be raised, without raising total incentive costs, beyond what would be achieved using a one-size-fits-all approach to allocating the total resources used on incentives. In addition, nothing is known about these effects on sample representation, data quality, and possible nonresponse bias.
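To make the idea concrete, the sketch below shows one way such a propensity-based allocation could be set up. It is a minimal illustration, not the design of the Nielsen studies; the covariates, model, cut points, and incentive amounts are all hypothetical assumptions.

    # Minimal sketch of propensity-based incentive allocation (illustrative only).
    # Assumes frame covariates known in advance (e.g., prior-contact history,
    # geography) and a cooperation indicator from an earlier, comparable study
    # to train on. All variable names and values are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical training data from a prior wave: X = frame covariates,
    # y = 1 if the sampled case cooperated when contacted.
    X_train = rng.normal(size=(5000, 4))
    y_train = rng.binomial(1, p=1 / (1 + np.exp(-(X_train[:, 0] - 0.5))))

    model = LogisticRegression().fit(X_train, y_train)

    # New sample to be fielded: predict each case's cooperation propensity.
    X_new = rng.normal(size=(2000, 4))
    propensity = model.predict_proba(X_new)[:, 1]

    # Allocate a fixed incentive budget differentially: larger noncontingent
    # incentives to low-propensity cases, smaller ones to high-propensity cases,
    # instead of one-size-fits-all. Cut points and dollar amounts are illustrative.
    incentive = np.where(propensity < 0.3, 10.0,
                 np.where(propensity < 0.6, 5.0, 2.0))

    print(f"Mean incentive: ${incentive.mean():.2f}; total cost: ${incentive.sum():,.0f}")

In an actual experiment, the tiers assigned this way would be randomized against a one-size-fits-all control so that effects on response rates, costs, sample representation, and nonresponse bias could be compared.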

Use Higher Valued Checks Instead of Lower Amounts of Cash

Little has been reported on the effect on final response rates of using higher value checks compared to lower amounts of cash as the incentive. Nielsen reported at the 2007 AAPOR meetings the results of a large national experiment which found noncontingent checks of higher value ($10) to be extremely cost-effective, but not as effective as lower amounts of cash ($5) in stimulating final response rates.[5]

New experimental research is needed to determine if there is a threshold in the amount of a noncontingent check incentive that is given at the time data collection contact is made (i.e., it is given regardless of whether the respondent cooperates) that would raise final response rates beyond what would occur if that noncontingent incentive were instead paid in cash, but at a lower value. That is, it is not known now whether there is any cost-beneficial effect in maintaining or possibly raising response rates by giving a large incentive in the form of a check (since only those checks that are cashed before their expiration date generate an expense for a survey organization), compared to giving a smaller cash incentive, which generates an expense for the organization in the case of every respondent. In addition, essentially nothing is known about these effects on sample representation, data quality, and possible nonresponse bias.
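The cost logic here can be illustrated with a small worked example; the response rates and check-cashing rate below are purely hypothetical assumptions, not figures from the Nielsen experiment.

    # Illustrative cost comparison: a check only generates an expense if it is
    # cashed, whereas cash is an expense for every case it is sent to.
    # All numbers are assumptions for the sake of the example.
    def expected_cost_per_complete(n_sampled, incentive_value, response_rate,
                                   is_check, cash_rate=1.0):
        """Expected incentive cost divided by expected number of completes."""
        redeemed_share = cash_rate if is_check else 1.0
        total_cost = n_sampled * incentive_value * redeemed_share
        completes = n_sampled * response_rate
        return total_cost / completes

    # Hypothetical comparison: $5 cash to everyone vs. a $10 check of which only
    # 40% are ever cashed, with the cash option yielding a higher response rate.
    cash = expected_cost_per_complete(10_000, 5.0, response_rate=0.32, is_check=False)
    check = expected_cost_per_complete(10_000, 10.0, response_rate=0.29,
                                       is_check=True, cash_rate=0.40)
    print(f"Cash:  ${cash:.2f} per complete")   # ~$15.63
    print(f"Check: ${check:.2f} per complete")  # ~$13.79

Under these assumed numbers the higher value check is cheaper per completed case even though it yields a lower response rate, which is exactly the trade-off the proposed research would need to quantify.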

Using Incentives to Differentially Stimulate Cooperation via an Internet Mode for Data Collection

The internet, as a mode for data collection, has unique cost-benefit potential compared to interviewer-administered modes and regular mail surveys. In theory, it could be much more cost-effective for survey companies to stimulate cooperating respondents to go to a website to complete a questionnaire than to use another mode for data collection. This also may have special appeal to a large and growing proportion of the general population that finds it easiest and most appealing to complete a questionnaire via the internet. (This may be especially appealing when sensitive data are being gathered.)

Nielsen reported at the 2007 AAPOR meeting the results of a large national experiment in which a mail survey was sent to households and respondents were given three modes of responding: mail, a toll-free phone number, and the internet.[6] Both noncontingent and contingent incentives were used in this study. Even though incentives did not differ by the mode chosen by the respondent, more than 30% of the cooperating households chose the internet to respond.

New experimental research is needed to determine whether incentives are cost-effective in raising the proportion of households that will cooperate via the internet mode if that mode is one of the modes offered to them. In addition, essentially nothing is known about these effects on data quality and possible nonresponse bias.

Opening Remarks by Stanley Presser

As survey researchers, we are accustomed to acknowledging that the way survey questions are framed influences the answers respondents provide. In reflecting on how survey results are shaped by the way questions are posed, Howard Schuman observed: “In surveys, as in life.” Put differently, question wording effects are not restricted to surveys. Far from being survey artifacts, such effects are inherent in the nature of language, and thus are found whenever and wherever questions are posed.

This means that what we know about survey incentives (or, more properly, what we think we know about incentives) has been shaped by the ways we have framed our questions about them. I will argue that the questions we have asked about incentives have often been framed too narrowly, and that we would benefit from broadening our focus in three ways.

The literature on incentives has centered, to a very large extent, on response rates. The major questions have been: Under what conditions do incentives improve response rates? And, what kinds of incentives improve response rates most cost-effectively? The assumption has long been that gains in response rates indicate reductions in nonresponse error.

But over the last several years there has been increasing recognition that this assumption is frequently unjustified. In fact, not only has it become clear that lowering nonresponse rates does not necessarily translate into reduced nonresponse error, we even have evidence that reducing nonresponse rates can increase nonresponse error.

As a result, the discovery that an incentive increases response rates is, by itself, uninformative about whether the incentive affects nonresponse error. Most studies do not explicitly recognize this, and among the small minority that do, few empirically address the issue. Thus we need a shift in focus from nonresponse rates to nonresponse error.
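One way to see why a higher response rate need not mean less error is the familiar deterministic decomposition of the bias of the respondent mean, where n is the sample size, m the number of nonrespondents, and \bar{y}_r and \bar{y}_m the respondent and nonrespondent means:

    \[
    \operatorname{Bias}(\bar{y}_r) \;=\; \frac{m}{n}\left(\bar{y}_r - \bar{y}_m\right)
    \]

An incentive can shrink the nonresponse fraction m/n, but if it disproportionately recruits people who already resemble the respondents, the respondent mean barely moves even as the rate climbs, so the bias is largely unchanged; if it recruits people who are even more atypical in the same direction, the bias can actually grow.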

Indeed, we should broaden our perspective even further by following the recommendation of Bob Groves to think about “minimum mean square error in terms of unit cost.” Taking seriously a total survey error perspective involves considering tradeoffs. For instance, how would the impact on data quality of an incentive-induced response rate gain compare to that from more intensive questionnaire development? Making these kinds of judgments is often difficult, if not impossible, but we should at least attempt to guide our design decisions (and the methodological inquiries that underpin them) from such a broader vantage point.
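In symbols, the criterion Groves suggests amounts to comparing design options by mean square error per dollar rather than by response rate alone, where for an estimate \hat{y}:

    \[
    \operatorname{MSE}(\hat{y}) \;=\; \operatorname{Bias}(\hat{y})^{2} + \operatorname{Var}(\hat{y})
    \]

Under a fixed budget, money spent on incentives competes with money spent on questionnaire development, pretesting, or additional sample, and each option would be judged by how much it lowers this quantity per unit cost.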

Although constituting a substantial broadening of the focus that now predominates, even a total survey error perspective is too narrow. It is too narrow because it focuses on single studies at fairly restricted points in time. The problem is that the phenomena we study can be affected by our efforts to study them. Thus, over time, as incentives become more common, their effects may change. Decades ago, Paul Sheatsley warned against using monetary incentives, not because he doubted they could be effective in the short run, but because he feared they would boomerang in the long run. Are we changing the ways in which the public thinks about surveys by routinely providing monetary incentives? Are such changes affecting the nature of cooperation with surveys, including the impact of incentives on cooperation? We need to evaluate incentives not only in the context of particular studies at a given point in time, but in terms of their effect on the future of surveys more generally.

This is no easy task, but at least it presents us with the kind of empirical challenge we are accustomed to confronting. By contrast, the third way in which we need to broaden our focus may be harder because it involves nonempirical matters. Unlike many decisions confronting survey researchers, some of those about incentives involve ethical dimensions. If we demonstrate that open questions yield higher quality estimates than closed items, or that post-stratifying to Census estimates yields more accurate results than not making such adjustments, the implications for practice are relatively clear. By contrast, if we demonstrate that offering refusal conversion payments to reluctant respondents is cost effective, the implication for practice, as Eleanor Singer has observed, is much less clear. Such payments may improve our estimates, but are they fair to those who cooperated initially? Likewise, some research suggests that differential initial payments to sample members with varying demographic characteristics may improve response rates. But are such payments equitable? Are they consistent with our obligations as researchers to respect the persons on whose cooperation our enterprise depends? I don’t know the answers to these kinds of questions, but I believe we must confront them.

Although it is easy to conceive of ethical concerns as constraining our effectiveness in some respects, they may also inspire us to be more effective in other respects. Consider the argument made by Arthur Kennickell:

“to improve response on surveys (or even to maintain current levels), we must account for the humanity of both respondents and interviewers. Respondents are not filing cabinets to be rifled at will, but people who face conflicting demands on their time. It is generous of respondents to share their time with survey takers, and this fact should never be forgotten or taken for granted. Interviewers are paid for their work. Nonetheless, in almost every area of work, other factors than money appear to be important determinants of superior performance. It is a wasted opportunity when survey managers fail to engage interviewers' interest beyond the level of pure production. If interviewers fail to communicate a compelling vision of a survey and a deep respect for respondents' generosity, response rates will suffer.”

Thus incorporating an ethical dimension may be the most vital of the three ways of broadening our thinking about incentives, as, among other things, it reminds us that incentives are just one of many approaches to enhancing cooperation.

Opening Remarks by David Cantor

Being last, I will tend to repeat a bit, but I think my focus is a little bit narrower than Stan's. I want to focus a bit on thinking about when we should be using incentives. What is getting lost in our conversation today is how these answers differ by the type of survey that you're doing; especially when you're thinking about telephone surveys, versus in-person surveys, versus other kinds of surveys.

My introduction to incentives was on an RDD survey, and the client was very concerned about response rates. At the time, they wanted to spend a huge amount of money on incentives. We told them that they might raise the response rate three or four points and asked whether they really thought they would be getting their money's worth. They thought it was worth doing, and we went along with it. At the time, we thought maybe we could increase the face validity of the survey because now we're going to be at 75 percent, rather than 70 percent, and that's going to make a big difference in how people interpret the survey.

I think now, at least in the RDD world, where we're down to 30 percent, and still going down, the context and the thinking about how important a response rate is, as well as how important it is to use incentives, is an important question to think about. In Groves' piece in POQ several years ago, in his appendix, he goes through a couple of calculations and simulations, which basically show that unless you're going to get a very large kick in the response rate, the chances of being able to address non-response bias are very low. So I think we need to rethink what we're going to be doing with the money, especially with respect to data quality and total survey error, to improve the survey. Can we do a few more cognitive interviews, or do an experiment that tests alternative questionnaire items? In other words, perhaps we should be using the money to address issues related to measurement error, rather than non-response bias.
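A back-of-the-envelope version of that kind of calculation is sketched below; all the numbers are illustrative assumptions, not figures from Groves' appendix.

    # Back-of-the-envelope calculation in the spirit of the simulations mentioned
    # above. Bias of the respondent mean = respondent mean minus full-population
    # mean. All numbers are illustrative assumptions.

    # Baseline: 30% respond with outcome mean 0.50; 70% do not, with mean 0.40.
    rr0, y_resp, y_nonresp = 0.30, 0.50, 0.40
    y_pop = rr0 * y_resp + (1 - rr0) * y_nonresp          # true population mean
    bias_before = y_resp - y_pop

    # An incentive converts another 5% of the population, but the converts look
    # more like existing respondents (assumed mean 0.48) than like the
    # nonrespondents who remain.
    converted, y_conv = 0.05, 0.48
    rr1 = rr0 + converted
    y_resp_new = (rr0 * y_resp + converted * y_conv) / rr1
    bias_after = y_resp_new - y_pop

    print(f"Response rate {rr0:.0%} -> {rr1:.0%}")
    print(f"Bias before: {bias_before:.4f}")   # 0.0700
    print(f"Bias after:  {bias_after:.4f}")    # ~0.0671

Even a five-point gain in the response rate barely moves the bias when the converted cases look like the people already responding, which is the point being drawn from Groves' simulations.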

Some of you may have seen the WSS presentation last week, where the presenters did an experiment in their survey, and their main outcome variables were services related to healthcare. They found that their rates changed by 50 or 60 percent when they re-ordered their questions. We need to think about these kinds of effects relative to the kinds of effects that we're talking about when increasing response rates 5 or 6 percent.

But, of course, we have to do more research related to incentives. And the places where I think some of this research might be best placed is not so much on the conditions that lead to higher response rates with different kinds of incentives, but other areas related to incentives. We do know quite a bit about how incentives are ultimately going to affect response rates. There are a number of other areas that we need to look into.

A big one is cost. I think we've learned for in-person interviews that incentives do tend to pay for themselves. The recent experiment that Dick was referring to with the drug study (National Survey of Drug Use and Health) found that incentives more than paid for themselves and also produced higher response rates. So there, it seems like a fairly straightforward decision. That experiment actually replicated another experiment that was done maybe 15 years ago at Westat, which found the same result. An experiment on the National Adult Literacy Study found that incentives increased response rates while more than paying for themselves.

In telephone surveys it's not as clear. At least the evidence right now seems to find that incentives really don't pay for themselves. And if you're going to be introducing incentives in a telephone survey, you're doing it for other reasons. But there is not a lot of data on this, so I think it's worthwhile concentrating on this to see how you might use incentives in a telephone context to try to minimize costs.

A second area of research related to incentives is response quality. Here, the evidence doesn't seem to be very encouraging. Incentives don't seem to affect quality one way or the other. Of the studies that have been done, very few have found differences between outcome variables when you're looking at different incentive conditions. There are a few exceptions, but I think it's important to try to understand why those exceptions might be occurring, and under what conditions they occur in order to make a more informed decision about whether you really need an incentive or not.

Another area with regard to quality -- one of the things that Paul mentioned -- is whether you can encourage people to use different modes of interviewing in order to reduce total survey error. For example, getting more people on the internet, where you can save a lot of money, and maybe even get higher quality responses. A second example is whether incentives can increase the use of records. In other words, how do incentives affect data quality? Rather than focusing on the response rate alone, we should be focusing on how incentives reduce survey error.

And then a third area of research, which is a little bit more specific, is in the area of longitudinal surveys. We don't know very much at all about the best way to use incentives in the context of a panel survey. Do you pay them a lot up front? Do you continue to pay them? How does the incentive affect expectations over the course of a panel? And, similarly, in surveys that have multiple stages within the same time frame, or the same contact. For example, when you're doing a screening interview, and then you have to get someone to do an extended interview. How does the incentive at the initial contact affect the quality of the data in the extended interview?

We'll talk about many of these things over the course of the day, I'm sure. But my point is that I think we need to start looking harder at why we might want to use incentives. If we're going to use them, we need to justify them from a total quality perspective. These are the types of things that I think we know least about. Since the last OMB conference, I guess there was a big uproar that tried to convince OMB to let us use incentives. And I think 16 years later, we may have gone overboard, at least in certain areas, where we're using incentives indiscriminately. I think what we need to do is go back a little bit and see where incentives really should be used relative to trying to do other things to maximize the quality of the surveys.

Discussion Session Distillation

What is the Greater Good?

As use of incentives becomes more widespread and routine, there is the potential for researchers to turn to ever larger incentives as a quick panacea for more complex or deliberative design issues. For instance, rather than spend the time and money making multiple calls on a telephone survey and working a sample sufficiently, why not simply offer $25, $30, or more up front to secure participation? The key question here is: what justifies the use of incentives?

Incentives can have two main effects – on sample representation and on response quality. The problem is that these effects may be positive or negative, so their use needs to be considered carefully. A more sophisticated look at incentives would take into account where incentives can have the most impact and that is through the use of differential incentives (e.g., providing an incentive or higher level of incentive for hard-to-reach population subgroups and no or a lower incentive to all others). Differential incentives come into play primarily when the end goal is a reduction of nonresponse error. This practice, however, raises issues of equity and fairness of treatment across respondent groups. Balancing a desire for higher quality data with the potential impact of incentives on respondents is tricky. If the end result is higher quality data, however, one could ask: what is the greater good? Is it better to make incentives equitable for everyone or does the greater good (in terms of survey estimates) justify differential incentives?

A counter school of thought eschews the use of incentives in favor of utilizing scarce resources to fund tasks that could inform and improve the survey process in other ways. For example, could resources be used to conduct a better nonresponse bias study? To improve the questionnaire or conduct a questionnaire experiment? Would these uses be of greater value? Some argue that researchers should not use scarce resources for incentives at all, but rather should conduct more meaningful methodological research, which could benefit multiple studies rather than just one. Along these lines, a better job needs to be done conveying the importance of survey research to respondents so that they understand it is in their interest to participate. With regard to ethical considerations, there are a number of areas open for exploration. If we believe in what we’re doing, it should be conveyed to respondents. If a survey is important enough to burden respondents, why not try to get it right? We know that using differential incentives works for those for whom the survey is less salient – so does that actually create equity in a sense? In effect, does the incentive help offset what might otherwise be a differential burden to respond for some? If the incentive helps to obtain better data, isn’t that ethical? If the topic of the survey is salient, then that may be viewed as an “incentive” to participate. For those for whom the topic is not salient, paying them to participate is what becomes salient.

Differential Incentives and Equity

The concept of “equity” is often raised when discussing or justifying the use of incentives. This is particularly true when talking about the use of differential incentives: for instance, providing hard-to-reach groups, such as younger adults and black or Hispanic households, with a higher incentive than other groups; or providing an incentive to nonrespondents, but not to initial or “easy” responders. In the context of incentives, what does the concept of “equity” really mean, and how do we balance the concept of equity or equitable incentives against the operational utility derived from the use of differential incentives? Moreover, how do respondents feel about the use of differential incentives? Eleanor Singer conducted work in this area and found that respondents considered the use of differential incentives unfair, but that knowing about their use did not affect respondents’ willingness to participate. Dollar amounts are perceived differently depending on where one stands economically, so even equal amounts affect respondents differently. In terms of effective amounts, larger incentives tend to work better, but the relationship is not linear. One 2004 survey conducted by RTI International found no difference between $10 and $20 in encouraging web respondents (but both performed much better than $0). A more recent replication of that study took an interesting approach to the equity issue. All respondents were offered a $30 incentive for early response via the Web. After a certain date, CATI collection started, and no incentive was offered. Then $30 was subsequently used for the hard-to-reach groups and for refusal conversion. In terms of “equity,” therefore, the entire sample had the opportunity to receive the $30 incentive. While this approach didn’t necessarily increase response rates per se, it allowed the researchers to “hit the wall” of participation earlier, resulting in a shorter data collection period.

Looking at incentives from an ethical viewpoint, is it more ethical to pay a survey organization to call a respondent over and over than to offer the respondent money? Is it more ethical to pay a survey organization to call respondents 7 or 8 times, or to pay large incentives to the respondent to respond the first or second time? Researchers have a responsibility to make data as high quality as possible, and if we keep that in mind, thinking about post-payment equity makes everyone’s response more valuable because the whole dataset becomes more valuable. We have a responsibility to provide good data, but that assumes that obtaining cooperation with incentives increases data quality.

Incentives can also have an effect on interviewers. For instance, what is the effect on interviewers if they know that refusing sample members will ultimately be offered more money? What are the long-term effects on interviewer morale?

Are we already on the downhill side of a “slippery slope”; that is, once researchers start using incentives, will it no longer be possible to stop? We don’t know a lot about what motivates respondents to accept promised incentives (What is the “value” of the incentive money to respondents? Does the amount matter? Is a token sufficient? Is it viewed as an economic exchange?). These areas require considerably more thought and empirical investigation. We may not yet have reached the “slippery slope,” but incentive payments continue to grow in use and amount. Among Federal surveys, approximately 30% offer incentives. Researchers should also consider what percentage of respondents received incentives, not just how many surveys use them. Either respondents expect incentives or researchers think they need to use them – but where “reality” lies is unclear.

Are Incentives a Generational Phenomenon?

Are respondents’ expectations/behaviors regarding incentives generational? That is, is altruism a characteristic of the older generation and economic exchange the predominant viewpoint among the younger generation? There is a view that response rates are declining not because survey researchers have begun to use incentives in earnest, but because the survey climate has changed. From this perspective, the increase in incentives we are witnessing is not because of respondent expectations.

This raises a critical question: what is the motivation or the theory behind incentive use? Do we need to think about two different trajectories: promised incentives vs. advance incentives? The latter use may be what is causing problems. Prepaid vs. post-paid incentives rely on different theories.

Have Incentives Become a Crutch?

While many researchers have embraced the use of incentives as a tool for gaining cooperation, there does not appear to have been much change in other aspects of survey design that might eliminate the need for incentives. One example is the use of shorter surveys. While modest changes in interview length may not change cooperation rates, there are actions researchers can likely undertake in terms of the arguments made to persuade people, as well as improvements to the survey experience itself. Incentives have an effect on IRBs as well, in that they tend to allow more burdensome surveys in exchange for incentives. This may be another example of where we’ve hit the “slippery slope.”

There are other areas of concern as well, such as the potential effect on interviewers. Some report that the use of incentives helps to build interviewer confidence. With the increased use of incentives, interviewers may come to expect them – does this make them work harder or less hard? Research is needed to examine whether allowing interviewers to decide when to use incentives introduces bias. It is unclear, however, whether interviewers are good judges of who should receive incentives and in what amounts. Other concerns follow. In terms of incentive amounts, are we training respondents to wait for higher dollar amounts for refusal conversion? In longitudinal surveys, if a higher dollar amount is offered early, do you then need to offer higher amounts for later follow-ups?

Incentive or Remuneration?

A working group from the American Association for Public Opinion Research (AAPOR) recently issued a report about using cell phones in surveys and made the distinction between offering an incentive as a courtesy to cover costs (remuneration) versus going beyond reimbursement (incentive). Does it make a difference whether the idea behind an incentive is offsetting a real financial burden or thanking the respondent? When burden is high, you are paying for out-of-pocket expenses or recognizing burden. Should researchers distinguish these cases rather than grouping them with “tokens of appreciation”? When do we think remuneration is necessary? Only when respondents are asked to do something extraordinary? Researchers appear to have lost track of situations when a token is sufficient. We need to distinguish a token from other reasons for payment.

Where Does the Future Lie?

While the amount of research on incentives has grown substantially since the first incentive workshop, it’s clear that there are still quite a number of areas in the field which are either under-sown or unplowed altogether. What are the broad contours of the future research agenda in this area? Researchers need to be careful when studying the effects of incentives, considering that there are many survey attributes at play in determining participation or completion of a requested task. When incentives work, researchers need to ask themselves: relative to what? Control conditions should be clear. As a discipline, we still do not measure bias well, so cost effectiveness needs to be a primary objective. Future research also needs to focus more on the ethical issues. For example, two states have introduced legislation to stop all gifts to physicians. Many legislators appear to care about the difference between “payment for services” and “tokens.” Another area involves research on incentives in relation to the web, in particular prepaid vs. post-paid incentives. We need to learn how to encourage people to go to the web, since that is often a very cost-effective mode of surveying. One of the critical, over-arching needs is to understand better why incentives work, in order to make future research more meaningful. Unfortunately, this is one area where we appear to have progressed little since the first incentive panel in 1992.

Transcript of Open Floor Discussion

DR. MICHAEL LINK: Well, thank you, all four of you. It's time for interaction. But I want to start, because I've been taking some notes and scratching my head, going: okay, so incentives may raise response rates, but not necessarily do anything about bias. Okay. They don't seem, as David said, to be affecting quality. Okay. And Paul mentioned that sometimes we use them to improve representation, but depending on how we use the incentives, we might actually wind up over-representing groups that we don't want and still under-representing the others, or we have to go to the use of differential incentives, where you're giving a lot of incentive to one subset of the population and maybe little or none to others. So my question for the four of you is very basic: What justifies, based on what we know, the use of incentives? Discuss, please.

DR. RICHARD KULKA: I tried to touch on that very briefly; what I was presenting were the conditions under which incentives might be used. I've listed a number of conditions under which incentives were proposed to be used, but I think, as the other discussions have noted, in some ways we've become more sophisticated, looking at other dimensions, and in other ways less sophisticated.

I want to pick up on the example that you gave. When we increase response, as I pointed out, there are two effects incentives can have. One is on the response rate: increasing participation and changing the sample composition. And the other could be affecting the responses people give.

Well, I also mentioned that those can be both positive and negative. And, so, the less long-winded answer to that would be, we need to be careful to know that we're doing what we intend to do. The example that's given repeatedly is bringing people into the sample who are already in the sample at higher levels, thereby increasing the difference between respondents and non-respondents and increasing measurement error, which Stanley was referring to. So a more sophisticated look would be, if our goal is not just response rates, but really is focused, ultimately, on non-response bias, and non-response error, you would have to take this into account when you do things.

The problem with that is that's exactly what people are trying to do who are paying differential incentives to different parts of the population. And Stanley pointed out that there are interesting trade-offs there about the effects over the long haul, the issues of equity, and so on. So there's not a simple answer to the question, other than the fact that there are certain populations where we know that you're not going to even be able to get them into the ballpark. There are some surveys out there, ignoring RDD, where we're getting 20 percent of key populations in major social programs where we depend a lot on what's going to happen with those outcomes.

I also wanted to emphasize the second part, which is that we can also have a direct impact on the statistics from the responses themselves. If we pay very large incentives, or we pay incentives at all for certain parts of the population -- Eleanor Singer had some interesting work which showed a positivity bias among those receiving incentives, particularly in surveys that are more like what we call the “standard case”. So I think we need to pay attention to the standard case. I think we are paying incentives for things that are very much like the standard case, and I think for those where we do need to pay incentives, we need to pay attention to outcomes other than just response rates. But I'm sure everybody else has something on that. I don't know.

DR. PAUL LAVRAKAS: Obviously, we all know the answer to Michael's question: it depends. I think that's the answer.

DR. LINK: That's very scientific.

DR. LAVRAKAS: Yes. Yes. Well, Stanley's comments, I think, provoke us to think ultimately what's the “greater good”? And I'm not going to be able to answer that. But in thinking about what I've heard for the last 15, 20 minutes, I'm wondering whether taking the approach that Stanley and Sandra mentioned earlier about ethics, would there be such a thing - I don't know that it exists, but it would be something like an ethics checklist that I'm sure anyone with an IRB is using informally, but I don't know that a lot of private sector organizations are thinking in these terms. Something that allows you to point towards a greater good, so you go through thinking about the implications of the incentives you're thinking of using. The checklist, or the flow chart, or whatever it is allows you, in a much more structured fashion, to try to determine -- am I helping? is it better? --, however you define “better”, to make it equitable for everyone, or is the greater good at some macro level that you want to have things like more assurance of lowered non-response bias, better response rates, if that's what you think leads you to lower non-response bias, better data quality, better sample representation. If those things define the greater good, maybe this checklist - again, I don't know what it would look like - but I'd be glad to work with a group of people that might want to try to put something together.

DR. DAVID CANTOR: Well, I'm not sure -- I'm not thinking about ethical issues, as much, but I think from the point of view of someone who sits at a desk, and gets requests about doing surveys, and different design options, the first thing that I would think of at this point would be well, what else can I use the money for? How much is it going to cost? And if you're really interested in knowing if you have problems with your response rate, can you use that money better for something like a non-response bias study?

Usually, we come into the process where we don't have the luxury of looking at everything about the design that we might want to change, so things like the questionnaire would be nice to look at and see if there's something about the questionnaire that we could either change, or do an experiment with, to try to do a little bit more with regard to figuring out whether the estimates really are what you think they are. So I think those would be sort of the first sets of options that I would think about.

DR. STANLEY PRESSER: Yes. I think I would echo a lot of these points. I mean, I think David's notion is that we have fixed resources, and there are lots of different things you can do with them other than "pay respondents". And I think we need better information; that is, we need to fund more methodological research to better guide us in those kinds of allocation decisions.

But, I guess, I would go further, and just as a devil's advocate, and say at least for the normal case, or the regular case, I mean, let me throw out the proposition that we shouldn't pay respondents at all. We shouldn't do that. This is a misuse of our resources. And I say that, in part, because I go back to a point that I really didn't develop, that when we think of these ethical considerations, we often think about them as "thou shalt not." And, so, it's telling us all these things we shouldn't do, don't pay people differentially based on their demographic characteristics, or don't reward -- it's not fair to reward the people who are being reluctant. That harms the people who cooperated to begin with.

But the other way of thinking about the ethical considerations is to tell us that there are all sorts of other things that we should be doing. We believe in what we're doing, and I believe that survey research is vitally important to the society that we live in. It's essential. And what we need to do, it seems to me, is a better job in conveying that to respondents on particular studies. And maybe we need little research programs on how to do a better job in really persuading respondents, without sending them $2 in advance, that it is in their interest to participate, and to cooperate. So, I mean, I think the ethical perspective, I hope can be seen as encouraging us to do stuff, as opposed to not do stuff. And, so, I guess, I would at least throw out this, certainly as a devil's advocate, to say that in the normal case of the surveys where we're not asking people to travel to a site where they then have to get their blood taken, or to do things that I think are not traditional, not every day, normal course of events in terms of simply answering questions, that we should think about not routinely paying at all, and thinking about what are the other things we need to do to encourage people to cooperate.

DR. LINK: Stanley, I don't know. I think that's a very provocative statement. I saw a number of folks out here that seemed to have some visceral reaction to that. Seriously, who -- Mike?

DR. MICHAEL BRICK: Yes, I guess on the equity issue: we're already placing a burden on the survey population to respond, and if it's true that giving an incentive brings in people who don't normally respond, the reluctant responders, then some may do it without an incentive and some may do it with one. We believe it's important enough to put this burden on them, so why don't we try to get the answer right? If an incentive is cost-effective, or it helps us get the right answer, then aren't we making the ethical decision to provide that incentive to get a better answer? We have lots of evidence that maybe it doesn't do that, and I don't disagree, you shouldn't do it in that case, but if it does improve the quality of the estimate to give people who refuse an incentive, isn't that ethical?

DR. LINK: John?

MR. BOYLE: I'd like to further couch this differential incentive question in the context of equity. We know that using differential incentives counters differential response rates within samples. If you're doing a survey of Vietnam-era veterans, the Vietnam theater veterans respond more than the era veterans. If you're doing a longitudinal survey of victimization, the ones who were victims at time one are less likely to refuse at time two than the non-victims. The difference is salience.

So the question is, isn't there an equity element involving salience? Some people do the survey because it is salient to them, while those for whom it isn't salient are just being asked in order to be compared. At some level it seems to me that financial incentives are just part of the equity equation: you're basically countering lower levels of salience.

DR. CANTOR: You're, obviously, an economist.

(Laughter.)

DR. LINK: Well, you know what, I hate to stop right here, but we're going to pick up with that. And I want -- that's a key point, I think, that's going to drive a lot of the afternoon, which is this equity in differential incentives, because that's where things are in a lot of companies. That's where other companies are headed, too, and that's really, I think, where a lot of the debate is going to wind up being.

(Proceeding broke for lunch)

DR. LINK: Thank you. I want to pick back up where we left off. Clearly, I think I sensed a high energy level within the room when we started talking about this concept of equity in differential incentives, and the fact that oftentimes, concerns about equity are raised when discussing or justifying the use of incentives, particularly when we're talking about using differential incentives. For instance, you have hard-to-reach groups which you might want to pay $30. All other respondents are relatively easy to reach, so maybe you pay them $2-5, because you don't have to actually use the money to gain their cooperation. Take it a step further: there are some folks who are suggesting we use incentives for non-respondents, but not pay the easy respondents at all, which I think raises another set of issues.

So in the context of incentives, what is equitable? What does equity actually mean, and how do we balance the concept of equity or equitable incentives with what we know is the operational utility derived, oftentimes, from these incentives? So, again, I'm going to open this to the floor for folks who have ideas on this particular topic.

MS. BARBARA O'HARE: This is Barbara O'Hare from Arbitron. I think there's been a little bit of work done on how respondents feel about differential incentives, but I don't remember the exact findings. I wondered if anyone else did.

DR. KULKA: Yes, Eleanor (Singer) has done work explicitly on this. These are small experiments, but what she found in her experiments is that when people are asked whether they think it's equitable or fair, they say no, it's not, and that was pretty consistent. On the other hand, when she did the only test she could do in the context of a small experiment -- she asked how this would affect your willingness to participate in future surveys -- it did not affect people's reports of that. Now, you could ask whether that context is really predictive of actual behavior, or whether, if that actually happened to people over and over again, they might act on their initial attitudes. But, to my knowledge, what Eleanor did is the only research that's been done on that, unless somebody else is aware of something else.

MS. SANDY BERRY: Sandy Berry from RAND. I just want to point out that even an equal incentive, equal dollar amount is, in fact, a differential incentive because it's perceived differently by people at different income levels, who attach different levels of importance to it. So even an apparently equal one will incentivize people differently.

DR. LINK: Other comments on the topic? John Riccobono. I seem to recall that at certain times in your career, you have used differential incentives across different modes.

DR. JOHN RICCOBONO: I'm John Riccobono, RTI International. I'm not exactly sure how to respond to this. We've tried over the years -- let me tell you a little. My world is federal surveys mostly, entirely in education, and thanks to Brian, we've done a lot of experimentation with incentives over the years, in my own case, post-secondary education student studies. And we tried all of these things, and we tried all kinds of levels of incentives. We tried early incentives, we tried incentives at different stages of the process. And I think we've gotten some pretty good data on what works and what doesn't work for this population. And these, again, are student-based probability studies of all kinds of students, so I've heard some things.

I think most of the stuff that I've been hearing is hard to argue about, and is consistent with what we've found. What we did find is, we tried everything, we tried prepaid versus promised, and, unfortunately, that didn't work for us. And we tried small incentives, we tried no incentive. And what I think has worked best for us has been large incentives.

(Laughter.)

DR. RICCOBONO: That's not a surprise. I mean, for example, in 2004 we did a study for the National Center for Education Statistics, and in this 2004 study we looked specifically -- and we do this by random assignment -- at no incentive, a $10 incentive, or a $20 incentive. We included the no-incentive condition because, for the reasons already spoken of, everybody wants to be able to cream the cheap cases; a bunch of people are going to respond anyhow. And the fact was, $10 and $20 did better than no incentive. And we looked at it for the early response period. What we do is multi-mode studies, both web and interviews, and what we would like to do, obviously, is bring in as many of those self-administered web-based responses as possible, so we initially start with a three-week period where there are no outgoing calls. If you respond within three weeks, here's the incentive.

In 2004, with the zero, ten, and twenty dollar conditions, we found no difference between ten and twenty dollars, but both were significantly different from zero. And I think the differences in that three-week period were something like 12 percent versus 18 percent response. And that's where you would really like to get the response. We've done it again recently for 2008, and hiked this to a $30 early incentive. And this hasn't been reported yet, because it's just happened, and we're just doing it now. The $30 incentive, after a three-week period, has already brought in 37 percent response from the early response group it was applied to. Relatively small group of the population, but a heck of a bang for the buck, as we say. Did you want -- no, that's all right.

PARTICIPANT: Is this a study where you were giving money to go to the web, but then told them at the same time they'd be getting a phone call if they didn't respond? So you had both those things --

DR. RICCOBONO: No, no, no, no. This is -- we're just saying that -- we give them a Help Desk number they can call.

DR. LINK: Brian would like to help you out, John.

DR. RICCOBONO: Brian. Okay. I'm glad you didn't ask me about ethics.

(Laughter.)

DR. BRIAN HARRIS-KOJETIN: We know better than that. What I just wanted to point out about this study that John is talking about is that I thought it had a very interesting twist to deal with this equity issue -- if I get this straight, John. What happens, initially, is that students are offered $30 for this early incentive if they go to the web by themselves and log in and do this. If they have to wait until after that initial three-week period, and the CATI calls start, then the incentive drops down to $20, or it's definitely lower than that.

DR. RICCOBONO: Drops to zero.

DR. HARRIS-KOJETIN: Drops to zero for the next eight weeks of calling or something like that. Then, however -- so everyone had the opportunity to earn $30 by going to the web, and all of these students have access. But if they don't do that, then there's a period of interviewing where they don't get any incentive, but then for the refusal conversion stage, when all you're left with are the hard to reach people, they can go back and offer them $30 again. So I thought they came up with a very clever way of dealing with this equity issue, and still being able to use incentives for refusal conversion. But everybody actually had that same opportunity, initially.
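To make the schedule John and Brian are describing concrete, the following is a minimal sketch of the tiered offer as recounted in the discussion; the phase boundaries, durations, and function name are illustrative paraphrases, not the study's actual specification.

# Illustrative sketch of the tiered incentive schedule described above.
# Phase boundaries and names are paraphrased from the discussion, not taken
# from the study's design documents.
def incentive_offer(week: int, refusal_conversion: bool = False) -> int:
    """Return the dollar incentive on offer at a given point in data collection."""
    if week <= 3:                  # early self-administered web period
        return 30
    if refusal_conversion:         # end-stage conversion of remaining reluctant cases
        return 30
    return 0                       # interviewer calling period, no incentive

# Example: a week-2 web completion earns $30; week 6 of calling earns $0;
# a refusal-conversion case later in the field period is offered $30 again.
print(incentive_offer(2), incentive_offer(6), incentive_offer(20, refusal_conversion=True))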

DR. LINK: Don.

DR. DON DILLMAN: That's what I thought was interesting about it, because there are really two things here. One is the incentive, and the other is that they're going to get a phone call. And getting that phone call, I almost think, is a bigger negative incentive than the positive of getting the money.

DR. BRICK: If they dodge a phone call long enough, then they'll get paid.

DR. DILLMAN: Yes, but they don't know that.

DR. LINK: Other comments?

DR. RICCOBONO: You know what I'm speculating, and I think is the case, is that these incentives -- and we've done these studies, as you know, Michael, for years, and they're repeat studies. And I think what we're seeing here with this kind of an incentive structure is not that it's going to give us a heck of a lot more response. I don't think it is. I think we hit the wall when we hit the wall, and that's someplace around 70, 75 percent response, which is not bad, but it's --

DR. LINK: That's with a list sample, where you know who you're going after.

DR. RICCOBONO: With this structure, though, and the use of this incentive -- well, two things. You hit the wall earlier. Okay? And so, for those who are interested in the timely release of data and findings, that's very important. Otherwise, you're going to spend, and we have in the past, six more months badgering, threatening, being threatened by the respondents, and you eke out roughly a 70 percent response rate, but with all kinds of expense in terms of labor and things like that. I think this does away with that. We see it. We see it especially when we come back with that $30 incentive at the end of the study. I mean, you see it spike again, and they hit the wall, and it's right around 70-75 percent. So we've costed this out several ways. It's undeniable that even the $30 incentive pays for itself under these scenarios. And the big impact is not the increased response -- and you do have to handle things, of course, in terms of your non-response bias analysis -- but being able to finish data collection in a six-month period, rather than a twelve-month period.
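As a rough back-of-the-envelope illustration of the "bang for the buck" point -- not the study's own cost analysis -- one can compare the early-response rates reported above (about 12 percent with no incentive, 18 percent at $10 or $20, and 37 percent at $30) on a hypothetical pool of 1,000 early-contact cases. The sample size is assumed purely for illustration, and the $30 figure comes from a later round than the others, so the comparison is indicative only.

# Back-of-the-envelope sketch (illustrative only): incentive dollars spent per
# additional early web completion, relative to the no-incentive condition.
# Rates are the early-response percentages mentioned in the discussion; the
# sample size of 1,000 early-contact cases is a hypothetical assumption.
def cost_per_extra_complete(rate, baseline_rate, amount, n=1000):
    extra = (rate - baseline_rate) * n       # completions gained over the no-incentive arm
    paid = rate * n * amount                 # incentive dollars paid to all early completers
    return paid / extra if extra > 0 else float("inf")

baseline = 0.12                              # reported early response with no incentive
for amount, rate in [(10, 0.18), (20, 0.18), (30, 0.37)]:
    print(f"${amount} incentive: about ${cost_per_extra_complete(rate, baseline, amount):.0f} per extra early complete")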

DR. LINK: Other comments on mode issues?

DR. BERRY: Not the mode issue, but I remember -- I still have that file of stuff from that 1992 conference. It's amazing because we've actually moved offices since then. But the question that I had in 1992, which I still have is, is it more ethical to pay a survey organization to call people ten times, than it is to pay a respondent a reasonably large incentive to get them to cooperate the first or second time? And I think it's still a trade-off that we're dealing with both as a cost issue, but I think also as somewhat of an ethical issue, as well.

DR. KULKA: Yes. I think -- I recall you framing that before, Sandy, the notion, is it more ethical to pay a survey organization to badger people to death, or to pay a respondent? And I thought that was even more compelling in that --

DR. BERRY: You can tell I've become more diplomatic.

DR. PRESSER: I made that point earlier. The way you frame this is --

DR. BERRY: It's everything.

DR. LINK: Richard?

DR. RICHARD CLARK: Rich Clark, University of Georgia. Going back to Stanley Presser's comments about taking an active view toward ethics: we have a responsibility to make the data as valuable as possible, since we're taking people's time. The higher the quality of the data, and of the end survey, the more value everybody's time has had. With that in mind, even if you think of the post-payments for non-response as inequitable, they're actually driving up the value of everybody else's responses overall -- if, in fact, you're doing this in a way that improves the population estimates, or the projections from the survey, and, therefore, the value of the whole data set altogether.

DR. PRESSER: I think that's a useful perspective, but it does make the assumption that spending this additional money on respondent incentives is actually going to improve your estimates. And, at the moment, I think we don't have lots of evidence that that's the case.

DR. YVONNE SHANDS: Hi. Yvonne Shands. I'm at the University of Medicine and Dentistry of New Jersey. I have two different questions. I have the unique perspective of having worked in the field as an interviewer for 10 years before coming in-house on the methodological side for the past 10 years. And I probably worked for a lot of you folks. The first question is about the effect on interviewers of these graduated response incentives. Having worked on the National Longitudinal Survey of Youth for several years, there's the long-term effect of training people that when they refuse they'll be offered more money to participate, and then going back in the next wave of data collection and facing that fact again with the same respondents. As an interviewer, there was an effect on me, and I'm wondering what the overall effect was on data quality, on response rates, et cetera.

And then, also -- well, actually that kind of ties the two things together. One is the long-term effect of training respondents to act in this way in longitudinal studies and panels, and the other is what research has been done on the effect of these differential respondent incentives on interviewers' morale and interviewer retention.

DR. LAVRAKAS: I can't relate any studies that I've formally been involved in, but anecdotally, both in telephone and in-person interviewing, over the years that I was with Nielsen, we heard a lot of complaints from the staff from a morale standpoint: for a three-minute interview, why are we giving these people so much money, or something like that? Or in the case of the in-person work, where you're recruiting for a much longer panel, where you meter the household, considerable amounts of money were being given to certain households. And there was no question that there was interviewer resentment, recruiter resentment over it among some of the people, not everyone. But I don't have any formal studies to relate.

DR. LINK: Bob?

DR. ROBERT GROVES: I'd like to follow up on that, and given the incredibly high scientific stature of the panel, I wanted to know if they thought the slippery slope predicted in 1992 had occurred. It's been a long time, so have we observed the slippery slope, in your judgment?

DR. CANTOR: Which slippery slope?

DR. GROVES: I think one slippery slope argument was that if we start giving incentives, it will become impossible to do studies without incentives -- that would be the extreme version -- that the public would demand incentives, or not cooperate.

DR. CANTOR: We certainly haven't seen it, and I think this sort of dovetails, goes back to probably what Sandy was saying, that I was thinking, that we don't know very much about why people are more motivated to respond when they're given an incentive. What exactly goes on in their head when they're offered $10, because it certainly must differ for, say, a telephone survey where you send $2 in the mail, versus when you promise someone $10 to do a three-hour interview. And we don't know very much about their motivations, but in terms of the -- and it seems to me when we're talking about equity, that's an important thing to understand. Whether they're viewing this incentive as a token, that whether the amount matters or not to them, versus whether it's an economic exchange, to use Dillman's phrase. So I think that kind of research would give us a little bit more empirical information to think about the equity issue.

But in terms of the slippery slope, I've never run into a respondent, or heard of respondents, claiming that they got paid on another study and they're not going to do it this time around unless they get paid.

DR. LAVRAKAS: Well, I have, and I bet a lot of people have. Bob, I think the way I would respond to what you asked is: what is the proportion of people in a given survey for whom the incentive is the salient factor in cooperating? I think since `92, that proportion has increased on average each year, slowly. I think it's going to continue to increase. But one of the first comments I made when I was standing up at the podium is that a substantial proportion of the people we sample want to help us without any incentive. And if we don't work up some methodologies that allow that to happen, we're just going to accelerate this slippery slope. And who knows what other damage we're going to be doing.

DR. CANTOR: Yes, I guess I don't agree that much, because I don't think the respondents view participation in the survey in and of itself as a very salient event. And I also think that there are a lot of other kinds of surveys going on out there that we get confused with, that we have no control over. So the idea that offering or not offering incentives is somehow going to create a much higher expectation among the public -- I just don't see any evidence of that.

DR. KULKA: Yes, I referred to this a little bit earlier. And I think the less extreme version of the argument was: are we going to gradually get to a point where respondents expect to be paid for everything? Now, that's the extreme case. I don't think we're there, and I don't think we're close to there. But I think if you look even at the information we had today, we're now at the point, compared to `92, where a third of the cases reported on the federal side are paying respondent incentives.

Now, there's a question of whether that's because the respondent population is now demanding incentives, or we just think they are demanding incentives. And from that perspective, I think we're dangerously close to a slippery slope. I don't think we're well down it, necessarily, but if we, as a research community, doing this in various organizations, accelerate over the next 15 years at the same rate we did between `92 and now, it's really hard to argue that we're not going to build up an expectation among many, many potential respondents in the community that this is what's there.

I agree that it's not there yet, but I think that we certainly are building up an expectation among the people who design surveys that this is something that should be considered in more cases than it was back then. And in that sense, if we're not on a slippery slope, we're at least approaching one. Maybe we're on the bunny hill version of the ski slope, but I think it's still a danger. I don't think we're there yet, but the symptoms we've looked at even so far today are there.

DR. PRESSER: Can I make just two quick comments?

DR. KULKA: Sure.

DR. PRESSER: One is, I think that this 30 percent number that Brian described this morning has been quoted a lot, and may continue to get quoted a lot. And one thing that's probably worth emphasizing -- and my guess is Brian could probably shed more light on this, if not today, over time when he goes back and looks at the data again -- is that the more important number is probably not 30 percent of the surveys, but what percent of the respondents contacted in the year. I mean, it could be that that 30 percent represents 30 percent of all the respondents. It could be that it represents some trivial percent of the respondents. And I think that's related to this issue of where on the slope we are.

More generically, on the question of whether there even is a slope: I think it's partly a function of how we think of surveys as this generic thing. And, so, Arbitron and Nielsen do surveys, and the Current Population Survey, the feds do surveys, and the University of Michigan Survey Research Center does surveys, and people do surveys, and a survey is a survey, is a survey, is a survey. And it could be that that's the way people out in the world see it, but it also may be that's not the way people out in the world see it. And the slippery slope argument probably depends more on that homogeneous idea. And, as I say, I'm not convinced that's necessarily the way that people out in the world perceive it.

DR. LINK: Well, let me throw this in, though. Is it generational? Because I would argue, Dick, that there actually is plenty of evidence on a number of studies, that when you try to get the 18-34 year old group, and you don't offer an incentive, you don't get anybody. You try to use a $2-3 incentive, and, again, you'll get a small percent. You bump it to 25-30 bucks, and all of a sudden, boom, the 18-34 year olds are now responding. So is it, perhaps, that it's more of a generational thing, which actually has long-term consequences, because if that generation is the generation that's shifting over, they're going to, presumably, carry that value system with them. So I don't know if that's --

DR. CANTOR: Well, I would argue that that's not an effect of incentives. And the fact that we've been giving incentives since `92 is not the reason why our response rates are going down. It's because the survey climate has changed for many other reasons, so maybe we should be thinking about other ways we can change our surveys to accommodate the way the world is changing. Maybe we should have shorter surveys.

(Laughter.)

DR. CANTOR: I know that's not a very popular one.

MS. HEATHER CONTRINO: Hi, Heather Contrino. I'm with the Department of Transportation. I have a few little comments. You'll have to keep track. First, I wanted to say that I don't think we're using incentives more because we think people expect them. That's just my opinion. I think there's been such an emphasis on response rates, and improving response rates, and demonstrating the myriad of different tactics that we're using to improve response rates, that incentives have become the norm. And I really like the discussion that we had, or started, earlier on total survey error measurement, not just on non-response bias, but on that whole equation. And I think it's important that we get back to that.

Also, one comment. What I feel has been missing lately is that old discussion about where incentives work, and for what reasons; like the barriers to participation, and how incentives or other strategies help us to overcome those barriers. Obviously, in RDD studies, contact is a huge issue. How do we just get people to pick up the phone? An advance letter incentive is one way that we create that social contract, where we give them a couple of bucks, and they feel inclined to pick up the phone. So, I just wanted to throw that out there as my comments.

MR. DAVID JOHNSON: I wanted to get back to the question over here on interviewer effects. David Johnson, Census Bureau. Because I think that's something that I only have anecdotal evidence on, but I think would be interesting to look at. And for the SIPP, we use selective incentives. And we always hear from the field that the field reps, the fact that they have this in their pocket and are able to give it to somebody gives them more confidence to go in and get that interview. If that's really happening, then I think that could be a big impact, similar to the -- if the interviewers are really thinking that giving an incentive is not worth it for their answers, that could have a big impact. And I think I only have anecdotal evidence on this, and I don't know if anybody else has ever looked at the interviewer effects of these incentives.

DR. LAVRAKAS: Just to clarify the comment that I made earlier: it's the differential incentive -- one that might be three or four times greater for one type of person -- that the interviewers and the field reps I'm aware of object to. In some cases I'm aware of, a differential incentive might be 10 times greater than the smallest incentive level for the same study.

DR. DILLMAN: Don Dillman, Washington State University. The big surprise of the morning, to me, was something that Brian said, if I can just find it here. Three-fourths of the time it's after people participate that they get the incentives, and so only one-fourth is ahead of time. And the multi-variate analysis there, breaking it down by incentive size, was something I was wondering about. But here's the comment, and really the question.

I wonder if we need to be thinking about two very different trajectories for this research. There's one trajectory, when you're making a contract with somebody to pay them money afterwards, so you do this, then I'll pay you. There's quite another thing going on, I think, when we have token incentives in advance, because I think what we're trying to do is augment the rest of our communication strategy, and really pull people's attention to that. And that's a very different kind of thing in memorability, very different kinds of things, perhaps, on the slippery slope implication. I'd be interested in any reactions you have to that.

DR. CANTOR: Well, I mean, I think that goes to the distinction between a telephone and an in-person interview. And that's partly what I was thinking of when we're thinking about why people are reacting to the incentives. So on a telephone survey, it's drawing attention to the letter, so they actually read it, instead of throwing it away. My guess about Brian's data was that a lot of the post-paid is in the context of an in-person survey, where the interviewer comes to the door, and the concept of a prepaid incentive really doesn't translate as well -- unless they've sent the money ahead of time, before the interviewer even gets there.

MS. JENNIFER EDGAR: Jennifer Edgar from the Bureau of Labor Statistics. To build on something that David Johnson said, with this idea of slippery slope, one thing we may want to think about is interviewer expectations. We're researchers, and we see every study as independent, and individually important. Well, they're doing the same job over and over. Sometimes they get to give respondents an incentive, sometimes they don't. I don't know that we're giving them enough information to set up their expectations realistically.

DR. KULKA: Yes, I think that's a really good question, and one that is -- I want to clarify something. To my knowledge, on the SIPP, interviewers give incentives at their discretion, so it's a tool they can use. And it's one of the only surveys I'm aware of where the interviewer can make a decision about who they give an incentive to or not. Most of these other differential incentives are of the form "this group gets it" -- so Paul's example, where a certain demographic category gets it.

Speaking to the specific question, there's no question that even in `92 when people were talking about it, there was an expectation growing among interviewers who said, I can use incentives on this survey. And I'm aware of several other studies I've been involved in where they've said, why can't we use incentives on this survey? So I think the concern about interviewers' expectations -- and I think David Johnson put this very well -- is that interviewers begin to expect that they can use it, and then the question becomes, are they putting other things in their arsenal to the same effect? Are we getting an extra boost from the incentives that they have, or are we just substituting the incentive for that boost? I don't know the answer to the question, but I think it's something we have to consider.

And I still, by the way, think that one of the research studies I'd like to see, if we start shifting to non-response bias, is: does the practice of letting interviewers judge who to give incentives to have an effect? What is the effect on overall non-response bias? One could argue that it actually is going to make things worse, because your interviewers are making decisions based on the way somebody's dressed, or the way they talk, or something else. I'm not really saying that as a serious thought, although I could be right.

(Laughter.)

DR. KULKA: But it's the unknown factor. And I think until we test that in the context of something like the SIPP and find out what the effects are, I'd be concerned about unintended consequences -- effects on the estimates from that, or from a survey done that way.

DR. LINK: Brian, then we'll come over here.

DR. HARRIS-KOJETIN: Mike, sorry. We keep switching between two different topics, so to come back to something Don and David both referred to. I don't believe I mentioned this, so I wanted to make it clear, but maybe I did. It is true that we found about three-quarters of the cases were giving the incentive after the fact. I don't know what percentage of those are in-person cases, but I believe they're certainly a sizeable chunk.

What I may have forgotten to say is that we weren't even able to determine this for about 40 percent of the cases. So based on the 60 percent of cases where it was clear, whether they were providing that either after-the-fact or before -- clearly, before participation, it went three-quarters/one-quarter, but we are missing some data there.

MS. DIANE WILLIMACK: No. I wanted to respond to Dick's remark about how interviewers under discretionary circumstances would use incentives. When I was -- oh, sorry. Diane Willimack, U.S. Census Bureau. I'm going to hark back to a previous life when I was at the National Agricultural Statistics Service. I did a study -- this was not an incentive; I was using promotional materials -- and I had sent a promotional videotape for a difficult survey to a split sample of farm operators, nationwide. And when the organization sent it, I think we got something like a 5 percent boost in the response rates from this thing, as a pre-notification. It went with the pre-notification letter, so it was the same idea as your incentive getting the attention of the respondent in reading the letter.

The following year, for the same survey, NASS decided, well, the interviewers would really prefer just to be able to give the videotape to whoever they think it would persuade -- and no effect. It had no effect on response rates then. And it leads me to think that I'm not sure interviewers can make the right judgment on when it's needed and when it's not. And so this part about the across-the-board treatment, where everybody is treated the same -- I'm not going to contradict myself, but this unconditional part, I think, is a key aspect.

DR. LINK: John?

MR. BOYLE: If you want to look at the slippery slope -- if you want to look experimentally at the slippery slope -- we've been doing incentives for physician surveys for about 30 years, and the original rationale was that physicians viewed it as inequitable, since the sample frame is their offices and their office numbers, to take 15 minutes out of their day to do a telephone interview. And, effectively, the incentive was sort of hitched to what a 15-minute office visit would be.

Although the number of surveys going to physicians has increased dramatically, we haven't seen auctions up to $100, $200, et cetera. I mean, you're still at the sort of $25, $50 level, depending upon what kind of response rate you want, and it's kind of viewed as a cost of doing business. And one of the questions I raise is, down the line, if we're correcting non-response bias, and so on, this may be just a cost of doing business, and the whole issue of the slippery slope may be just us trying to save money, because the respondents are willing to do it.

DR. LINK: Well, let me take us down a little bit different path -- picking up on what you said, David, because maybe there's another slippery slope here, which is that as survey researchers perhaps incentives have become a crutch for us? Everybody kind of laughed a little bit when David said we should make our interviews shorter. The fact of the matter is, I haven't seen a whole lot of design changes in the last 10 years that would really address a lot of the non-response issues. We know that lengthy surveys can lead to higher nonresponse, and we keep saying well, you know, but the client, they really need the 45-minute survey, so let's pass out the cash. Has it become more a crutch for us, as opposed to really looking at more deliberative design changes that we might actually make? Stanley, have you shortened your surveys?

DR. PRESSER: Well, I haven't shortened any surveys recently on that basis; however, I did a literature review not too long ago which looked at the relationship between the length of the interview and the response rate. It turns out this is not a very big literature in terms of experimental studies. There are precious few studies in it, but the main bottom line is that for small changes, like 15 minutes versus 20 minutes, or even 15 minutes versus 30 minutes, or 60 minutes versus 75 minutes, experiments don't show reliable effects. Now, that's not to say that if you compared 5 minutes with 90 minutes, you wouldn't find such an effect, but that's not in the realm, I don't think, of what David was suggesting -- to change the 90-minute survey into the 5-minute survey.

DR. LINK: Are there other design issues, as well, beyond simply length, that essentially we use larger incentives because we don't want to make some critical changes in perhaps how we're going about doing things?

DR. PRESSER: Well, I agree with that completely, because I think that's what I was trying to suggest earlier, that there are things that I think we can try to do, both in terms of the arguments we make in terms of persuading people, but also there are potentially other things that we can do to change the experience of the survey. But those things are probably interconnected. The reasons we tell people that it's important for you to respond, and then the kinds of experiences they have. My hunch is these things are interconnected.

DR. BERRY: Just to enlarge on the fascinating IRB literature that I was alluding to this morning. One argument in the IRB literature is that incentives have an effect on IRBs -- that when incentives are being paid, IRBs are willing to entertain higher levels of risk for respondents. This is partly coming out of the medical IRB literature. So there's actually a flavor in the literature suggesting that incentives have a sort of slippery slope effect on the IRBs, as well as on the survey organizations. Actually, the IRBs aren't particularly worried about the survey organizations.

DR. GERI MOONEY: Actually, in 1992, we did an experiment with NSF which touched on exactly what Stanley was talking about, because NSF had this problem of identifying people who worked in the science and engineering workforce, which would require a big RDD study, let's say, which is not reasonable. The questionnaire they had was based on a sample drawn from the long form of the census, so they were looking for alternatives to doing this. One idea was to send out a four-page questionnaire as an initial screener, and then, once they had identified the scientists and engineers they wanted, to do a larger survey later. So we did an experiment, which I think actually turned out to be the first federal experiment that ever offered an incentive, and we gave $5 prepaid. There was a four-page questionnaire, there was a twelve-page questionnaire, and there was a twenty-page questionnaire that asked everything you needed in one shot.

And the purpose of the experiment, I thought, was to show that with $5 you could get as good a response rate with the longer questionnaire as you could with the four-page questionnaire, which we did. But if you turn it around the way we're talking about it now, it shows that the short questionnaire actually got as good a response rate without the incentive. If you made it shorter, you got as many people to respond.

DR. PRESSER: I should probably just clarify. That's a neat paper -- the paper that Geri is talking about is in the Journal of Official Statistics. I recommend it to all of you; it's a good paper. But I should clarify that the literature review I described earlier, about the lack of a relationship between the length of the interview and the response rate, has to do with interviewer-administered surveys, and that's not true for self-administered surveys.

DR. LINK: Another interesting area that I want to make sure that we cover is multi-mode surveys and other newer forms of surveying. I'm going to take cell phone sampling as an example, which is where a lot of folks are starting to conduct research. And here you have that interesting problem with some respondents where they actually incur a real cost, depending on their cell phone plan, but let's just say they incur a real cost for actually receiving that phone call. On the one hand, I think there's reasonable agreement that you really have to cover that cost, and then, perhaps, adding an incentive on top of that. But just in terms of reaching cell phones, and some of these other newer types of approaches that we're using, do you have any kind of insights, or anyone out there conducting research in those particular areas: how those incentives are used, how that use might be different from previous types of incentive use, and how respondents use them, that type of thing?

DR. LAVRAKAS: Just to make a comment, to follow up on what Michael was asking us to talk about. There's a report from a task force that AAPOR set up last June; several of us in the room served on that task force. The report went to AAPOR Council at the end of January, and we anticipate broader dissemination some time this month. It's about the use of cell phone sampling -- cell phone numbers -- in surveys in the U.S. That's what the report is about.

We made a very clear distinction between remuneration -- a courtesy, in a sense, for someone who's incurring a cost -- and incentives. We also noted that not a lot of people will necessarily want that remuneration, for a variety of reasons; one being that they have to give some kind of identifying information to allow you to send it to them, so offering it as a courtesy means, from a cost standpoint, that you'll incur the cost for fewer than all the people you interview. But beyond that, the field is wide open, and that's, I guess, where we want responses -- whether people have been using incentives, beyond remuneration, with cell phone respondents.

DR. LINK: Other comments, anyone? Well, a few folks in here are doing some research on that. Ed?

DR. EDWARD SPAR: I have a question going back to the papers this morning. Does it make a difference whether you consider an incentive to be payment for services, versus thanking people for their services?

DR. KULKA: In my brief remarks earlier, I was trying to get at that a little bit, where we have this balance between the token of appreciation -- and actually Don mentioned this issue -- versus paying people for either out-of-pocket expenses or other types of burdens. In fact, based on our morning discussion, over lunch I was thinking about the question of what the circumstances are in which we think it's obvious that incentives or some type of remuneration is necessary. And I would argue that the case one would probably come up with would be those cases where one has to do a number of enormously difficult things. For example, the old Air Force Health Study that I'm aware of. People had to fly to -- it wasn't actually that bad, they had to fly to La Jolla.

(Laughter.)

DR. KULKA: But they had to go there. They had to undergo multiple days of examinations, at least half a dozen interviews, et cetera, and then they had to do a whole lot of other things as they went. Well, they received a fairly modest incentive for that. I think it was $100 or less, plus the other expenses, and they could take their significant other with them. I don't know if that was an incentive or a disincentive, but there are certainly circumstances in which you're either paying for out-of-pocket things, or you're recognizing that you're asking for extraordinary things. And I'm aware of a number of surveys in the federal system where we're asking people to do a number of extraordinary things. I think we really need to distinguish that.

Having talked on that side, the other side of the issue is, I think, that maybe, to a certain extent, we've lost track of the places where the token of appreciation might actually be effective. As I said earlier, for some of the one-time surveys -- the base case that was talked about in `92, and has been elaborated recently -- there are places where we don't deviate all that much from that base case. And, yet, we tend to go to relatively significant incentives, rather than something that would simply be regarded as a token, because you're going to participate. And I think we really need to distinguish those parts of the equation. Lumping them together when we look at these response rates, or these summaries, I think is a disservice to what we need to do going forward.

MR. RYAN ARNOLD: Ryan Arnold, Claritas. My question is more about the level of the incentive. For some studies, I'm wondering about a design where, for the first couple of contacts, you would not give an incentive. Then, if they haven't responded, for the next couple of contacts you increase that to a $2 incentive. And then a couple of contacts later, $5, and $10, until you get the sample size you need, or the response rate you're looking for. The plus of that is that it's cost-effective, so you're not giving everybody a $10 incentive. But I'm wondering if, long-term, we're training respondents to wait for that $10 offer, instead of taking the survey initially, or at the lower level of incentive.

DR. KULKA: I don't know that there's a direct answer to that question, the training question, but the closest probably comes from studies where we do train people over time, which are panel studies. And there's a significant debate about how we do that. The hypothesis is that if you pay people more, or pay some people more, during an early wave of a panel study, their expectations will go up exponentially, so that later you can't pay them less -- you have to keep going. In fact, there are several studies ongoing with that assumption.

Eleanor Singer raised the issue a while ago, and I don't think anybody else is here that might know, that the Health and Retirement Survey actually did experiments with this. And they found that when they paid extra after the first round, that they could drop back to the base incentive and not lose their panel. So there's some evidence on both sides of that. You could argue that's a training effect, or you could say that there are other things that engage people in that study, but that's the only thing I'm aware of that really speaks to this issue.

DR. LINK: The participants here today come from a diverse set of research backgrounds, working in any number of areas, and one of the things I know we want to get out of this workshop is, what is the future research agenda? Paul touched on that, and gave a number of examples today, and I think things have cropped up during our discussions. But I want to hear from the rest of you all: what are some of those areas that you really feel, either from your work, or from what you've heard today, need to be the critical areas that we're looking at in terms of developing that research agenda, so we can get -- as it was phrased this morning -- a “robust science of incentives”, a robust understanding.

DR. JANICE BALLOU: Yes, as I was preparing to come to the conference -- Janice Ballou at Mathematica. This may be a far-out thought, but scientific research is supposed to be systematic, and with Paul putting up those ideas of things that we could put on our agenda, I had this wild idea: why don't we do something like the response rate calculation, with different attributes of incentive surveys, so that people can track them in a systematic way, and we could get more data across studies that are consistent? Because, oftentimes, we're all doing our own different things, and sometimes it's driven by clients, and a whole range of different things we've raised. But if there was some template, at least, of what we're calling some of these things, and when we're doing them, then at the end of an incentive survey you could say, I've got five out of these ten things that we should be looking at. And then we could start to pull them together, and maybe start to build a knowledge base on some of these things. So that's my wild thought -- a new little book like that on how to track your incentive success rate.
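As one way of picturing the kind of standardized template Janice is proposing, here is a minimal sketch of the attributes such a record might track; the field names are hypothetical illustrations, not an agreed-upon standard.

# A minimal sketch of a standardized incentive-study record of the kind
# described above; the field names are hypothetical, not a proposed standard.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncentiveStudyRecord:
    study_name: str
    mode: str                        # e.g., "RDD telephone", "mail", "web", "in-person"
    incentive_form: str              # e.g., "prepaid cash", "promised cash", "gift"
    amount_usd: float
    differential: bool               # were different groups offered different amounts?
    timing: str                      # e.g., "advance letter", "refusal conversion"
    response_rate: Optional[float] = None
    nonresponse_bias_study: bool = False

# Example entry for a hypothetical study.
record = IncentiveStudyRecord(
    study_name="Example web survey", mode="web", incentive_form="prepaid cash",
    amount_usd=10.0, differential=False, timing="advance letter",
    response_rate=0.42, nonresponse_bias_study=True,
)
print(record)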

DR. LINK: Bob?

DR. GROVES: I have one wish. I think our treatment of the literature has been really quite sloppy. Let me tell you what I mean by that. In randomized clinical trials, where the control group gets a placebo and you're testing the efficacy of a drug, if you infer that the drug worked, the counterfactual is the placebo. It's very clear. In incentive studies, the counterfactual is all over the place. Sometimes a study uses a lot of call-backs and refusal conversion, sometimes it's limited effort. And we have glommed together those studies, and even the meta-analyses say, gee, incentives work, but what we haven't been careful about is: they work relative to what? And we have to be more careful about that. And the agenda that Paul was laying out, I would argue, should be more careful than in the past about laying out the control group conditions.

DR. LINK: Mike?

DR. BRICK: Mike Brick. I guess my thought is: go back to `92, and where it sort of came out was looking at mean square error per unit cost. And that's a wonderful goal, and we're no closer to it now than we were then. And we're not going to get a whole lot closer, because we measure bias so poorly. We can measure response rates well, but that tells us very little about the bias. And when we look at things, we look at composition, and that may not tell us anything about the bias of the estimates that we're interested in, either. And so maybe what we ought to do in our research agenda is make cost-effectiveness the primary objective, and then see if we can gain some other information about non-response bias, measurement bias, and other types of things as we go through this. But I think the standard should be: is it cost-effective or not? And that may not strike some other people as being a scientific goal, but we're in a business here, and that's what our business should be about. Can we get something out that makes sense, and works for the industry?
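For readers less familiar with the criterion Mike refers to, the standard decomposition is worth writing out; the notation here is generic rather than drawn from any particular study. For a survey estimate $\bar{y}$ produced at total data-collection cost $C$, designs are compared on

$$\frac{\mathrm{MSE}(\bar{y})}{C} \;=\; \frac{\mathrm{Var}(\bar{y}) + \mathrm{Bias}(\bar{y})^{2}}{C}.$$

The variance term and $C$ are routinely measurable; the bias term, driven in part by non-response and measurement error, usually is not -- which is why response rates alone say little about whether an incentive improves this ratio.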

DR. LINK: Back in the back corner back there.

MS. LATOYA LANG: Yes. LaToya Lang from CMOR. I'm approaching this whole analysis from a completely different standpoint. I'm the one that looks at legislation and the laws out there, and protects the business that you all engage in. So from my standpoint, I believe the future should focus more on the ethical side. Why do I say that?

There are currently two states that have introduced legislation to prohibit all gifts to physicians, and that includes research incentives. So one thing that needs to be considered is, are we using the proper term? Should we use the term "incentives", or is the term honoraria, or is the term gifts? I need some guidelines on that type of situation.

Also, it requires a distinction for me to determine whether it is a token of appreciation, or a payment for services. Legislators care about the difference between those two things, and if it's defined as a payment for services, it can be perceived as income. And that's not the perspective that you all want to go after, so the future should focus on the ethical obligations, the ethical perspective, of this whole industry, because, again, legislatures are taking action now. They're listening to what's going on in the world, and physician gifts are the first aspect of incentives being challenged. And it's only going to continue. I have two states challenging this right now, and I'm sure many more will follow, so that's where I think the focus should be.

DR. LINK: Don?

DR. DILLMAN: I wish somebody would do some really good research on the use of incentives specifically in relation to the web. And the kind that I'm thinking about is this: when we started with the web, the slope we were led down was that we can't pay incentives ahead of time, so we tried some nifty ideas for post-payment. And then the whole post-payment thing has come up really, really big.

I think there's a significant difference between prepayments, post-payments, and token payments. And I wish that we could really get that into the web research. The reason is that I think we have to learn how to encourage people to go to the web. And more and more, I'm convinced -- at least I think I am -- that it's not that they can't do it; a lot of them can do it, and do it fairly well, but they kind of prefer not to. And this is something that could really change the balance, but I just haven't seen the good experiments done there yet. I wish they would be.

DR. MOONEY: Geri Mooney from Mathematica. Actually, we've done three experiments on those topics in recent years, and the first -- there are two of them where we sent $10 prepay in the advance letter, and we held off sending the mail questionnaire for three weeks hoping that people would go on the web. And one was a study of college freshmen, and the other one was a study of people who had been in doctoral programs some time in the past 20 years, so they're people who were computer literate.

Interestingly, in both of those studies we got the immediate hit -- people going on the web. It was successful from that viewpoint. But what happened after that period was that people just ignored the mail questionnaire. We'd send them a mail questionnaire, and send them another one, and they just wouldn't do it. So we did get a great web response, but we ended up having to go to the telephone more to get the response rate up, so it ended up costing us more money in the end because of all the telephone follow-up, which was not anticipated.

And then we did an experiment last year with some college students, where we gave them a differential incentive up front, and we said, if you respond by web we'll give you $30, and if you respond by mail we'll give you $20. And among college freshmen -- and I think this is something that changes over time; for the college seniors it didn't have that much impact -- the freshmen really liked the idea. And the response rate was so great that we didn't have to do any telephone follow-up, because so many people responded by the web. But, again, there are so many studies, so many conditions, so many different circumstances, that it's really hard to draw great conclusions across a lot of work.

DR. LINK: Other questions from the floor? Last words of -- here's one more.

PARTICIPANT: I just want to follow up on the last thing you said, that there are so many studies and so many different versions. I think one line of thought that might help this, following up on something that David said earlier, is that as long as we don't know why incentives work the way they work, and what makes people think along these lines, we can think of many more versions of studies to do without improving our knowledge of the mechanism.

DR. LINK: Any final quick parting words of wisdom from the panelists? I think it is time for the seventh inning stretch. Let’s all thank the panelists up here. I think they did a great job.
