COPAFS
 

Survey Respondent Incentives: Research and Practice

March 10, 2008

Session 3: Integrating the Day and Personal Perspectives

Moderator: Judie Mopsik
Panelists: Geraldine Mooney, Robert Groves, Clyde Tucker, and Jennifer Madans

Ethics, Dr. Geraldine Mooney

The first panelist, Dr. Geraldine Mooney of Mathematica Policy Research, discussed her views on the ethical debates surrounding incentives. She first examined the question, “Why are the ethical concerns of incentive use being debated today?” At the 1992 seminar on incentives, ethical issues were hotly debated because OMB was reluctant to approve the use of incentives in surveys. OMB was concerned that incentives would (1) affect data quality, (2) be too coercive, (3) influence future decisions on survey participation, (4) be a slippery slope that would lead to ever-increasing costs of surveys, and (5) undermine the social contract and civic duty of survey participation.

Ethical concerns from the 1992 seminar. Dr. Mooney then examined each of these concerns using the framework “Have the concerns from the 1992 debate over incentives panned out?” The first topic was the effect incentives may have on data quality. Her review of the literature indicates that incentives have not harmed data quality; for example, item non-response has not been affected, and in some instances data quality has even improved. One example of improved data quality is a longitudinal mail survey in which contact information was collected for future data collection efforts; the quality of that contact information was much better when an incentive was provided.

Next, Dr. Mooney examined the concern that incentives, particularly when offered to vulnerable populations such as low income individuals, are too coercive. Chiefly, the concern was that these vulnerable populations might take on additional risks, or be more likely to agree to risks, if they receive an incentive. Dr. Mooney cited literature from Dr. Eleanor Singer, who looked at medical research on vulnerable populations. Research by Dr. Singer and others has shown that people who receive incentives are no more likely to take risks than those who do not receive incentives.

The third concern was how incentive payments might affect future participation in surveys. Would people come to expect incentives, and so be less likely to participate in surveys that do not offer them? Dr. Mooney referenced research from Dr. Singer and Dr. Robert Groves to address this topic. In this particular study, some sample members received an incentive and others did not. At the end of data collection, the researchers told the respondents that some were paid and some were not. A majority of the respondents (74 percent) reported that they did not think this was fair, but 82 percent still participated in later rounds of data collection. As a side note, in a further part of the same study, researchers explained to respondents that some had received an incentive and others had not because it is difficult to get everyone to participate and a broad range of respondents was needed; the typical reaction was essentially, “Oh, okay.”

Dr. Mooney next moved on to the concern that incentives would be a slippery slope leading to ever-increasing costs. On this concern, she reported mixed findings. For example, a differential incentive by mode has provided enormous savings. To illustrate this point, she described a project that offered sample members a higher incentive to complete a survey on the web, which resulted in much less phone follow-up and, therefore, lower costs. However, other studies have not produced such large cost savings. One study provided a prepayment to sample members and asked them at that time to complete the survey on the web; while the initial response was strong, those who did not complete on the web all but ignored future questionnaire mailings, which led to more phone follow-up and higher costs.

The final concern from the 1992 seminar that Dr. Mooney addressed was that incentives would undermine the civic duty people feel to participate in data collection efforts. Dr. Mooney asked, “Where did this assumption come from that it is someone’s civic duty to respond to data collection requests?” She wondered whether it is reasonable to assume that people ought to be giving us all of this time. To illustrate this, Dr. Mooney mentioned that, although she thought it was her son’s civic duty to take out the garbage while growing up, he didn’t feel this way. That is, until he was provided with an incentive. While this is a humorous illustration, the situation may not be too dissimilar to data collection incentives, in that there is a basic economic principle in play: people act in their own self-interest.

Present-day atmosphere of data collection. Moving away from the incentive concerns of the 1992 seminar, Dr. Mooney turned to the present-day atmosphere of data collection. First, she posed the question: is it more ethical to pay people to participate or to ‘hound them to death’? The social context for participating in data collection requests has changed since 1992. Not only has caller ID come into existence, allowing people to avoid researchers in a way they previously could not, but people are also inundated with marketing surveys, telemarketers, and so on, and it is harder for the general population to differentiate between good research and all of the other calls. It also seems that people have less discretionary time. The pace of life has picked up, and work is omnipresent, as evidenced by the laptops, cell phones, and BlackBerrys that keep people connected to work at much higher rates. Additionally, there has been an increase in women in the workforce and in single heads of households (usually female). Furthermore, a recent study says that the average American adult sleeps six and a half hours a night. These changes all contribute to the impression that the 30 minutes that someone may give us now to complete a survey is a greater sacrifice than it was 30 years ago.

So, is it fair to assume that people should give us all this time altruistically? This is not to suggest that altruism doesn't exist, because it certainly does and it's an important factor in why some people participate, but there are other factors, too. For instance, there is the salience of the survey topic, but if salience and altruism aren’t enough to convince someone to participate in a particular survey, an incentive may be what brings the decision to a tipping point. If we really need to hear from a particular sample member, and if getting a response rate of a certain level is important, then maybe an incentive is needed and it is just the cost of doing business. Sample members always have the option to say no, and incentives do not coerce people to respond. Respondents may be given a check for participating, but there are many who don't cash the checks. For those who don’t cash the check, it may have been enough that the survey was on a topic that was of interest to them.

Summary. In summary, no single answer fits all projects when deciding whether to use an incentive or what level of incentive to offer, and it is not unethical to pay incentives. Additional research on differential incentives is necessary. Some people are willing to respond because the subject is of interest to them, or because they are feeling altruistic; other people need a little extra push, so we give them some money. In the current data collection climate, we are all facing declining response rates, and the social climate is shifting away from participation. Currently, we use incentives to counteract this, paying money to tip the balance in favor of participation and to give people enough reason to participate. Perhaps what is needed is more research on what else we could do to tip the balance in favor of participation. There may be other ways, and other things that we could offer people besides money, but money seems to be the way we know how to do this right now.

Incentives as a Cost-Benefit Issue, Clyde Tucker

Clyde Tucker, of the Bureau of Labor Statistics, chose to speak on incentives as a cost-benefit issue. He first, however, readdressed a statement from earlier in the day: the point that we can’t lose sight of the role of the interviewers, because the interviewer is the key part of the transaction. Since interviewers are integral to the survey transaction, their thoughts about incentives and the ways in which incentives affect them and their behavior have a lot to do with how effective, or ineffective, incentives are going to be.

While Mr. Tucker admitted that the philosophical issue of whether to offer incentives can keep many ethicists and lawyers employed, he chose to speak about incentives from a survey methods perspective. There is a real need to look at the costs and benefits of incentives, but unfortunately we don’t know much about those costs and benefits.

The costs of incentives. First, Mr. Tucker addressed the costs. He mentioned that, for the large surveys done by large organizations, it is difficult to get a handle on costs at the level of detail necessary to calculate the cost-benefit of incentives. While some aggregate costs are available, it is difficult to get the incremental costs of the additional follow-up and non-response follow-up techniques that might be avoided if incentives were offered. In some instances it is possible to see top-line numbers on the costs of doing surveys, but you are often left with the question: Do I have the extra money for incentives as an add-on to the current budget that I fought so hard to get?

The benefits of incentives. Moving to the benefits side, he noted that currently we only know about the effects of incentives on response rates. And while we may know this generally, we may not know the effect incentives will have on any particular survey: because of the interactions between a survey’s topic and the way it is fielded, every survey is potentially different from every other survey for which we have information.

Even knowing what effect an incentive will have on the response rate in any particular study, there is still a decision to make about whether the increase is big enough to justify the cost. Additionally, it is important to ask whether these effects can be replicated over time.
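To make the trade-off concrete, the sketch below works through the kind of back-of-the-envelope calculation Mr. Tucker is calling for. Everything in it is an illustrative assumption (the function name, the cost figures, and the response rates are hypothetical, not data from any survey he cited):

```python
# A minimal sketch of the cost-benefit question: does an incentive pay for
# itself by raising the response rate and reducing expensive follow-up?
# All numbers below are hypothetical.

def cost_per_complete(n_sampled, response_rate, base_cost_per_case,
                      incentive=0.0, followup_cost_per_nonrespondent=0.0):
    """Total field cost divided by the number of completed interviews."""
    completes = n_sampled * response_rate
    total_cost = (n_sampled * base_cost_per_case
                  + n_sampled * incentive  # prepaid to every sample member
                  + n_sampled * (1 - response_rate) * followup_cost_per_nonrespondent)
    return total_cost / completes

# Hypothetical scenario: a $5 prepaid incentive lifts the response rate
# from 55% to 63% and halves the follow-up effort per non-respondent.
without = cost_per_complete(10_000, 0.55, 12.00,
                            followup_cost_per_nonrespondent=30.00)
with_inc = cost_per_complete(10_000, 0.63, 12.00, incentive=5.00,
                             followup_cost_per_nonrespondent=15.00)
print(f"cost per complete, no incentive: ${without:.2f}")   # ~$46.36
print(f"cost per complete, $5 incentive: ${with_inc:.2f}")  # ~$35.79
```

Under these assumed numbers the incentive pays for itself; with a smaller response-rate gain or cheaper follow-up, the conclusion flips, which is exactly why the incremental cost data Mr. Tucker describes are needed.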

Currently, the full effect of reducing non-response on data quality is not known. Response rates are generally what get reported because, even with the multiple variations in how response rates are calculated, we do at least know how to calculate them. Mr. Tucker pointed out that we don't know how to measure other areas as they relate to the costs and benefits of incentives. For example, how large is the non-response bias compared to the other sources of error? And do we have any idea whether the use of incentives will negatively affect respondent or interviewer behavior in areas other than response?
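For reference, the most commonly reported of those variations is AAPOR's Response Rate 1, stated here from the general survey literature rather than from the seminar itself:

```latex
\mathrm{RR1} = \frac{I}{(I + P) + (R + NC + O) + (UH + UO)}
```

where I is complete interviews, P partial interviews, R refusals and break-offs, NC non-contacts, O other eligible non-interviews, and UH + UO cases of unknown eligibility.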

Summary. Mr. Tucker summarized by stating that, in addition to not having the cost data available, the field also lacks a scientific understanding of the effect of incentives. Time should be spent on better measurement of costs and benefits, which is as important as dealing with philosophical issues.

Suggestions for the Future of Incentive Research, Dr. Robert Groves

The next speaker from the panel was Dr. Robert Groves from the University of Michigan. Dr. Groves spoke on three topics related to the current field of incentive research: (1) the need to re-conceptualize incentive studies, (2) the need for theory development to shape future incentive research, and (3) advice for OMB’s regulation and approval of incentives in survey data collection.

Re-conceptualize incentive studies. Currently, studies on incentives are myopic in their focus on questions that relate to increasing response rates. It is now necessary to re-conceptualize the focus away from response rates; for example, while many studies in the current literature could have focused on other topics, such as costs or impacts on estimates, they did not.

There are three big areas that are overlooked when the focus is solely on increasing response rates. First, such studies do not address the fact that the difference between respondent and non-respondent estimates appears to increase as the response rate goes up; in some sense, there is no nice relationship between response rates and non-response error. Second, it seems clear that there is enormous variation in non-response error across estimates within a single survey, so the notion of talking about “the” non-response error of a survey is ludicrous; we have to instead talk about how it varies across estimates. Third, there is the problem of not looking at cost implications.

The current literature does point out one important notion: we’re not off the hook. There are enormous non-response errors documented in the literature. However, the problem is not very well predicted by response rates, so there's work to be done. There is a need to restructure the questions for future incentive research. Future research must look at error sources. There are glimpses of the connection between increasing response rates through incentives and their effects on response errors, so there is a need to look at both non-response and measurement errors.
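One way to see why response rates alone are poor predictors of non-response error is the standard decomposition of the bias of a respondent mean, stated here from the general non-response literature rather than from the session itself:

```latex
\operatorname{Bias}(\bar{y}_r) \;=\; \frac{m}{n}\,(\bar{y}_r - \bar{y}_m)
\qquad\text{or, in stochastic form,}\qquad
\operatorname{Bias}(\bar{y}_r) \;\approx\; \frac{\operatorname{Cov}(p, y)}{\bar{p}}
```

Here m/n is the non-response rate, \bar{y}_r and \bar{y}_m are the respondent and non-respondent means, p is the response propensity, and \bar{p} is its mean. The rate enters only as one factor: a high response rate can still carry large bias if respondents and non-respondents differ sharply on y (that is, if propensity and y are correlated), and a low rate can be nearly harmless if they do not.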

Theory development needed for future incentive research. The second point is the need for incentive studies to be based in theory. One major problem with incentive literature now is that it lacks focus. By combining different incentive amounts, payment methods, and payment timing schemes, thousands of experiments can be created. With this current line of thinking, there is no end to the number of experiments that we can conduct. If the field moves to an approach of creating and conducting experiments that are based in theory, the research will be much more focused.

One area to develop theory is in conceptualizing the various influences that a person faces when offered an incentive. Current literature says that it is easy to label incentives (cash or in-kind) as an extrinsic benefit of participation, but that there are also intrinsic personal gains to completing a survey. These intrinsic benefits may be the fulfillment of civic duty obligations, or having an interest in the topic and being able to discuss that topic. One way to think through the relevant theoretical constructs is to compare those intrinsic and extrinsic benefits, and decide under what circumstances they play off of and against one another.

Another area of theory to develop is the link between non-response rates and non-response error. First, to do so, it is necessary to think about when sensitivity to incentives is related to variables that affect the key estimates of the survey. This is a challenging task because we have to consider the possibility that the influence of incentive-driven non-response error could vary across items in a questionnaire. Based on current experiments in which the non-incentive condition shows that participation is driven by topic interest, we get some hints that incentives can bring into the respondent pool people who are uninterested in the topic. To the extent that that interest is related to the expected values of the variable you're studying, you might have a circumstance in which incentives reduce non-response error. Second, we need to consider when survey participation without incentives is driven by variables highly correlated with incentive effects. Here, the literature notes that poorer people appear to be more sensitive to a given incentive amount. If income is a key correlate of a set of survey variables, you begin to worry about the linkage between the non-response rate and incentive-driven error.

Further, there are many other purely theoretical questions on which the literature has not yet focused. For example, one interpretation of the finding that a prepaid incentive produces larger effects than a promised incentive is that the receipt of the incentive is viewed by the sample member as an act of kindness (from the interviewer, or from the agency collecting the data). This logic would say that a heuristic is used in the decision-making process, such as “they did something nice for me, I’ll return the favor”; in other words, there is a notion of reciprocation. However, if that is the only mechanism by which incentives produce an effect, then we could predict that if the receipt of an incentive were made very salient, so that decision making was explicitly related to the incentive, the difference in effect between a prepaid incentive and a promised incentive would diminish or disappear. A key theoretical issue, then, is when incentives produce heuristic decision making versus more careful, central-route decision making. This is where the difference between social exchange and economic exchange kicks in.

At issue here is a set of interactive effects of incentives. One involves income; another is that incentive effects appear to vary with the burden of the request; and yet another is that incentive effects appear to vary with interest in the topic. A theoretical issue is: how do those statistical interaction effects occur? What are the mechanisms by which they occur? Why does the incentive effect change when the survey request is very burdensome rather than less burdensome? What decision-making processes produce that change?

Another issue to be developed and researched is the two or three backfire effects discussed in the literature. For example, suppose you offer members of the treatment group in an experiment an incentive and they produce a lower response rate than the group that gets either a smaller incentive or no incentive. Why does this happen? How could the application of an extrinsic benefit produce decision making that yields a lower propensity to respond? These unique cases are worth restudying, because they may help us understand decision making.

Theory should also be explored surrounding the sequential nature of decision making, particularly in this world of mixed-mode surveys. If a potential respondent is offered one incentive, and then at a later date is offered a different incentive, what is the sequencing effect? How do we go through that decision making process? This is a big hole in the literature.

There are other dependent variables, and the ethics of incentives are a key factor in the discussion. One of the real questions is: whose ethics are going to drive this decision? It's easy to impose our own ethical standards on these decisions. However, it's harder to know whether we can be justified in allowing the respondent's ethical system to drive our decisions or not. This debate has not yet occurred, so whenever you discuss ethics this is worth talking about.

Finally, timeliness is a great variable that the survey world has walked away from. There’s a value to timeliness on some surveys that is extremely important; for example, if data are not available at a particular point in time, then the survey is worthless. When incentives improve the timeliness of data, that factor should be brought into the model. However, there are no good models for that now.

Advice for OMB. The third topic Dr. Groves spoke on was advice for OMB in reference to their documentation of incentive use. Dr. Groves first acknowledged the discipline that OMB is attempting to enforce by having documentation on incentive use. However, there is more that OMB could implement that would benefit the whole field. Dr. Groves imagined a web page with an archive documenting experiments across different surveys, categorized by key attributes that we could learn from and analyze over time.

This would provide value to the survey research field by identifying subgroups that are consistently different in their reaction to incentives across surveys. The more understanding of this there is in the field, the better the field will be able to predict when non-response error might arise with or without incentives.

This final piece of advice may be extreme. However, if you look back around 60 years, the federal government decided that probability sampling was probably a good thing. What were the ingredients in that decision, and why are we still comfortable with it? It is because we believe that unbiasedness properties, with appropriate estimators, come out of probability sampling. We also believe in its measurability properties for sampling variance; we like to be able to calculate standard errors.

In today’s discussions a few common observations were raised: (1) the impact of incentives probably varies over time (in that incentives produce their own impact on the population, shaping later incentive effects), (2) we acknowledge that the marginal effects of incentives are a function of the base protocol to which they are being compared, and (3) we acknowledge that non-response error varies by estimate within a survey.

Wrapping up, it seems that we have the same case for measurability of incentive effects that was made for sampling variance decades ago. That is, since the effects of incentives on non-response error, response rates, and possibly even cost could vary by all of these attributes, how can we tolerate side methodological studies done once every 10 years? What informational value do they have for a particular estimate at a particular time, from a particular survey?

Instead, Dr. Groves suggested that we acknowledge there is value in a permanent randomized experimental component for incentives in every survey. Why should we tolerate having un-measurable incentive effects in surveys, merely because we're giving incentives to everyone or we're giving incentives to no one? Why don't we demand through randomized experimental designs of a portion of the sample those measurable properties on the impacts of estimates, impacts of cost, and so on?

If such a mandate took place, Dr. Groves suggested that he would embed these experiments within a responsive or adaptive design that would start out with randomized treatment groups assigned different incentives. This would allow design decisions to be based on cost-error trade-offs in the earlier stages of data collection, and the design could then be fixed for the later replicate sub-samples used for the final estimate.
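A minimal sketch of that two-phase idea appears below. Everything in it is an assumption for illustration (the incentive arms, response propensities, costs, and the cost-per-complete criterion are hypothetical); Dr. Groves's actual criterion would weigh error as well as cost:

```python
# Sketch of a responsive design: phase 1 randomizes small arms across
# incentive levels; phase 2 fixes the design at the arm with the best
# observed cost per complete. All parameters are hypothetical.
import random

ARMS = {0: 0.62, 5: 0.70, 10: 0.74}  # assumed response propensity by incentive ($)
COST_PER_ATTEMPT = 20.0              # assumed non-incentive field cost per case

def run_phase(n_cases, incentive):
    """Field one arm; return completes and total cost (incentive prepaid to all)."""
    completes = sum(random.random() < ARMS[incentive] for _ in range(n_cases))
    cost = n_cases * (COST_PER_ATTEMPT + incentive)
    return completes, cost

random.seed(8)

# Phase 1: small randomized arms, one per incentive level.
phase1 = {}
for incentive in ARMS:
    completes, cost = run_phase(300, incentive)
    phase1[incentive] = cost / completes
    print(f"phase 1, ${incentive:>2} incentive: ${phase1[incentive]:.2f} per complete")

# Phase 2: commit the remaining sample to the best-performing arm.
best = min(phase1, key=phase1.get)
completes, cost = run_phase(5_000, best)
print(f"phase 2 fixed at ${best}: {completes} completes, "
      f"${cost / completes:.2f} per complete")
```

In a real responsive design the phase 1 arms would also feed estimates of non-response error by arm, so that the phase 2 choice reflects the cost-error trade-off rather than cost alone.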

What We’ve Learned and What We Still Need to Learn, Dr. Jennifer Madans

The last speaker on the panel was Dr. Jennifer Madans of the National Center for Health Statistics. Dr. Madans asked the questions: (1) What do we know now about incentives that we didn't know 15 years ago? and (2) What do we really want to know 15 years from now about incentives?

Currently, there are a lot of individual pieces of information in the field from the many studies and experiments that have happened in the last 15 years. Most of this information is never publicly reported but resides with OMB. After experiments are completed and the answer is found, the information isn’t always added to the larger body of knowledge. There is no way of combining all of this information into one body of knowledge, and we don’t know where the data are. Because of this, there is no conceptual framework.

Multiple variables and unique results. Dr. Madans pointed out that the state of affairs surrounding incentive research is that, while we have lots of pieces of information, we have no clear guidelines and no real understanding of the process. While we have many different experiments, we do not know whether the results are unique to a particular study or whether there are underlying principles that can help determine if we should use an incentive. Unless we are willing to combine this work into a body of knowledge, we may come back to this subject in 15 years and be in exactly the same place we are today.

In each particular study and experiment, researchers can show how they arrived at a finding, but these are all unique cases. There are many varying factors, such as different respondents, topics, burden levels, and sponsors. We must determine whether a result from an experiment is unique to that particular study, or whether there are underlying principles that we can learn and apply to new data collection efforts.

In order to move this thinking forward, our future work should be guided by continually asking these questions: Should I use an incentive? In which situations should I use an incentive? How should I use incentives? How do I evaluate the effectiveness of the incentive? And finally, how do I add to the literature and the knowledge base? First though, as a community, we need to answer the final question of how to add to the knowledge base, otherwise we’ll be flying blind. However, that may be OK, too. Because there are so many different variables for each study, the decision to use an incentive may continue to be unique to each study. If that’s the case, then Bob's final recommendation is the way to proceed; you really should make your decisions regarding incentives for every survey individually. However, with this method you will not be able to build on and add to the knowledge base.

Dr. Madans emphasized that, to her, building the knowledge base is important: someone needs to take on the challenge of building this body of knowledge and of designing surveys and experiments so that results are cumulative. While we may not yet know how to do that, we should start by taking a wider view of the results, examining how the experiments were done, and classifying the results and experiments in a way that will allow us to determine the effects of incentives.

Furthermore, we particularly need to take into account that, in addition to measuring response rate, we should also measure data quality. In order to do so, however, we need to determine how to measure data quality, and how to measure it best.

Incentives should be approached as a bigger research project and not as an individual topic.

Ethics and incentives. Dr. Madans then touched on the ethics of incentives. First, she mentioned that we seem to be more comfortable discussing how to develop a true research program for incentives than figuring out how to deal with the ethical issues. She was not originally going to speak on the ethics of incentives, but she chose to do so because she believes that we still have to deal with the ethical issues. There are different ethical issues to address when approaching the decision to use incentives at all, and whether to use a differential incentive.

The survey research community alone may not be able to work its way through all of the ethical issues, but there are others we can call on who do this for a living. Ethical issues need not be approached as a question of my ethics versus your ethics. Our individual ethical standpoints may not provide the answer, but the issues can be worked through with an ethical analysis that organizes the thought process and traces the ethical implications. If that is not something survey researchers are trained to do, then as a community we should seek guidance from those who are. These are real questions that are not easy to answer, and they are very interesting philosophical questions.

Summary. Over the past 15 years the field has really moved. As a challenge to ourselves, we now need to approach both the ethical issues and the methodological issues as a research program whose ultimate outcome is a knowledge base that allows us to look for consistency and underlying principles. The alternative is to accept that incentives are just one of many tools we have to improve quality. Investigators will have to continue to justify their incentive decisions, as we do in our OMB packages. While each decision may remain unique to its survey because there are simply too many variables, Dr. Madans hopes that we can come to some kind of conclusion over the next 15 years.

Audience Discussion

Following the panel’s remarks, the floor was opened to audience questions and discussions.

Audience question 1. The first question, from Stanley Presser, asked the panel to discuss why, of all the many different things that every federal survey could be asked to routinely incorporate into its day-to-day activities, incentives alone should be chosen for randomization. Why not also select certain measurement issues and insist that they be carried out; for example, that X percent of the sample be devoted to experiments on a measurement issue?

Dr. Groves addressed this question and agreed that he would not be opposed to that idea. He feels, though, that incentives are a special case and deserve this level of attention. One reason is the ethical questions surrounding incentives: imagine you are OMB, worried about the New York Times article that will come out some day saying, “Do you realize we’re drawing samples of people, paying them to do these studies, and paying them inequitably?” There has to be a public defense of this, and being able to show that incentives make a difference scientifically is important. The second reason is the complicated relationship between non-response rates and error: without that experimental contrast, we will not accumulate information about which estimates might be most sensitive to these effects.

Audience question 2. Next, Richard Kulka addressed the point that incentives are one tool in the tool basket, which interacts with other factors. He asked Dr. Groves about the issue of essentially an ongoing living survey context, where you're making choices about phases based on your estimates. It seems that the concept of incentives is something you might do or not do within a given phase in that context. Mr. Kulka asked if Dr. Groves could talk a little bit about whether he thinks that makes sense.

Dr. Groves said that, if we conceptualized every survey design as subject to some uncertainties before we mount the design, then it would be prudent to examine design alternatives formally in order to discover, in some sense, the more optimal set of conditions.

The use of incentives fits into this quite well because it is an area of great uncertainty about whether all subgroups have the same incentive effects, and whether incentive effects impact non-response error. Because of that uncertainty, the use of incentives is a perfect candidate for beginning a design with a set of experimental treatments, and for using the intermediate results to choose a design that, it could be argued, would be more optimal. The criteria for that optimality are in the hands of the designer, and we could argue about them, but, Dr. Groves argued, without that information you cannot make progress.

Right now we are guessing. We do not know the sensitivity of response rates to incentives when we begin a survey, so we guess at it. We usually guess once, OMB approves or does not approve, and we run with it. That approach has very low information value for the design and could be far from optimal: you could be spending far too much money, or not offering enough. Viewing this as a sequential problem, in an adaptive sense, would be more prudent.

Audience question 3. The next participant asked about a point Dr. Mooney made, that there is a history of getting better contact information when incentives are offered. This makes perfect sense for phone interviews, but the participant wondered whether it also holds for in-person interviewing. She also asked, “When you need data quickly, do incentives produce it?” Particularly for in-person interviewing, is it more effective if the interviewer can offer an incentive on the spot after the interview than if the respondent has to complete a form and receive the incentive by mail?

Dr. Mooney answered that the study she referred to earlier in the afternoon was a mail survey; she was unable to speak to whether offering incentives improves contact information for in-person interviews. Likewise, regarding payment on the spot for in-person interviews, most of her experience has been with mixed-mode surveys (mail, telephone, and web), so she could not provide feedback on this subject. Dr. Groves mentioned that he, too, was unable to answer the question of whether offering an incentive on the spot, rather than with a delay, increases timeliness.

Summary of the Day, Judie Mopsik

Ms. Mopsik first acknowledged that this seminar is the first time such a group has gathered for a full day to discuss incentives. Usually, we devote an hour at AAPOR, or an hour and a half at ASA, but never such an intensive gathering of people who share these interests. Ms. Mopsik proposed that those who serve on planning committees for the major meetings keep in mind that the topic of incentives needs to be stratified a little, so that we can address incentive issues as they relate specifically to panel surveys, mail surveys, telephone surveys, or in-person surveys. As we move forward, we can start to clarify what is needed to begin building this archive of results and to point to effects over time, and we can work toward building a broader body of knowledge. The fact that something worked in a panel survey does not mean it is going to work in a cross-sectional survey.

Ms. Mopsik then mentioned that we need a paradigm shift in our approach to proposing incentives for OMB approval. Currently, when proposing incentives we think about the cost of the survey, the response rate, and the error, but we may need to step back and ask whether we can address these concerns by other means. Could a smaller sample size or a lower response rate still measure some of the same things and address the policy issues? The alternative is to continue struggling in the years to come with the mindset that if you do not get a 70 percent response rate you will not be able to publish, and therefore you have to do everything you can to reach it. This change will require a paradigm shift, which will not happen after one or even a few meetings.
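One way to frame that question quantitatively (an illustration from standard sampling theory, not from Ms. Mopsik's remarks) is through the mean squared error of an estimate:

```latex
\operatorname{MSE}(\bar{y}_r) \;=\; \operatorname{Var}(\bar{y}_r) + \operatorname{Bias}(\bar{y}_r)^2
\;\approx\; \frac{S^2}{r} + \left[\frac{m}{n}(\bar{y}_r - \bar{y}_m)\right]^2
```

where r is the number of respondents and S^2 the element variance. A smaller sample inflates only the variance term, while a lower response rate matters only through the bias term; if respondents and non-respondents are similar, total error can remain acceptable at a response rate well below 70 percent.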

One reason to think about this long term is that, several years ago, the NICHD, CDC, and EPA launched the National Children's Study, which will run until about 2025 or 2035. The kids who start in that study will be about 20 by the time it is finished. This raises an interesting issue around incentives. The study begins with the mothers and fathers, and then moves on to focus on the kids. How do you keep those kids in the study? When are they going to feel its value? The Framingham Men's Study started when its participants were adults, and they saw some value in being in that study, but Ms. Mopsik doubted that 5- and 6-year-olds will see any value until they are much older, perhaps after they have phased out of the study and we start learning much more about the data that were collected. So, understanding this issue is absolutely critical, and we have to keep meeting and talking about ways to understand it better.

Final audience question. Following her summary, Ms. Mopsik relayed one final audience question: Does anyone have experience with what kinds of incentives to give interviewers?

Neva Grey, from NORC, answered this question. She described an in-person field study with a sample drawn from a low-income population. There were language barriers, and the sample members were located in fairly rough neighborhoods. On top of these challenges, it was also a long survey, so incentives were offered to the interviewers. The interviewers were unaware of the incentives at the beginning of the study; the method was that, for a certain number of completes, interviewers were given $10 or $20 gift cards to Target or similar stores. As data collection was winding down and the response rate goal was not looking attainable, the interviewers who had received incentives really stepped up and continued working to help reach the goals.