Minutes of the December 5, 2008 COPAFS Meeting

COPAFS chair Ralph Rector started the meeting by describing the slate of candidates for the COPAFS Board. New positions include Chair Judie Mopsik, Vice Chair Felice Levine, Secretary Ken Hodges, Treasurer Don Muff, and new at large member Seth Grimes. Rector moved that the slate be approved, Nick Zill seconded the motion, and the slate was approved by acclamation.

Ed Spar. Executive Director’s Report

Ed Spar started his Executive Director’s Report by drawing our attention to a special two-sided table of budget numbers. One side presents the numbers as we usually see them, while the other reports budget numbers back to 2001, adjusted for inflation. Spar credited Louis Kincannon (and others) with preparing the adjusted numbers, which provide a clearer picture of the cuts some agencies have experienced. For example, the regular table shows the NCES budget increasing modestly from $80.0 million in 2001 to $88.4 million in 2008, but in adjusted dollars, the budget drops from $70.3 million in 2001 to $60.8 million in 2008.

Spar announced that the second half of the March 2009 COPAFS meeting will be devoted to presentations on proposed new standards for defining metropolitan and micropolitan areas. The primary issues concern the designation of combined areas, and the question of when and how often ACS commuting data should be used to update metropolitan and micropolitan areas. The COPAFS presentations are expected to coincide with a Federal Register notice on the topic.

Budget difficulties continue at NCHS, so as of 2009 it will collect only “core” vital statistics data, dropping the “enhanced” items used by a number of agencies. Spar explained that the cuts are not limited to vital statistics: the nursing home and hospital discharge surveys are being dropped, and the Health Interview Survey is being cut in half. The HIS cut will limit the geographic levels for which the survey can be used (national and perhaps region), and will have a negative impact on other surveys that depend on the HIS.

For those interested in the Census Bureau’s population estimates, Spar noted that the Bureau will once again control the county population estimates to independent state estimates, something it stopped doing around 1994. The Census Bureau also is establishing a committee of the Federal-State Cooperative Program for Population Estimates to review the process for challenges and revisions.

Spar described the House bill that would make the Census Bureau an independent agency and establish fixed five-year terms for the Census Bureau Director. The bill gained support from associations such as PAA and AStatA, but did not pass this year. However, there are expectations that it might be re-introduced.

Next, Spar described the obstacles to the cross-sharing of IRS data among the Census Bureau, BEA and BLS, and reported that work is underway to establish ways for agencies to share such data for statistical purposes. Much work remains to be done, but Spar said there has been some positive response, and expressed hope that the effort will move along next year.

The dates for next year’s quarterly COPAFS meetings are March 6, June 5, September 11, and December 4.

Bureau of Justice Statistics Report on Cybercrime Against Business.

Ramona Rantala. Bureau of Justice Statistics.

Ramona Rantala explained that the Department of Justice had already done pilot studies of cybercrime, but the current efforts were prompted in 2003, when the department was directed to collect better data on the subject. They have since started a national survey in partnership with the Department of Homeland Security and numerous other partners. Data collection is contracted to the RAND Corporation and Market Strategies, Inc.

The sample consists of 35,600 companies stratified by industry and company size, and drawn from the Dun & Bradstreet database. Only companies with two or more employees are sampled, which leaves out a large number of “mom and pop” operations.

The 2005 survey yielded 8,079 responses, and while the 23 percent response rate is low, the responses cover 36 economic sectors. Rantala also noted that this is a non-mandatory survey on a sensitive topic that, in effect, asks CEOs to report on failures, and how much they have cost their companies. And even this small sample is 10 times larger than anything else on this topic.

The survey provides the most comprehensive data available on topics including:

  • The nature of computer security incidents
  • Prevalence of incidents by industry and type of incident
  • Monetary losses
  • Downtime
  • Types of offenders
  • Reporting of incidents to authorities
  • Vulnerabilities leading to breaches

The survey classifies cybercrime into categories including cyber attacks, cyber theft and “other computer security incidents.” Of the 7,636 respondent companies that have computer systems (some of the 8,079 respondents have none), 67 percent detected at least one type of incident during 2005. Of these incidents, 58 percent were cyber attacks, 11 percent cyber theft and 24 percent “other.” Asked if victimized companies might be more likely (or less likely) to respond, Rantala commented that it could go either way, but that there is no way to know for sure.

Rantala described some of the survey’s findings. For example, cyber theft incidents are far fewer in number than cyber attacks, but they involve larger monetary losses, since attacks are typically not financially motivated. Cyber attacks, however, cause much more downtime, which is less costly because some productivity is often salvaged even when systems are down. Little is known about who commits cyber attacks, but companies often report that they have a good idea of who is committing cyber theft: attacks tend to come from outside, while theft comes from insiders. Much cybercrime is not reported to authorities, as many companies see little gain in doing so. This finding suggests that official statistics on cybercrime reflect just the tip of the iceberg.

When asked whether current mandatory business censuses include anything on cybercrime or preventive steps, Rantala said they do not, but that it is a good question. She also noted that they have considered the ACS as a possible vehicle for additional information on cybercrime.

Rantala wrapped up with a look at their future plans, which call for a scaled down questionnaire, surveying only a sample of industries each year (not all 36), and exploring mandatory reporting requirements.

The 2008 National Household Travel Survey.

Heather Contrino. Federal Highway Administration, DOT.

Contrino described the National Household Travel Survey (NHTS) as the only national source of data on the travel demand and travel behavior of the American public. Dating back to 1969, the NHTS measures travel by US households for all modes and purposes. The survey has a wide range of users and supports both national and local policy work.

The survey uses a national random-digit-dial (RDD) sample with computer-assisted telephone interviewing (CATI), and the population of interest is households with (landline) telephones. The minimum state sample is 250 households, and there is a cell-phone-only national sample. There is also an “Add-On” survey with specific questionnaire content. Respondents answer a number of questions and fill out a travel diary for a single-day travel period. For that day, they report all trips by all modes for all household members age five and above. Each household reports travel for only one day, but the survey covers all days of the year.

Content includes basic information on households and household members, as well as geography, vehicles and trips. Some of the issues examined include congestion, safety, energy & environment, and alternative modes (such as transit, walking and biking). They have also looked at the impact of Internet shopping – not so much because of concern that people are no longer traveling to stores (they are), but because of “all those delivery trucks in our neighborhoods.”

The NHTS has two components. First is the National Study, which has a sample of about 25,000 households, and serves the Department of Transportation, Congress, and the administration. It is also a resource for states and metropolitan planning organizations (MPOs). Second is the Add-On Program. Started in 1990, this program allows states and MPOs to purchase additional samples for their area. Participants come from many states, and the program has a sample of about 125,000.

Contrino described some of the funding challenges they face, and noted that the popularity of the Add-On Program has helped them get funding for the National Study. The Add-On Program is a win-win – giving FHWA leadership in household travel survey data, and a bigger survey with more stakeholders. Participants get quality data consistent across the nation, and partnership status in a high profile program. Participants also get guidance and user support.

Add-On Program rules allow each participant up to five additional questions (Contrino commented that thankfully, not all do this). They can also do their own sampling plans within the constraints of the overall methodology. The level of involvement is left up to the Add-On participants.

Data collection for the 2008 NHTS began in March 2008, and about 56,000 households have been recruited to date. A weighted interim data file will be developed by January 2009, and a final 2008 data file is due in summer or fall 2009.

Future design goals include improving coverage of cell phones and non-contacts, better institutionalizing the program within DOT, maintaining the Add-On Program, serving the user community, and reporting key measures annually.

Statistical Transition Work for the Department of Commerce

David McMillen. Presidential Transition Team

David McMillen joined the meeting after lunch to briefly describe the work he and Terri Ann Lowenthal are doing with the team guiding the new administration’s transition with respect to the Census Bureau, ESA, and BEA. As McMillen described it, their function is to provide the incoming administration with something concise to look at, describing the most important things it needs to know about the agencies in question. They are to identify issues and describe options for addressing them; they are not to make recommendations.

When asked if they are identifying persons who could serve as the next Census Bureau Director, McMillen said they can describe the types of persons who might make a good Director, and can identify lists of persons who fit the description, but they are not empowered to recommend individuals or approach anyone about the position.

Asked if he is still a federal employee while serving in this role, McMillen explained that he is on assignment and using his home email for this work.

Plans and Preview of the First Release of the American Community Survey Multiyear Estimates.

Susan Schechter. U.S. Census Bureau.
Douglas Hillmer. U.S. Census Bureau.
Alfredo Navarro. U.S. Census Bureau.

Susan Schechter noted that we were just days from the December 9 release of the first ACS multi-year estimates. This is a major milestone that provides data for more than 13,000 areas (twice the number for 1-year estimates) which include about 95 percent of the U.S. population. Schechter suggested that the media likely will focus on communities with populations in the 20,000 to 64,999 range – for which ACS data are being reported for the first time. She also offered a reminder that, because they reflect data collected from 2005 through 2007, users cannot expect the 3-year estimates to reflect the impact of the recent economic downturn.

Alfredo Navarro recalled some ACS basics that are important to understanding the nature of the multi-year estimates. The ACS is not longitudinal: it has an independent sample every year, and the sample is based on housing units, without regard to who the occupants are. Navarro explained that the 1-year estimates reflect data collected over 12 months, with all months given equal weight. The multi-year estimates are an extension of this logic, pooling 36 or 60 months of data in the same way the 1-year estimates pool 12 months. The multi-year estimates are averages of 36 or 60 months of data, not averages of 1-year estimates.

However, while the 1-year estimates are controlled to the Census Bureau’s county population estimates for July of the relevant year, the multi-year estimates are controlled to the average of the population estimates for the relevant years. For example, the 3-year estimates spanning 2005-2007 are controlled to the average of the most recent county population estimates for 2005, 2006, and 2007.
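The averaging of the population controls can be illustrated with a minimal sketch (the county figures below are hypothetical, not actual Census Bureau estimates):

```python
# Sketch of the population-control step, with hypothetical county estimates.
# A 1-year estimate is controlled to the July population estimate for that
# year; a 3-year estimate is controlled to the average of the three years.
county_pop = {2005: 101_200, 2006: 103_500, 2007: 105_100}  # hypothetical

control_1yr_2007 = county_pop[2007]
control_3yr_2005_2007 = sum(county_pop.values()) / len(county_pop)

print(control_1yr_2007)                 # 105100
print(round(control_3yr_2005_2007, 1))  # 103266.7
```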

Some geographic definitions change over time, so Navarro explained that the multi-year estimates use definitions effective at the end of the estimation period. For example, a 5-year estimate (reflecting 2005-2009) for a city that annexed several blocks in 2007 would reflect the city including the annexed blocks. There is no attempt to “average” the geography over the multi-year period.

Inflation adjustments are another element to understand, as dollar valued items are inflation adjusted (based on the national CPI) to the most recent year of the estimation period. For example, income data from 2005 through 2007 are adjusted to 2007 constant dollar values for the reporting of 3-year averages for the period 2005-2007.
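As a rough illustration of the adjustment arithmetic (the CPI values below are approximate annual CPI-U figures, used here only to show the calculation):

```python
# Adjusting a reported dollar amount to the most recent year of the period.
# Each amount is multiplied by CPI(final year) / CPI(collection year).
cpi = {2005: 195.3, 2006: 201.6, 2007: 207.3}  # approximate annual CPI-U

def to_2007_dollars(amount, year):
    """Express an amount collected in `year` in 2007 constant dollars."""
    return amount * cpi[2007] / cpi[year]

# An income of $50,000 reported in 2005 enters the 2005-2007 average as:
print(round(to_2007_dollars(50_000, 2005)))  # 53072
```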

The presence of 1-year, 3-year, and 5-year estimates for some areas will require users to determine which estimates are best suited for their purposes. Navarro noted that the answer depends on the application, and involves a tradeoff between reliability and currency. One year estimates have lower reliability because of the smaller sample (just one year of data), but more currency because all data are recent. Multi-year estimates have greater reliability because of the larger sample, but less currency because some of the responses are several years old.

Navarro also described the issue of overlapping data in the annual series of multi-year estimates. Three-year estimates released this year covering 2005-2007 will be followed next year by estimates covering 2006-2008. Both sets include data collected in 2006 and 2007, so the difference between them is not a true reflection of change. The Census Bureau recommends that change be measured with ACS estimates that have no overlapping data.
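A toy example (with annual values invented for illustration) shows why differencing overlapping periods understates change:

```python
# Overlapping 3-year averages share two of their three years of data,
# so their difference reflects only the one non-shared year.
annual = {2005: 10.0, 2006: 12.0, 2007: 14.0,
          2008: 16.0, 2009: 18.0, 2010: 20.0}  # hypothetical annual values

def avg(years):
    return sum(annual[y] for y in years) / len(years)

overlap_diff = avg(range(2006, 2009)) - avg(range(2005, 2008))  # shares 2006-2007
clean_diff = avg(range(2008, 2011)) - avg(range(2005, 2008))    # no shared years

print(overlap_diff)  # 2.0, muted by the shared 2006-2007 data
print(clean_diff)    # 6.0
```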

Navarro concluded by asking that we keep in mind that the multi-year estimates are a new product presenting new choices and new concepts, and he and Susan recommended that users refer to the ACS handbooks and train-the-trainer materials now on the Census Bureau’s website.

American Community Survey Research and Evaluation Program.

Tony Tersine. U.S. Census Bureau.
Todd Hughes. U.S. Census Bureau.
Jennifer Tancreto. U.S. Census Bureau.

Tony Tersine described the ACS Research and Evaluation Program as a set of ongoing projects to improve the ACS. The program includes survey methodology and analytical research, as well as the ACS Methods Panel.

Tersine summarized the status of numerous projects for 2008. Projects completed (with reports issued) include tests of the new field-of-degree question, respondent characteristics by mode, the quality of population estimates, and the use of minor civil divisions as design areas. Projects completed (with reports pending) include non-response weighting adjustments, alternate population controls, disclosure avoidance for group quarters, the usability of American FactFinder, the effects of data quality filtering, and the evaluation of the 2008 tenure question.

Looking to 2009, Tersine said they have a list of about 50 projects. The list is still being prioritized, but the final list is expected to include key projects on the following:

  • Messaging to clarify ACS vs. the 2010 census (for those who will receive both)
  • Matching administrative records to evaluate the health insurance question
  • Disclosure avoidance for 5-year estimates
  • Small area data user preferences
  • 5-year group quarters estimates
  • Sampling rates for small governments
  • The use of subcounty controls
  • Comparing the quality of 5-year data vs. long form estimates

Todd Hughes described the ACS Language Team Projects. The ACS questionnaire is in English only, but language assistance is available in numerous languages and for all response modes. Spanish language forms can be mailed on request. Hughes described a 2007 project to expand the languages for which materials are available for personal visit interviews, and research to test the success of that effort. Projects for 2009 look to create additional respondent materials in Spanish, Chinese, Russian, Korean, and Vietnamese; cognitive testing of a Spanish automated instrument; cognitive testing of a Spanish language questionnaire; and “multi-mode” language guides – for use with mail respondents and as a study guide for bilingual interviewers.

Jennifer Tancreto described the ACS Methods Panel as a vehicle for testing changes to the ACS, with the goals of improving existing questions, developing new questions, and testing new methods. There was no Methods Panel funding for 2008, so the Census Bureau suspended two tests as well as plans for future tests. However, funding is restored for 2009, so work will resume. Projects include a test of the effect of a brochure (available in five languages) on response rates among non-English-speaking households, a dedicated toll-free number for non-English languages, and a test of an additional mailing to non-respondents who cannot be included in telephone follow-up. Work in the planning stage includes a test of Internet response options for the ACS, and a content test in which access to the Internet will be one of the items tested.

Concerns from COPAFS Constituencies.

No issues were raised, and the meeting was adjourned.

The Council of Professional Associations on Federal Statistics

20 F Street, NW, Suite 700, Washington, DC 20001 | www.copafs.org


©2010 The Council of Professional Associations on Federal Statistics. All rights reserved.