DECEMBER 2, 2011 COPAFS QUARTERLY MEETING MINUTES
COPAFS chair Felice Levine announced that Ed Spar will be stepping down as COPAFS executive director at the end of 2012. Spar had informed the Board over a year ago, and the process for identifying his replacement is well underway, with a search firm already under contract. Levine remarked on the organization’s good fortune in having Ed’s leadership over the years, and led the meeting in a hearty round of applause.
With Levine needing to depart for an appointment, past chair Judie Mopsik announced the slate of Board nominees for the coming year. Given a preference for Board stability during the executive director search, and a lack of volunteers for Board positions, all current eligible Board members agreed to serve for the coming year. The 2012 nominees included:
Chair, Felice Levine
Vice Chair, Maurine Haver
Secretary, Ken Hodges
Treasurer, Seth Grimes
Past Chair, Judie Mopsik
At large, Ralph Rector
At large, Linda Jacobsen
At large, Bob Parker
At large, Chet Bowie.
A motion to approve the 2012 Board was made and seconded, and the 2012 Board was approved.
Mopsik added that if anyone at the meeting is interested in the executive director position (or knows someone who might be interested), they should contact the Board so they can be put in contact with the search firm.
Executive Director’s Report. Ed Spar
Spar thanked the attendees for the expression of appreciation, but went right to the budget situation, which he described as “grim to worse.” Pretty much all agencies are looking at numbers that are worse than the previous year’s. Especially hard-hit agencies include the Census Bureau, Bureau of Economic Analysis, and Bureau of Justice Statistics – although there was comment that there might be some additional funding for BJS in the works.
Meeting dates for 2012 are March 16, June 1, September 14, and December 7.
A Review of Plans for the National Center for Education Statistics
Marilyn Seastrom, National Center for Education Statistics
Marilyn Seastrom described NCES as the primary statistical agency in the Department of Education, with a mission to collect and analyze education information in a manner that is timely, objective, and free of political or other bias. NCES data relate to the condition and progress of education at the preschool, elementary, secondary, postsecondary and adult levels in the US and other nations.
The NCES budget has been just under $240 million in recent years. Current staff includes about 120 direct hires (over half being statisticians), 100 immediate contract staff, and approximately 8,000 contractor staff for data collections. Because so much of the staff is “contract,” NCES has an unusually high ratio of funding to staff.
Seastrom presented a chart identifying the agency’s many divisions and programs – with major divisions devoted to assessment; early childhood, international, and crosscutting studies; elementary/secondary and libraries studies; and postsecondary studies.
At the Commissioner’s Office level, recent priorities include the State Longitudinal Data System Grant Program – a series of congressionally mandated grants supporting data systems spanning the early childhood, K-12, and postsecondary/workforce levels. Another priority is the department’s privacy initiative, including technical briefs to promote conversation on best practices for data security and privacy.
A key program in the Assessment Division is the National Assessment of Educational Progress (NAEP) – an on-going nationwide assessment of what American students know and can do in various subject areas. Seastrom remarked that 2011 has been the busiest year in NAEP history, with notable projects including a 2009 science report card, 2009 high school transcript study, and 2010 report cards on civics, history, geography, reading and mathematics. A State Mapping Report is using NAEP as a yardstick to compare proficiency standards across states. The report finds wide variation among state proficiency standards, with most standards being at or below the NAEP basic level, but some states moving toward more rigorous standards.
Seastrom presented on an impressive list of NCES programs and surveys, including a GeoMapping application that currently reports data at the school district level, but is being enhanced to produce boundaries for specific public schools – thus supporting the analysis of ACS and other data at the individual school level. Other programs include the Schools and Staffing Survey, the College Affordability and Transparency Center (providing information on tuition and net prices at postsecondary institutions), a Baccalaureate and Beyond study, a Program for the International Assessment of Adult Competencies, and the National Household Education Surveys (NHES).
Seastrom concluded by describing improvements being applied to the NHES, NAEP and other surveys. For example, the NHES is shifting from land-line phone to an address-based frame to improve representativeness, and collection instruments are being simplified to address concerns over literacy and accessibility. Seastrom also commented on differences between data systems in health and education. At NCHS, they paid for vital statistics, and if states did not provide data of sufficient quality, they were not paid. But in education, everything is cooperative. They can require states to participate in programs, but have no control over the quality of the data provided.
An Overview of American Demographic History
Campbell Gibson, American Demographic History Chartbook
Census Bureau retiree and COPAFS regular Campbell Gibson described the website that he and some colleagues are putting together devoted to the demographic history of the United States as shown by data from the census. As Gibson put it, the census is a great source of historical data and trends – many going back to the first census in 1790.
For fun, Gibson had distributed a quiz with questions on US demographic history – with some answers provided in the presentation, but others available in data on the website, www.demographicchartbook.com. Some older census numbers are not readily available, and Gibson acknowledged “Historical Statistics of the United States: Millennial Edition” as the source of many of their data points.
Gibson proceeded with a series of charts illustrating interesting historical facts and patterns. For example, dividing the US into three divisions (North, South, and West), the North still has the largest population, but the South continues to gain, and might soon surpass the North. A table on state population totals reveals that only three states have had the largest share of US population – Virginia (1790 – 1810), followed by New York (1820 – 1960), and now California. In terms of population rank, only three states made the top 10 in both 1790 and 2000 (PA, NY, and NJ). The list of the 10 largest cities in 2000 would be familiar to many, but the top 10 list for 1790 includes Charleston, SC, Baltimore, MD, N. Liberties, PA, Salem, MA, and Newport, RI.
Data on urban, rural and farm populations show the rapid growth in urban population – from initially very small numbers – but also that this growth has slowed in recent decades. Notably, total rural population has held steady numerically, while the rural-farm population has dropped to very small numbers.
The data tracing change in metropolitan population are split, showing numbers based on the metropolitan district concept through 1940, and the metropolitan area concept thereafter. This is a reminder that some data items and concepts cannot be traced cleanly back to 1790, and indeed, many of the tables track data only as far back as possible. Data on race and Hispanic origin are a good example. The categories White, Black and Other are traced back to 1790, but Hispanic and not-Hispanic are traced only to 1850. The Other race category was first acknowledged in the 1860 census, and jumped dramatically when the Hispanic question was added to the short form in 1980. As Gibson noted, some improvisation is required in tracing some trends to the distant past. And many recent trends, previously based on census long-form data, now must transition to estimates from the American Community Survey.
Gibson continued with a review of trends in characteristics including age/sex, average household size, home ownership, age at first marriage, fertility, and educational attainment. He also presented charts showing trends in labor force participation, internal migration, the foreign born population, national origin, and language. For example, among the White foreign born population in 1910 (the earliest year cited), the top five languages were English, German, Italian, Yiddish and Polish. Spanish ranked number nine, with a small number, just ahead of Hungarian.
Gibson concluded by suggesting that census data can make valuable contributions to the teaching of American history, and expressed hope that the website might encourage greater use of census data in the history classroom.
Rural Statistical Areas: A Rural-Centric Approach to Defining Geographic Areas
Michael Ratcliffe, US Census Bureau
Michael Ratcliffe described the Rural Statistical Area (RSA) as a proposed concept, and said the Census Bureau is trying to decide whether to go forward with it, and in what manner. The RSA concept is a response to dissatisfaction (in some circles) with the way urban and metropolitan definitions treat rural and non-metropolitan as residuals, combined with frustration with the limited geographic detail for which 1-year ACS data are reported. A state could have a rural area with a population of 400,000, but no way to subdivide it into smaller areas meeting the 65,000 population threshold for the reporting of 1-year ACS data.
The RSA concept was developed as part of a three-year joint research project between the Census Bureau and the State Data Centers (SDCs). The goal was to use rural counties (and potentially county subdivisions, and/or census tracts) as building blocks to create rural or predominantly rural sub-state areas of 65,000 or more people to enhance the analysis of 1-year ACS data. “Urban” would be the residual in this scheme.
The RSA delineation process started with the classification of counties based on the US Department of Agriculture’s Urban Influence (UI) codes – which distinguish non-metro counties according to factors such as adjacency to metro areas (large or small) and presence of an urban core. Counties of 65,000 or more population were designated as “standalone” RSAs – and are, in effect, the residual urban areas. The next step was to aggregate counties in a meaningful way. Commuting was not a good basis, as there is little commuting among these counties. Instead, the approach created a “lattice net” or “aggregation net” that established initial groupings based on the UI codes and highway networks. Non-standalone counties were combined until the population threshold of 65,000 was reached. The initial groupings were later modified in an interactive process with the SDCs.
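The core aggregation step described above can be illustrated with a minimal sketch. This is not Census Bureau code; the county names and populations are hypothetical, and the "lattice net" ordering is simply assumed to be the order of the input list.

```python
# Illustrative sketch (hypothetical data, not Census Bureau code): combine
# non-standalone counties, taken in lattice-net order, until each grouping
# reaches the 65,000-person threshold for 1-year ACS reporting.
ACS_THRESHOLD = 65_000

def aggregate_counties(counties):
    """counties: list of (name, population) pairs, ordered along the lattice net.
    Returns a list of (county_names, total_population) groupings."""
    groups, current, total = [], [], 0
    for name, pop in counties:
        current.append(name)
        total += pop
        if total >= ACS_THRESHOLD:
            groups.append((current, total))
            current, total = [], 0
    if current:
        # Leftover counties below the threshold join the last grouping,
        # so every group meets the ACS reporting minimum.
        if groups:
            names, pop = groups[-1]
            groups[-1] = (names + current, pop + total)
        else:
            groups.append((current, total))
    return groups
```

A greedy pass like this cannot handle the complications Ratcliffe noted (non-contiguous pieces, islands like Hawaii, counties surrounded by standalone RSAs), which is why the initial groupings required later modification with the SDCs.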
A number of imperfections were encountered in this first effort. For example, where does one place a rural county that is surrounded by “standalone” RSAs? And while the aggregation net works well in states like North Dakota and Oregon, it does not work well for Hawaii, a series of islands. Among the questions that arose during the delineation process were: Should RSAs be contiguous, or should priority be given to similarity of characteristics? Should RSAs cross state lines? This might make sense for many purposes, but predictably, the State Data Centers had little interest in areas crossing state lines. Also, can a variety of geographic building blocks be used for RSAs, or would it be better to use counties only?
Looking to “next steps,” Ratcliffe explained that the SDC Steering Committee has asked the Census Bureau to adopt RSAs as a standard tabulation geography. The Census Bureau is agreeable, but wants to review the concept further, and issue a Federal Register notice seeking comments. The Census Bureau also points out that the areas would have to be called something other than “Rural,” because some areas in the proposed approach would be clearly urban, or predominantly urban.
Research on Measuring Same Sex Couples
Nancy Bates, US Census Bureau
Martin O’Connell, US Census Bureau
Nancy Bates presented on Measurement Error in Relationship and Marital Status Questions. The problem is that as society changes, measurement must change, and the Census Bureau faces challenges as societal and legal definitions of marriage have changed, and new terms – such as same-sex husbands/wives, domestic partnerships and civil unions – have become widespread. Complicating matters are state-to-state variations in the recognition of same-sex marriage, and the lack of any federal-level recognition.
To illustrate the challenge, Bates noted that the 2008 ACS recorded 150,000 same-sex spousal couples at a time when there were only about 32,000 legally married same-sex couples. In researching the discrepancy, one wonders if it traces to gays and lesbians selecting “husband/wife” even if they are not legally married – maybe because “husband/wife” is the first relationship response option listed. Or might it trace to heterosexual couples making errors in reporting gender?
An Interagency Workgroup on Measuring Relationships in Federal Household Surveys has conducted focus groups, and found that respondents have interpreted questions in a manner consistent with legal status. In other words, very few selected “married” or “husband/wife” if they were not legally married. The groups identified the need for response options reflecting new legal unions; however, most same-sex couples were able to make a selection from the relationship categories provided. Marital status was a bigger problem, as there is no place to indicate a same-sex committed relationship.
The Workgroup also conducted cognitive interviews, in which they observed respondents answering different versions of the Relationship and Marital Status questions, and a question on Cohabitation. The interviews found that most responses to Relationship aligned with true legal status, but that category ordering could induce response errors. The recommendation called for further testing of an alternative version of the question, which more explicitly specifies same-sex and opposite-sex. This version seems to act as a consistency check to reduce gender misreporting, but might have a negative impact on unit response. Responses to Marital Status also were consistent with true legal status, but there was widespread misunderstanding of the terms “domestic partnership” and “civil union,” with civil union often thought to be the same as “common law marriage.” The recommendation on the Cohabitation question was to keep it when space allows.
Next steps involve further testing, especially with different modes, and a re-interview component.
Martin O’Connell described data on same-sex couple households – starting with a comparison of the totals reported by the 2000 census, ACS, and the 2010 census. The totals fluctuate, with 2000 census at about 250,000 same-sex married couples, ACS just under 400,000 through 2007, then dropping to under 150,000 due to forms and processing changes. The 2010 census count jumped back up to 350,000 – a total known to be unrealistically high. A revised version of the 2010 census count brings the number back down to about 130,000.
One might expect that the excess of same-sex married couples would trace to same-sex couples reporting as married when that status is not legally recognized. However, the Census Bureau noticed that the excess in the original 2010 number was most pronounced in areas with high levels of non-response follow-up (NRFU). As O’Connell explained it, with the late switch to paper-based NRFU (after the handheld plan was dropped), an old NRFU form was used that had a presentation of the male/female response option that was prone to misreporting. In other words, accidental errors in the reporting of sex seem to have created much of the excess in same-sex couples. The “old” NRFU form was the one from the 2000 census. O’Connell noted that the same problem existed with the 2000 census, but that it was not recognized.
When the excess of same-sex couples was noticed, it was too late to stop the presses on SF1, but the Census Bureau investigated the impact of sex misreporting by comparing reported sex with the likelihood of the name being of that sex. Names were scored according to how many times out of 1,000 they were male. Names scoring 0-50 were considered likely female, while those with scores of 950-1,000 were considered likely male. Based on these likelihoods, the Census Bureau identified “false positives” – couples that very likely were “same-sex” only because of a misreport on male/female. Almost one-third of same-sex couples were found to be probably opposite-sex, and with these households reclassified, the same-sex totals dropped to levels consistent with recent ACS estimates – and are reported in the “preferred” version. As O’Connell noted, the percent of erroneous responses was small, but on a base of 60 million opposite-sex households, it results in a sizeable percentage error in the number of same-sex couple households.
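The name-based check described above can be sketched in a few lines. This is an illustrative approximation, not the Bureau's actual procedure or name index; the scores in `NAME_SCORES` are assumed values for demonstration only.

```python
# Illustrative sketch (hypothetical scores, not the Census Bureau's name index).
# A name's score is how many times per 1,000 occurrences it belongs to a male:
# 0-50 implies likely female, 950-1,000 implies likely male.
NAME_SCORES = {"James": 996, "Robert": 994, "Mary": 4, "Linda": 3}  # assumed values

def likely_sex(name):
    """Infer probable sex from the name score, or None if inconclusive."""
    score = NAME_SCORES.get(name)
    if score is None:
        return None          # name not scored; no evidence either way
    if score <= 50:
        return "F"
    if score >= 950:
        return "M"
    return None              # ambiguous name (score between 51 and 949)

def probable_false_positive(name1, name2):
    """For a couple recorded as same-sex: True if the name evidence suggests
    they are probably opposite-sex, i.e. one reported sex was a misreport."""
    inferred = (likely_sex(name1), likely_sex(name2))
    return None not in inferred and inferred[0] != inferred[1]
```

Couples flagged this way would be reclassified as opposite-sex, which is how a small per-response error rate on a 60-million-household base was backed out of the same-sex totals.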
Concerns From COPAFS Constituencies
No concerns were raised, and the meeting was adjourned.