Lessons learned from Vanderbilt’s study of Tennessee Pre-K

October 2, 2015

Newly released findings from Vanderbilt’s rigorous study of Tennessee’s state-funded pre-K program are a needed tonic for overly optimistic views. No study stands alone, but in the context of the larger literature the Tennessee study is a clear warning against complacency, wishful thinking, and easy promises. Much hard work is required if high quality preschool programs are to be the norm rather than the exception, and substantive long-term gains will not be produced if programs are not overwhelmingly good to excellent. However, the Vanderbilt study also leaves researchers with a number of puzzles and a warning of their own: they, too, must not become complacent, and they have some hard work ahead.

Let’s review the study’s findings regarding child outcomes. Moderate advantages in literacy and math achievement were found for the pre-K group at the end of the pre-K year, and on teacher ratings of behavior at the beginning of kindergarten. However, by the end of kindergarten these were no longer evident, and on one measure the no-pre-K group had already surpassed those who had attended pre-K. The pre-K children were less likely to have been retained in kindergarten (4% v. 6%) but were much more likely to receive special education services in kindergarten than the no-pre-K group (12% v. 6%). The pre-K group’s advantage in grade repetition did not continue, but it did continue to have a higher rate of special education services (14% v. 9%) in first grade.

By the end of second grade, the no-pre-K group was significantly ahead of the pre-K group in literacy and math achievement. The most recent report shows essentially the same results, though fewer are statistically significant. Teacher ratings of behavior essentially show no differences between groups in grades 2 and 3. Oddly, special education is not even mentioned in the third grade report. This is puzzling since prior reports emphasized that it would be important to determine whether the higher rate of special education services for the pre-K group persisted. It is also odd that no results are reported for grade retention.

If we are to really understand the Tennessee results, we need to know more than simply what the outcomes were. We need information on the quality of the pre-K program, subsequent educational experiences, and the study itself. It has been widely noted that Tennessee’s program met 9 of 10 benchmarks for quality standards in our annual State of Preschool report, but this should not be taken as evidence that Tennessee had a high quality program. Anyone who has read the State of Preschool knows better. The report (p. 10) specifies that the benchmarks “are not, in themselves, guarantees of quality.” Arguably, some of them are quite low (e.g., hours of professional development), even though many states do not meet them. Moreover, they are primarily indicators of the resources available to programs, not whether these resources are used well. In addition to high standards, effective pre-K programs require adequate funding and the continuous improvement of strong practices.

The State of Preschool reported that Tennessee’s state funding was nearly $2,300 per child short of the amount needed to implement the benchmarks. More importantly, the Vanderbilt researchers found that only 15% of the classrooms were rated good or better on the ECERS-R. They also found that only 9% of time was spent in small groups; the vast majority was spent in transitions, meals, and whole-group activities. This contrasts sharply with the high quality and focus on intentional teaching in small groups and one-on-one in programs found to have long-term gains (Camilli et al.; Barnett, 2011). The Tennessee program was evaluated just after a major expansion, and it is possible that quality was lowered as a result.

Less seems to be known about subsequent educational experiences. Tennessee is among the lowest-ranking states for K-12 expenditures (see Education Week’s Quality Counts), which is suggestive but far from definitive regarding experiences in K-3. We can speculate that kindergarten and first grade catch up those who do not go to pre-K, perhaps at the expense of those who did, and fail to build on early advantages. However, these are hypotheses that need rigorous investigation. Vanderbilt did find that the pre-K group was more likely to receive special education. Perhaps this lowered expectations for achievement and the level of instruction for enough of the pre-K group to tilt results in favor of the no-pre-K group. Such an iatrogenic effect of pre-K would be unprecedented, but it is not impossible. There are, however, other potential explanations.

Much has been made of this study being a randomized trial, but that point is not as important as might be thought. One reason is that across the whole literature, randomized trials do not yield findings that are particularly different from strong quasi-experimental studies. The Head Start National Impact Study and rigorous evaluations of Head Start nationally using ECLS-K yield nearly identical estimates of impacts in the first years of school. Another reason is that the new Vanderbilt study has more in common with rigorous quasi-experimental studies than “gold standard” randomized trials. Two waves were randomly assigned. In the first wave, just 46% of families assigned to pre-K and 32% assigned to the control group agreed to be in the study. In the second wave, the researchers were able to increase these figures to 74% and 68%, respectively. These low participation rates, which differ between the pre-K and no-pre-K groups, raise the same selection-bias threat faced by quasi-experimental studies. And uncorrected selection bias is the simplest explanation for both the higher special education rate for the pre-K group and the very small later achievement advantage of the no-pre-K group. However, I do not think the bias could have been nearly strong enough to overturn large, persistent gains for the pre-K group.
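
To see how differential consent can masquerade as a program effect, consider a minimal simulation sketch. All parameters here are invented for illustration; this is not a model of the Tennessee data. If consenting is tied to family advantage and the control group’s consent rate is lower, the consenting control families will be more positively selected, producing an apparent “effect” even when the true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # hypothetical families per randomized arm

# Latent family advantage (standardized); assume it drives both
# consent and later achievement. All parameters are invented.
advantage_prek = rng.normal(0, 1, n)
advantage_ctrl = rng.normal(0, 1, n)

def consent_prob(adv, base):
    # Logistic link chosen only for illustration.
    return 1 / (1 + np.exp(-(base + 0.5 * adv)))

# Base rates roughed in to mimic wave-1 consent (about 46% vs. 32%).
in_study_prek = rng.random(n) < consent_prob(advantage_prek, -0.16)
in_study_ctrl = rng.random(n) < consent_prob(advantage_ctrl, -0.75)

# The true program effect is set to zero, so any gap between the
# consenting samples below is pure selection bias.
gap = (advantage_prek[in_study_prek].mean()
       - advantage_ctrl[in_study_ctrl].mean())
print(f"pre-K consent rate:   {in_study_prek.mean():.0%}")
print(f"control consent rate: {in_study_ctrl.mean():.0%}")
print(f"apparent 'effect' with zero true effect: {gap:+.2f} SD")
```

Because the control group’s lower consent rate filters harder on advantage, the gap comes out negative: the no-pre-K group looks slightly better even though nothing happened. The point is the direction of the mechanism, not the magnitude.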

Even a “perfect” randomized trial has weaknesses. Compensatory rivalry has long been recognized as a threat to the validity of randomized trials. In Tennessee one group got pre-K; the other sought it but was refused. It appears that some went away angry. Families who agreed to stay in the study could have worked very hard to help their children catch up and eventually surpass their peers who had the advantage of pre-K. Alternatively, families who received the advantage of pre-K could have relaxed their efforts to support their children’s learning. Similar behavior has been suggested by other studies, including a preschool randomized trial I conducted years ago for children with language delays. Such behaviors could also occur without a randomized trial, but that seems less likely.

Randomized trials of individual children also create artificial situations for subsequent schooling. If only some eligible children receive the program, do kindergarten teachers spend more time helping those who did not attend catch up, “neglecting” those who had preschool? Would kindergarten teachers change their practices to build on pre-K if the vast majority of their children had attended pre-K, and not just some? Perhaps they would change only with support and professional development.

Clearly, the Vanderbilt study has given the early childhood field much to think about. I am reminded of Don Campbell’s admonition not to evaluate a program until it is proud. However, programs may also be in the habit of becoming proud a bit too easily. We have a great deal of hard work in front of us to produce more programs that might be expected to produce long-term results and are therefore worth evaluating. Researchers also would do well to design studies that would illuminate the features of subsequent education that best build upon gains from preschool.

What we should not do is despair of progress. The media tend to focus on just the latest study, especially if it seems to give bad news. They present a distorted view of the world. Early childhood has a large evidence base that is on balance more positive than negative. There is a consensus that programs can be effective and that high quality is a key to success. Research does help us move forward. Head Start responded to the National Impact Study with reforms that produced major improvements. Some states and cities have developed even stronger programs. Tennessee can learn much from them, and doing so could turn its program around. If it integrates change with evaluation in a continuous improvement system, Tennessee’s program could in turn become a model for others over the next 5 to 10 years.

–Steve Barnett, Director, NIEER

When Research and Emotions Collide

May 20, 2015

Certain practices evoke strong reactions among early educators. Kindergarten “red-shirting” (Katz, 2000), academic “hothousing” (Hills, 1987), and developmentally inappropriate practice raise ire, yet they pale in comparison to the issue of retaining children early in their school careers. As an increasing number of states adopt policies supporting, even requiring, retention, emotions on the topic run high among early educators, policymakers, and parents.

Retention has been common practice for many decades, but is it the right way to go? Everyone agrees that a student is well served by possessing the skills necessary to learn and apply new information, yet we recognize that not all children grasp new information and skills at the same level or at the same time. Thus, the debate over the merits and faults of retention persists.

So what does research have to say about retention? Among many in my generation, retention of young children was seen as bad practice and policy, shaped years ago by research conducted by Shepard and Smith (1987) and reinforced by Jimerson (2001) and others. But as a scientist I know research contributes to understanding, and I strive to let emerging research inform my opinion rather than the reverse. So I hit the journals to review the literature, learning the issue is more nuanced than one might imagine.

You can simply ask, “Does retention work?” but the answer may be hidden behind several doors, not all of which lead to the same conclusion. The answer you get depends on the questions you ask, such as:

  • Does the design of the research influence results?
  • What criteria do states and schools use to make retention decisions, and do different criteria yield different research findings?
  • What does research say about the short- and long-term academic and social/emotional/behavioral effects of retention?
  • Does the age or grade at which retention occurs make a difference in student outcomes?
  • Is retention an effective educational strategy for young children below third grade?
  • Does retention affect certain groups of students differently?
  • Are there effective alternatives to retention?

These questions were among those examined by the Southeast Regional Comprehensive Center Early Childhood Community of Practice and CEELO, when early education leaders from several state departments of education were invited to explore retention as an effective education strategy for young children.

I’ll spare you the details of the research shared in this “succinct” blog, but here are a couple of my research-informed takeaways about a practice that affects nearly 450,000 elementary school children annually, a quarter of whom are kindergartners and 60% of whom are boys. Both teacher- and test-based methods for determining retention are associated with short-term academic gains (typically restricted to literacy) that fade, even disappear, over several years. Research shows mixed results on the short-term social/emotional/behavioral impact of retention, while there is evidence of adverse long-term effects, including school drop-out: retained children are 20–30% more likely to drop out of school. The fairness of retention policy has been called into question, fueled by a recent report from the Office for Civil Rights confirming that retention disproportionately affects children of color, those who are low-income, and those with diagnosed learning difficulties, with wide variation in rates across states. Additional research shared with the Community of Practice about retention’s complexities can be found here.

I came away further convinced that the decision to retain a young child, while well-intentioned, is an important, potentially life-changing event, one that should include consideration of multiple factors bearing on its advisability for a particular child. Inflexible policies based on a single point-in-time assessment of a single topic or skill (e.g., literacy) may be politically popular, expedient, and, as some would argue, fair, but the research doesn’t convincingly show that the practice ensures the intended short- and long-term outcomes for all students.

Further, costs associated with retention are typically absent from policy discussions. We know significant numbers of children are retained in the early years, including kindergarten (Table 1), and average K-12 student costs hover around $12,000 per year. The cost of retention, and the lack of comparison to less costly, effective alternatives such as remediation or peer tutoring, should cause staunch proponents to rethink their position. Combined with the long-term costs associated with drop-out, crime, and unemployment, retention makes little cents (or sense) when signs point to supplemental interventions, not sitting through another year in the same grade repeating every subject, as having greater impact.
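
The back-of-the-envelope arithmetic is stark. A minimal sketch using the approximate figures cited above (rough national figures, not a budget estimate):

```python
# Rough annual cost of early-grade retention, using the approximate
# figures cited above; illustrative only, not a budget estimate.
retained_per_year = 450_000   # elementary children retained annually
cost_per_pupil = 12_000       # average K-12 spending per student-year

extra_cost = retained_per_year * cost_per_pupil
print(f"Added cost of one repeated year: ${extra_cost:,}")
# -> Added cost of one repeated year: $5,400,000,000 (about $5.4 billion)
```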

While some encouraging short-term results have been associated with retention, policymakers shouldn’t wave the checkered flag just yet. We would be wise to examine the full body of research evidence, considering both short- and long-term consequences and the critical importance of providing children, parents, and teachers with timely educational and emotional support throughout a student’s career. Layer in the evidence questioning retention as a cost-effective use of resources, and the caution flag should be brought out. When it comes to declaring victory through retention, too much contrary evidence exists and too many important questions remain to allow our emotions to set policy in stone.

Table 1. Percentage of kindergartners retained, by race/ethnicity and state, 2011–12
(AI/AN = American Indian/Alaska Native; NH/PI = Native Hawaiian/Other Pacific Islander; Black = Black/African American; Hispanic = Hispanic/Latino of any race; Two+ = two or more races)

State  All   AI/AN  Asian  NH/PI  Black  Hispanic  Two+  White
US     4%    7%     2%     8%     5%     4%        5%    4%
AL     6%    8%     5%     14%    5%     9%        9%    5%
AK     4%    6%     4%     8%     2%     4%        3%    3%
AZ     3%    5%     2%     7%     4%     3%        3%    3%
AR     12%   11%    13%    14%    26%    13%       11%   8%
CA     3%    9%     2%     5%     5%     3%        4%    4%
CO     2%    5%     2%     4%     2%     2%        3%    2%
CT     5%    12%    3%     16%    8%     8%        8%    3%
DE     3%    5%     2%     0%     4%     4%        3%    2%
DC     3%    33%    2%     0%     4%     4%        3%    1%
FL     5%    9%     3%     4%     7%     5%        7%    4%
GA     6%    4%     3%     11%    5%     7%        8%    5%
HI     12%   21%    7%     13%    11%    14%       12%   13%
ID     2%    3%     3%     3%     1%     3%        1%    1%
IL     2%    2%     1%     2%     2%     1%        3%    2%
IN     5%    5%     3%     0%     6%     6%        6%    4%
IA     2%    11%    2%     3%     3%     4%        3%    2%
KS     2%    4%     2%     0%     2%     3%        2%    2%
KY     4%    8%     3%     5%     2%     5%        5%    4%
LA     4%    3%     2%     0%     5%     4%        4%    4%
ME     4%    5%     4%     14%    6%     5%        5%    4%
MD     2%    0%     2%     27%    3%     4%        2%    2%
MA     3%    5%     3%     8%     5%     5%        7%    2%
MI     7%    12%    5%     7%     6%     9%        11%   6%
MN     2%    7%     1%     11%    4%     3%        2%    2%
MS     8%    10%    7%     5%     8%     14%       1%    8%
MO     3%    5%     2%     6%     4%     4%        4%    3%
MT     4%    6%     0%     6%     4%     6%        4%    4%
NE     4%    9%     2%     19%    3%     4%        4%    3%
NC     5%    9%     3%     5%     6%     5%        6%    4%
ND     5%    8%     14%    27%    13%    10%       3%    4%
NV     2%    3%     1%     2%     4%     2%        1%    2%
NH     3%    0%     1%     0%     5%     5%        0%    3%
NJ     3%    6%     1%     3%     5%     4%        5%    2%
NM     4%    6%     2%     0%     5%     4%        3%    4%
NY     3%    4%     2%     4%     4%     3%        3%    2%
OH     4%    6%     5%     6%     7%     7%        7%    3%
OK     7%    9%     5%     8%     8%     8%        6%    7%
OR     2%    7%     1%     2%     2%     2%        2%    2%
PA     2%    0%     1%     0%     3%     2%        2%    2%
RI     2%    16%    1%     0%     4%     3%        5%    1%
SC     5%    6%     2%     3%     5%     5%        7%    4%
SD     4%    12%    4%     0%     6%     7%        5%    3%
TN     5%    3%     2%     15%    4%     5%        7%    5%
TX     4%    6%     3%     8%     3%     4%        7%    5%
UT     1%    1%     0%     1%     1%     1%        1%    1%
VT     3%    0%     2%     0%     6%     0%        1%    3%
VA     4%    4%     2%     4%     5%     5%        4%    3%
WA     2%    6%     1%     4%     2%     3%        2%    2%
WV     6%    0%     3%     0%     7%     7%        7%    6%
WI     2%    2%     2%     6%     3%     2%        2%    2%
WY     5%    10%    4%     33%    17%    7%        3%    4%

SOURCE: U.S. Department of Education, Office for Civil Rights, Civil Rights Data Collection, 2011–12.

–Jim Squires, Senior Research Fellow

Young immigrants and dual language learners: Participation in pre-K and Kindergarten entry gaps

February 18, 2015

In a recent webinar, NIEER discussed what it means to be Hispanic and a DLL (dual language learner), or Hispanic with immigrant parents. We showed that Hispanic children, DLLs, and children with an immigrant background have lower rates of participation in center-based care and pre-K programs (including Head Start) than White non-Hispanic children. We considered the impacts on enrollment of home language and of varied immigrant backgrounds, which make this group quite heterogeneous. We found that while non-DLL and native-born Hispanics had enrollment rates above 60 percent, much like White children, only about 45-50 percent of DLL and immigrant-background Hispanics were enrolled in center-based care.

[Figure: Pre-K participation of Hispanics in center-based care]

That is, only about one in two DLL or immigrant Hispanic children attends a center-based program. This suggests that language and immigration status play a large part in determining whether children participate.

We then wondered about similarities between these enrollment patterns and kindergarten entry gaps. Using White children as the reference group, it turns out that Hispanic DLLs and Hispanic immigrant children have very large performance gaps in reading, math, and language. These two groups largely drive the overall Hispanic gaps observed at kindergarten entry. What about Hispanic children who are both DLL and of immigrant background? They show very large performance gaps, unlike native-born English-speaking Hispanics, who fare quite well relative to Whites. It appears we are failing this group.

[Figure: Kindergarten gaps for Hispanic students in math, reading, and language]

Patterns are somewhat different when we look at socio-emotional developmental gaps. These do not resemble those for reading, math, and language. On the contrary, while most Hispanics differ little from Whites in terms of approaches to learning, self-control, or problems with externalizing and internalizing behaviors, Hispanic DLL children who are native-born show large gaps across all of these domains except internalizing behaviors.

[Figure: Kindergarten gaps for Hispanic children, social-emotional skills]

Putting this all together, policy makers clearly should focus on increasing access, outreach, and participation in high-quality early education for any and all Hispanic children, but especially for Hispanic DLL children and children whose parents are immigrants. Moreover, policy makers and practitioners both should recognize how diverse Hispanics are as a group, and how the needs of DLL Hispanic children differ depending on their family histories.

Addressing these issues in early care and education begins with obtaining a better understanding of who our children are and whom we are serving (and not serving), including:

  • screening language abilities
  • developing guidelines and standards that address the needs of these groups
  • promoting the proliferation of bilingual programs
  • and planning ways to engage and work effectively with diverse groups of Hispanic children.

How well we do this in the first years of their lives will have important consequences for their developmental pathways and their opportunities, and this will be reflected in our society 15-20 years from now.

–Milagros Nores, PhD, Associate Director of Research

The research says high quality preschool does benefit kids

October 21, 2014

In a response for the Washington Post Answer Sheet, Steve Barnett, director of the National Institute for Early Education Research, deconstructs a new Cato Institute policy brief by David J. Armor, professor emeritus of public policy at George Mason University, who also has a piece on washingtonpost.com arguing his position under the headline “We have no idea if universal preschool actually helps kids.” We do know. It does. Here are some excerpts from the post, which can be read in its entirety here, outlining what the research really says:

First, if one really believes that today’s preschool programs are much less effective than the Perry Preschool and Abecedarian programs because those programs were so much more costly and intensive, and started earlier, then the logical conclusion is that today’s programs should be better funded, more intensive, and start earlier. I would agree. Head Start needs to be put on steroids. New Jersey’s Abbott pre-K model (discussed later) starts at 3 and provides a guide as it has been found to have solid long-term effects on achievement and school success. Given the high rates of return estimated for the Perry and Abecedarian programs, it is economically foolish not to move ahead with stronger programs.

Second, Armor’s claims regarding flaws in the regression discontinuity (RD) studies of pre-K programs in New Jersey, Tulsa, Boston, and elsewhere are purely hypothetical and unsubstantiated. Every research study has limitations and potential weaknesses, including experiments. It is not enough to simply speculate about possible flaws; one must assess how likely they are to matter. (See the extended post for more details.)

Third, the evidence that Armor relies on to argue that Head Start and Tennessee pre-K have no long-term effects is not experimental. It’s akin to the evidence from the Chicago Longitudinal Study and other quasi-experimental studies that he disregards when they find persistent impacts. Bartik points to serious methodological concerns with this research. Even more disconcerting is Armor’s failure to recognize the import of all the evidence he cites from the Tennessee study. Tennessee has both a larger experimental study and a smaller quasi-experimental substudy. The larger experiment finds that pre-K reduces subsequent grade retention, from 8% to 4%. The smaller quasi-experimental substudy Armor cites as proof of fade-out finds a much smaller reduction from 6% to 4%. Armor fails to grasp that this indicates serious downward bias in the quasi-experimental substudy or that both approaches find a large subsequent impact on grade retention, contradicting his claim of fade-out.
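
The arithmetic behind that last point is worth spelling out. A small sketch comparing the two retention findings (rates taken from the passage above):

```python
# Kindergarten retention rates cited above for the Tennessee study.
exp_control, exp_prek = 0.08, 0.04  # larger randomized experiment
qes_control, qes_prek = 0.06, 0.04  # smaller quasi-experimental substudy

def relative_reduction(control, treated):
    """Share of control-group retention avoided by pre-K."""
    return (control - treated) / control

print(f"Experiment:       {relative_reduction(exp_control, exp_prek):.0%} reduction")
print(f"Quasi-experiment: {relative_reduction(qes_control, qes_prek):.0%} reduction")
# -> 50% vs. 33%: the substudy understates the same impact,
#    consistent with downward bias rather than fade-out.
```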

Among the many additional errors in Armor’s review, I address three that I find particularly egregious. First, he miscalculates cost. Second, he misses much of the most rigorous evidence. And third, he misrepresents the New Jersey Abbott pre-K program and its impacts. (See the extended post for more details.)

When a reviewer calls for policy makers to hold off on a policy decision because more research is needed, one might assume that he had considered all the relevant research. However, Armor’s review omits much of the relevant research. (See the extended post for more details.)

Those who want an even more comprehensive assessment of the flaws in Armor’s review can turn to Tim Bartik’s blog post and a paper NIEER released last year, as little of Armor’s argument is new. For a more thorough review of the evidence regarding the benefits of preschool I recommend the NIEER papers and WSIPP papers already cited and a recent review by an array of distinguished researchers in child development policy.

If all the evidence is taken into account, I believe that policy makers from across the political spectrum will come to the conclusion that high-quality pre-K is indeed a sound public investment.

–Steve Barnett, NIEER Director

Is New York City Mayor Bill De Blasio’s method for expanding Pre-K a model for other cities?

September 19, 2014

In this week’s edition of The Weekly Wonk, the weekly online magazine of the New America Foundation, experts were asked: Is New York City Mayor Bill De Blasio’s method for expanding Pre-K a model for other cities? NIEER Director Steve Barnett and Policy Research Coordinator Megan Carolan were among those who weighed in. Their responses can be read below. Please visit the original post here to see all responses.

Steve Barnett, NIEER Director:

Whether NYC offers a good model for other cities to follow in expanding pre-K is something that we will only know after some years.  However, it is not too soon to say that NYC offers one important lesson for other cities.  When adequate funding is available, cities (and states) can expand enrollment quickly on a large scale at high standards.

A key reason is that there is a substantial pool of well-qualified early childhood teachers who do not teach because of the field’s abysmally low financial compensation and poor working conditions. When we offer a decent salary, benefits, and a professional working environment, many more teachers become available. Of course, NYC also put a lot of hard and smart work into finding suitable space and recruiting families to participate. Whether NYC achieves its ultimate goal of offering a high-quality education to every child will not be known for some time, but this will depend on the extent to which NYC has put into place a continuous improvement system to build quality over time.

It would be a mistake to assume that high quality can be achieved at scale from the very beginning, no matter how slow the expansion. Excellence in practice must be developed on the job through peer learning, coaching, and other supports. If NYC successfully puts a continuous improvement system in place and quality steadily improves over the next several years, then it will have much to offer as a model for the rest of the nation.

Megan Carolan, Policy Research Coordinator

When New York City opened the doors to expanded pre-K for thousands of 4-year-olds earlier this month, it marked a huge departure from the scene just a year ago, when Mayor de Blasio was still seen as a longshot candidate and Christine Quinn was focusing on preschool loans. Other cities looking to expand their early childhood offerings may wonder how New York changed so quickly.

Pre-K expansion wasn’t a new cause for de Blasio: it was a hugely personal priority for the Mayor and his wife, and he had been highlighting the shortage of seats since serving as Public Advocate from 2010 until his mayoral election. The de Blasio camp built partnerships at both a personal and a political level from the start; the public debate with Governor Andrew Cuomo was never over whether to fund preschool, but how to fund it to balance the needs of the state and the city. Coalition-building didn’t stop there. In order both to solidify political support for this endeavor and to build on existing capacity, the Mayor was clear about including community- and faith-based providers.

Despite the image of tough-talking New York swagger, what really aided the rapid expansion was compromise and building partnerships (some of the very social skills kids will learn in pre-K!). Bringing together diverse stakeholders as well as local and state officials, in an effort so clearly supported by residents, put pre-K in the fast lane. No two cities will have the same mix of existing systems and political ideologies, but collaboration and compromise are key to meeting the needs of young learners across the country.

Formative Assessment:  Points to Consider for Policy Makers, Teachers, and Researchers

April 16, 2014

Formative assessment is one area in early childhood education where policy is moving at lightning speed. There’s been a lot of support for the appropriateness of this approach to assessment for young learners. Many policy makers and data users have “talked the talk,” perfecting the lingo and pushing the implementation of policies for this approach. Yet there are essential questions to consider when rolling out a plan or process for a state. In the brief released by the Center on Enhancing Early Learning Outcomes (CEELO), I outline several considerations for policy makers in moving such initiatives forward. They’re briefly outlined below, along with considerations for teachers and researchers.

For Policy Makers

Policies around formative assessment in early childhood education will be most successful when the below “top 10” items are considered thoughtfully before implementing.

Overall Considerations for Policymakers Responsible for Formative Assessment Systems

  1. Does the purpose of the assessment match the intended use of the assessment? Is it appropriate for the age and background of the children who will be assessed?
  2. Does the assessment combine information from multiple sources/caregivers?
  3. Are the necessary contextual supports in place to roll out the assessment and use data effectively? (e.g., training, time, ongoing support)
  4. Does the assessment have a base or trajectory/continuum aligned to child developmental expectations, standards, and curricula?  Does it include all key domains?
  5. Does the assessment have a systematic approach and acceptable reliability and validity data?   Has it been used successfully with similar children?
  6. Are the data easily collected and interpreted to effectively inform teaching and learning?
  7. What technology is necessary to gather data?
  8. Are the data useful to teachers and other stakeholders?
  9. What are the policies for implementation and what is the roll-out plan for the assessment?
  10.  Will data be gathered and maintained within FERPA and other security guidelines? Are there processes in place to inform stakeholders about how data are being gathered and held securely to allay concerns?

I encourage all stakeholders in assessment (policy makers, administrators, parents/caregivers, etc.) to exercise patience with teachers learning the science of this process and perfecting the art of implementing such an approach. Although many effective teachers across the decades have been doing this instinctively, as we make the approach more systematic, explicit, and transparent, teachers may face a steep learning curve. However, with the considerations above as part of the decision-making process, teachers will find it easier to be successful. The CEELO policy report provides a guide and framework for early childhood policymakers considering formative assessment: it defines formative assessment, outlines its process and application in the context of early childhood, and focuses on the issues to consider in implementing the formative assessment process. The report offers decision-makers a practical roadmap through several key questions to consider in selecting, supporting, and using data to inform and improve instruction.

For Teachers

The intent of formative assessment is to implement the process of using data (observation or other) to inform individualized instruction. The link between this type of embedded assessment and instruction should be seamless. Teachers work with great effort at this on several different levels. Effective early childhood teachers:

  • use immediate feedback from children in the moment and adjust the interaction based on this feedback.
  • collect evidence over time to evaluate the child’s growth and to plan long-term learning goals. These goals are reviewed periodically and adjusted based on new evidence.
  • look at aggregate data across their classrooms.  They examine the data for trends and self-reflect on their teaching practices based on what the data are showing.

For Researchers

We must move forward by setting a strong research agenda on the effects of formative assessment in early childhood classrooms–and not allow policy to outpace research.  We need further research around using formative assessment processes to collect, analyze, and use the data to improve teaching and learning in the early childhood classroom. This must first include randomized trials of formative assessment, to examine the impact on classroom quality and child outcomes. The field needs a clear understanding of how teachers are trained and supported in collecting and using the data, and just what supports are needed for success. This should be coupled with a qualitative understanding of how teachers are using data in their classrooms. Finally, an understanding of who is using the data, in what capacity–and how it fits within the larger assessment system–should be components of any examination of formative assessment.

–Shannon Riley-Ayers, Assistant Research Professor, NIEER and CEELO

What the new OCR early childhood data do and do not tell us

March 26, 2014

Recently released to great interest is the Office for Civil Rights (OCR) Early Childhood Data Snapshot. I want to call additional attention to this document and the survey behind it for two reasons. First, these new data identify serious educational problems that deserve more than one day in the sun. Second, these OCR data have significant limitations that policy makers, the media, and others should understand when using them. Public preschool education is delivered by a complex, interagency, mixed-delivery system that makes it more difficult to measure than K-12. Unless key limitations of the OCR survey are taken into account, users of the data can reach incorrect conclusions. For example, it was widely reported that 40 percent of school districts do not offer preschool. This is untrue: at the very least, every preschooler with a disability is offered a free appropriate education. The OCR survey also undercounts the provision of preschool education nationally, and its accuracy varies by state, which makes cross-state comparisons particularly perilous. Finally, definitions of such key terms as “suspension” are not what most people would assume, which complicates the interpretation of some high-profile findings.

Data from this OCR survey point to problems with access to preschool education and with policies regarding suspensions from preschool programs and retention (grade repetition) in kindergarten.

  • Every child should have access to high-quality preschool education. Yet, nearly half of all 3- and 4-year-olds do not attend any preschool program, public or private, and even at age 4, when attendance is more common, just 64% of 4-year-olds not yet in kindergarten attend preschool, according to the 2012 Current Population Survey.
  • The only “zero tolerance” policy that should apply in preschool is that there should be no preschool suspensions. Yet, a substantial number of preschoolers are suspended each year, with boys and African-American children more likely to be suspended than others. States and LEAs should examine their data, practices, and policies closely to prevent this problem.
  • States should look closely at their policies regarding kindergarten grade retention. Does it really make sense to pay for more than 1 in 10, or even 1 in 20, children to attend kindergarten twice? Better access to high-quality preschools, and added services in kindergarten such as tutoring for children who are behind, could be much more cost-effective. States with high kindergarten retention rates should be looking into why they are retaining so many children and what can be done to reduce these rates.

Universal access to high-quality public preschool addresses all of these problems. Better teachers, smaller classes, and more support from coaches and others would reduce suspensions. Such preschools would have more appropriate expectations for behavior, and teachers who can support the development of executive functions that minimize behavior problems. The lower quality of preschools attended by African-American children may partly explain their higher rates of preschool suspension. Finally, good preschool programs have been shown to reduce grade repetition, though bad policies are likely behind many of the high rates of kindergarten retention.

The importance of the problems identified by the OCR data raises another key issue to which most of this article is devoted: to use the data appropriately we must understand the limitations of the data and make sure we interpret them correctly.

Access is Complicated

Let us begin with the finding that “40 percent of school districts do not offer preschool.”  Federal and state laws require that every child with a disability be offered a free, appropriate education from ages three to five. Yet OCR data do not seem to consistently include these children when reporting preschool special education at either the LEA or school level. One reason is that some “school districts” include only older children, e.g., high school districts and vocational school districts. (About 1 percent of high school districts also provide preschool, typically to serve children of teen parents or as a vocational training program.) Limiting the analysis to districts with kindergarten, 70 percent report that they provide preschool, which still seems low. This is partly because some agencies other than LEAs are responsible for preschool special education services. It is also possible that some LEAs mistakenly stated that preschool was not provided. Turning to the number of children reported served, rather than the number of districts serving them, we find a similar problem. School reports undercount the numbers of preschool children receiving services, and the undercount is a bigger problem in some states than others. (A complete copy of the questionnaire can be downloaded here.)

The most obvious explanation for these undercounts is that the OCR survey respondents interpret the questions as asking about children served in public school buildings. At the district level, the OCR survey asks LEAs to first report the number of schools and then to report on their provision of preschool services. This may have led some districts to respond positively only when they served preschool children in public school buildings. At the school level, the OCR survey asks individual schools to report on whether they offer preschool programs and services “at this school,” and the enrollment count table specifies “only for schools with these programs/services.” Whether or not this has any influence on LEA interpretation of the survey, it seems likely that each school reports only preschool offered physically in that school.

Different Data Sources Yield Different Counts

Just how different are the OCR numbers on enrollment from estimates of total enrollment in preschool education offered by states and local education agencies derived from other data sets? The OCR survey reports 1.4 million enrolled. Data from the Current Population Survey, minus Head Start enrollment, leads to an estimate of about 1.8 million children attending state and local preschool education programs, indicating that the OCR survey is low by about 400,000 children, or 22% of the total. In terms of preschool special education services, the OCR data report about 300,000 children, but the Office of Special Education Programs reports 430,000 3- and 4-year-olds receiving special education services under IDEA, and there are additional preschoolers served who are older (while younger children are included in the OCR data). Preschool special education may account for a substantial portion of the undercount, but it seems unlikely to account for the majority of the problem. In sum, the OCR survey undercounts the number of children receiving public preschool education from states and LEAs once those served outside public schools are included.
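
The undercount percentage follows directly from the two totals. A quick sketch (enrollment figures as cited above, in millions):

```python
# Enrollment totals cited above (approximate, in millions).
ocr_count = 1.4     # OCR survey total
cps_estimate = 1.8  # CPS-based estimate, net of Head Start

shortfall = cps_estimate - ocr_count
print(f"Apparent undercount: {shortfall:.1f} million "
      f"({shortfall / cps_estimate:.0%} of the CPS-based total)")
# -> Apparent undercount: 0.4 million (22% of the CPS-based total)
```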

State Approaches Vary

As states differ in how they fund and operate preschool education, the extent to which the OCR data comprehensively capture preschool enrollment varies greatly by state. Looking state by state, it appears that the OCR survey performed fairly well in measuring regular preschool enrollment in most states. However, it grossly undercounted preschool provision in Arkansas, California, Florida, Georgia, New York, Oregon, Pennsylvania, and Vermont. These states make extensive use of private providers for public preschool education. In addition, the OCR figures diverge significantly from the IDEA counts for 10 other states. There are a number of possible reasons for more widespread “undercounting” of preschool special education, including: contracting with private providers for special education, responsibility for preschool special education resting with agencies other than LEAs, and service delivery in homes and other nonpublic school settings. Some preschoolers receive only individualized therapy or other services under IDEA, rather than a publicly provided classroom experience, but neither the OCR nor other data sets allow for the determination of how many children receiving IDEA services are in classrooms funded by public education.

For some states, the data appear to be reasonably accurate when compared to data for the same year from NIEER.[1] Data from the NIEER Yearbook and the OCR report are compared below for select states. For states like Georgia and Florida, where many programs are not funded through LEAs, this comparison indicates that the OCR numbers are very incomplete measures of the number of children provided with public preschool education. Relative to total enrollment in state-funded preschool education (which does not include all LEA provision or all preschool special education), Florida is undercounted by about 120,000 and Georgia by more than 30,000. Even in states where funding flows through districts, many children seem likely to have gone unreported because they are not served in public schools, as appears to be the case in New York. Also interesting is the case of Wyoming, which served 2,207 preschoolers aged 3 and 4 under IDEA, yet the OCR report has Wyoming serving just 13 children under IDEA. While the discrepancies could result primarily from OCR school-level respondents counting only children served in public school buildings, this may not be a complete explanation.


             NIEER Preschool Yearbook                          OCR Report
State        State-Funded Pre-K   IDEA Enrollment, 3s and 4s   Public School          Special Education
             Enrollment           (from Office of Special Ed)  Preschool Enrollment   Enrollment
Florida      175,122              21,007                       57,286                 16,351
Georgia       82,868               8,561                       50,779                  8,612
New Jersey    51,540              10,683                       48,186                  9,839
New York     102,568              45,390                       56,540                  3,857
Wyoming            0               2,207                          624                     13

New Jersey allows us to conduct a more fine-grained comparison of OCR data with data from LEAs that include children served by private providers. A simple statewide comparison might suggest reasonably full reporting for New Jersey: the state enrolled about 51,000 children in state-funded pre-K, which is not very different from the OCR number. However, about half of the 51,000 in state-funded programs attended private providers (including Head Start programs) contracted with districts, and New Jersey’s districts vary greatly in the extent to which they serve preschoolers through private providers. When we look at the numbers district by district, we find that the OCR and district totals closely correspond for districts serving children only or overwhelmingly in public school buildings, but for districts relying heavily on contracted private providers the OCR numbers correspond closely only to the numbers in public school buildings. The OCR report identifies more than 20,000 preschoolers served in New Jersey public schools who are not funded through the state pre-K programs, which just happens to be close to the number served under contract who are not in the OCR data. This strengthens our conclusion that the OCR data represent only children in public school buildings. This is not to fault the OCR survey in the sense that this is what it was designed to do, but this is not how the OCR data have been widely interpreted, nor is it adequate as a survey of preschool education offered through the public schools (and not just in their own facilities).
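
A district-by-district check of this kind is easy to mechanize. The sketch below uses invented district data and column names (the actual OCR and New Jersey files are not structured this way out of the box); it shows how the gap between a district’s total and its OCR count should track its contracted-private enrollment if OCR captures only public-building children.

```python
import pandas as pd

# Hypothetical district-level enrollment; names and numbers invented.
state = pd.DataFrame({
    "district": ["A", "B", "C"],
    "in_public_school": [900, 400, 1200],
    "in_contracted_private": [50, 800, 0],
})
ocr = pd.DataFrame({
    "district": ["A", "B", "C"],
    "ocr_preschool": [910, 415, 1195],
})

merged = state.merge(ocr, on="district")
merged["district_total"] = (merged["in_public_school"]
                            + merged["in_contracted_private"])
# If OCR reflects only public-building enrollment, this gap should
# roughly equal the contracted-private count in each district.
merged["ocr_gap"] = merged["district_total"] - merged["ocr_preschool"]
print(merged[["district", "ocr_gap", "in_contracted_private"]])
```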

Suspension and Retention Data

Given the limitations of the OCR data on numbers of children served, the total numbers should not be used as estimates of all children provided preschool education by the states and LEAs. They much more closely approximate the numbers served in public school buildings. Comparisons across states, LEAs, and schools should be approached with great caution. It is unclear exactly how this might affect the percentage of children reported as suspended, but it seems unlikely to overturn either the general conclusion that suspensions occur at a disturbing rate or that they are higher for African American children and boys. However, comparisons of suspensions across states or districts might be distorted by limitations of the data.

Another aspect of the survey with potential for misunderstanding is the definition of “suspensions.” In the OCR survey the definition includes not just children who have been sent home, but also those temporarily served in other programs offering special services for children with behavior problems. Such placements are not necessarily bad for children or to be avoided. However, the data do not allow for any division between children sent home and children sent to more appropriate placements. Nevertheless, the high rate at which children are temporarily removed from their regular classrooms for behavior problems is cause for concern.

The accuracy of the kindergarten retention data also deserves scrutiny. Earlier this year, NIEER collected state data on grade repetition by grade level from state sources of information, though not all for the 2011-12 year. Across all 27 states for which we obtained data, our figures averaged eight-tenths of a percentage point lower. Comparing only those for which we had 2011-12 data, our figures averaged half a percentage point lower. At least judged relative to the only other source we have, the OCR retention data seem reasonably accurate. That the OCR data are slightly higher might reflect efforts to minimize the appearance of a problem. There are some large discrepancies for a few states. Arkansas had 12 percent kindergarten retention in the OCR data and 6 percent in the state data we obtained; Michigan had 7 percent kindergarten retention in the OCR data and 12 percent in the state data we obtained. For such states, it may be useful to review the data on a district-by-district or school-by-school basis to identify reasons for the discrepancies. Even with kindergarten retention there can be differences due to interpretation. For example, should children who enter a transitional kindergarten program after kindergarten be considered retained? What about children who enter kindergarten after a year of transitional K? Any problems with the data would not negate the conclusion that some states have very high rates compared to others and that this deserves consideration by policy makers.
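
Flagging states that warrant this kind of review is a one-liner once the two sources sit side by side. A sketch with a hypothetical threshold (OCR rates are from Table 1 above; the state-collected rates for Arkansas and Michigan are from this passage, and the others are invented placeholders):

```python
import pandas as pd

# Side-by-side kindergarten retention rates (percent). OCR values
# from Table 1; state-collected values for AR and MI from the text,
# with the rest invented for illustration.
rates = pd.DataFrame({
    "state": ["AR", "MI", "NJ", "TN"],
    "ocr_pct": [12, 7, 3, 5],
    "state_pct": [6, 12, 3, 5],
})
# Flag states whose two sources diverge enough (the threshold here
# is an arbitrary choice) to merit district-level review.
rates["gap"] = (rates["ocr_pct"] - rates["state_pct"]).abs()
print(rates[rates["gap"] >= 3])
```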

Overall, OCR has provided a valuable service by collecting these early childhood data. Without the OCR data, there would be no basis for raising the issue of preschool suspensions and no way to track progress on this issue in the future. Similarly, without the OCR data there would be no basis for comprehensive state-by-state comparisons on grade retention at kindergarten. Nevertheless, great care must be taken to recognize the limitations of the OCR data, and the federal government should do more to reduce those limitations. OCR is already working to improve the next survey. Ultimately, they may have to go beyond a school-based survey, because much of public education for preschool children takes place outside of public school buildings even when it is under the auspices of the state education agency (SEA). And, in some states public preschool education is not entirely under the SEA. Possibly, states could supplement LEA data by providing the same basic information for preschoolers they serve outside public school buildings. In addition, procedures might be added to verify that respondents properly understand all questions, especially for states where the responses seem at odds with data from other sources. Some data might be collected in more detail: preschoolers suspended at home with no services separated from those in alternative placements; preschool education children in classrooms separated from those served elsewhere; and transitional K separated from repetition in regular K. If you have additional suggestions, particularly based on knowledge of your state’s preschool services systems, OCR would undoubtedly welcome them.

– Steve Barnett, NIEER Director


[1] NIEER data report enrollment in state-funded pre-K; they do not include LEA preschool services that are not part of state-funded pre-K or IDEA, so they will not capture the full undercount.

