Short-Term Test Scores Miss the Real Story on School Choice
The longer the time horizon, the better voucher programs look
As a researcher who works for an advocacy organization, I’ve come to refer to Indiana, Louisiana, and Ohio as the Terrible Three. Not because I have anything against them; as the respective homes of the Indianapolis 500, jambalaya, and a university whose mascot is described as an “anthropomorphic buckeye nut,” who could have anything against them?
No, these states are a burr in my saddle because they are the three states where researchers have found negative results for private school choice programs. Given the overwhelmingly positive tilt of research on private school choice, these particular studies get shouted from the rooftops.
As I ranted to Ginny Gentles on her podcast recently, I don’t think these studies are the slam dunks that opponents think they are. To be clear, I’m not dumping on the researchers who conducted them. They are not the ones out there trumpeting the findings, and they did the best they could given the circumstances. In a saner world we would be able to talk about the relative strengths and weaknesses of research in a more dispassionate way and learn a bit from each study, rather than use it as a sledgehammer to bludgeon our ideological opponents. But that snake gave Adam and Eve an apple and it all sort of went downhill from there.
Setting my criticisms aside, an interesting pattern has emerged in the research literature on school choice, one that was just reinforced by a paper on Ohio’s voucher program released last week by the Urban Institute.
Matthew Chingos, David Figlio, and Krzysztof Karbownik found that Ohio’s voucher students:
“were substantially more likely to enroll in college than students who remained in public schools (64 versus 48 percent). The differences in college enrollment were especially large at four-year colleges (45 versus 30 percent) and selective colleges (29 versus 19 percent). The enrollment impacts were strongest for male students, Black students, students with below-median test scores before leaving public school, and students from the lowest-income families.”
If those names sound familiar, it is because two of them authored the 2016 paper that found negative math and reading results for voucher students in Ohio.
They are not the only people to have this experience. A 2021 paper on Louisiana’s voucher program by Heidi Holmes Erickson, Jon Mills, and Pat Wolf found negative achievement effects (at least in math) but no negative attainment effects.
It was different research teams in Indiana, where a 2018 paper by Mark Berends and Joe Waddington found negative results in math and null effects in English. But a 2021 paper by Megan Austin and Max Pardo found positive attainment effects. As Pat Wolf summarized for Education Next, “Adjusted for their background, high-school students who participate in the Indiana Choice Scholarship Program enroll in college within a year of graduating from high school at a rate of 61 percent, 9 percentage points higher than the rate of 52 percent for similar students in traditional public schools.”
These findings only reinforce what Colin Hitt, Pat Wolf, and I argued in an AEI paper and subsequent book chapter back in 2018. As we wrote then:
“A growing number of studies are finding that school choice programs can improve high school graduation rates, college attendance, and earnings—without producing gains in test scores. Conversely, studies of other school choice programs have found large short-term test score gains but no lasting benefits in terms of graduation rates or college attainment. Improving test scores appears to be neither a necessary nor sufficient condition for improving the later-life outcomes that truly matter.”
Our analysis at the time covered 39 impact estimates across 20 different programs. I would bet that if we updated the study today, the result would be the same.
(As an aside, when we published that paper Mike Petrilli melted down like he had just been told Santa wasn’t real and published a 5-part(!!) reply to it. It only took Pat one post to patiently walk through every logical and methodological mistake that Mike made.)
So what is happening here? Let’s ask the authors of these studies what they think.
Back in 2016, Figlio and Karbownik wrote:
“Participation in the EdChoice program likely reduced students’ reading and mathematics scores relative to what would have occurred in the public sector—for those students who had previously attended the highest performing of the EdChoice-eligible schools. This may be because the students attended lower-quality private schools than the public schools that they left (especially because the public schools likely performed somewhat better as a consequence of the EdChoice program, though the improvement in the public schools is nowhere near as large as the estimated reduction in participants’ scores after going to private schools). It may also be that the private schools attended are not necessarily lower quality but are focused on different sets of skills and competencies, or it may be that the private schools attended under the EdChoice program may not have emphasized the state assessments to the degree to which the public schools did.” (Emphasis mine.)
Their opinions haven’t changed. In their most recent paper, published nearly ten years later, they write:
“Our findings of positive impacts on college enrollment and degree attainment indicate that state test scores might not be the best way to judge the performance of private schools, which often have different curricula from public schools and might face different incentives to concentrate on state examinations.”
It is totally possible that the private schools participating in these choice programs are worse at teaching math and reading than their neighboring public schools. But given the null or even positive effects seen over the longer term, this just doesn’t seem to be the most likely scenario. Rather, it’s probably the case that these schools do not teach in ways aligned with the state tests used to judge their performance, and/or that they are successful in instilling non-tested skills and dispositions that help students do better later in life. Not such a sad story after all.
Given all this, perhaps I should rename the Terrible Three.
The Thriving Triple?
The Terrific Triumvirate?
The Tantalizing Treble?
I’ll stop now.