
Overconfident Experts as Poor Predictors

Experts are poor predictors of the future.

* “In one study, college counselors were given information about a group of high-school students and asked to predict their freshman grades in college. The counselors had access to test scores, grades, the results of personality and vocational tests, and personal statements from the students, whom they were also permitted to interview. Predictions that were produced by a formula using just test scores and grades were more accurate.”

* In another study, “data from a test used to diagnose brain damage were given to a group of clinical psychologists and their secretaries. The psychologists’ diagnoses were no better than the secretaries’.”
 
* In the 1990s, a psychologist interviewed 284 experts who “made their living ‘commenting or offering advice on political and economic trends.’” He asked these experts to predict whether particular events would occur soon, both in parts of the globe they knew well and in areas with which they were familiar but less knowledgeable. Would Mikhail Gorbachev be pushed out as leader of the Soviet Union? Would the U.S. go to war in the Persian Gulf? Which nation would emerge as a big market? He collected over 80,000 predictions.
 
“The results were devastating,” Daniel Kahneman writes. “People who spend their time and earn their living studying a particular topic produce poorer predictions than dart-throwing monkeys…” (p. 219). Moreover, when confronted with their errors, experts would admit to the mistakes but offer a list of excuses.
 
I could give more examples, such as the economists who predicted a healthy economy in 2007 rather than a bursting housing bubble and the 2008 recession that plunged the U.S. into high unemployment and low economic growth. But I won’t.
 
OK, overconfident experts are poor predictors of the future. So what?
 
It matters when educational policymakers use expert opinion to make decisions that have big effects on the lives of students, teachers, and parents, and on the direction of schooling. Consider value-added measures. On this issue, the experts differ. See here and here. Yet federal policymakers, confident that value-added measures are the heart and soul of accountability, have chosen one set of experts over another: those who are confident that student test scores must be included in any evaluation of teacher effectiveness. In granting waivers to states from portions of NCLB, for example, U.S. Secretary of Education Arne Duncan tied that relief to states increasing the number of charters, adopting Common Core Standards, and, the punch line, ensuring that student test scores would be part of teacher evaluations.
 
In ignoring the set of test and measurement experts who know intimately the flaws of value-added measures and the untoward consequences of judging teachers on each year of student test scores in math and reading (and on a horde of new tests yet to be developed), the Obama administration has adopted a political recipe for disaster far worse than that of the “dart-throwing monkeys.”
 
Note the adjective “political” in the previous sentence. I do not mean it as a negative, since politics is inherent to any educational policymaking and action. I used the word to suggest that political rationality, that is, retaining the support of educational entrepreneurs, corporate leaders concerned about failing U.S. schools, and those who support charters, trumped vanilla-plain rationality: the clear thinking that could have anticipated the ill effects nearly certain to develop as these metrics are used to fire teachers and determine salaries.
 
Those predictable ill effects, that is, outcomes that are already occurring or have occurred, are no longer on the horizon; they are in plain sight. Such effects include even more teachers avoiding low-income and minority schools than do so now; parents pressing principals to get rid of teachers rated “ineffective” or merely “effective”; constant tweaking of the teacher evaluation system to give more weight to some factors over others; and lawsuits from teachers fired on the basis of one or two years of student test scores, contesting the algorithm used by district administrators. These outcomes can be anticipated.
 
That is a post-mortem of a policy choice. I do wonder whether a pre-mortem might have helped policymakers avoid flawed value-added measures. Perhaps not, because the political judgment of choosing one set of experts over another to ensure the support of key constituencies carries far more weight, regardless of the predictable ill effects of defective policies. Moreover, decision-makers who glow with optimism are often allergic to others pointing out problems and identifying defects in a policy not yet implemented. Sadly, in the political world of educational policymakers and experts, there is no vaccine for overconfidence. It comes with the territory. So kiss pre-mortems goodbye.


Larry Cuban

Larry Cuban is a former high school social studies teacher (14 years), district superintendent (7 years), and university professor (20 years).