About Books

Winter 2018

By Richard Barbieri

“‘What is truth?’ said jesting Pilate, and would not stay for an answer.” Francis Bacon’s wry observation, now some four centuries old, is as relevant today as when he wrote it. Today we could set beside it another question: “What is research?”

Research can mean almost anything: looking up information about an author or a state for a school report; running a longitudinal study of a large cohort of randomly selected individuals for several decades to gather mortality and morbidity data; or sending a probe to the farthest reaches of space to learn more about the origins of our universe. Like many other aspects of human life, research has grown geometrically in quantity, and changed in nature, over the past century and a half. (Bacon may scarcely have known the word “research”: it first appeared in English only in 1577, and it occurs nowhere in his writing.)

The very definition of research is often the subject of debate—even among specialists. Different researchers have asserted, for example, that “the plural of anecdote is data” and that “the plural of anecdote is not data.” (Google gives the nod to “not,” with 55,000 citations versus 45,000 for the yeas.) 

The purposes of research can be filed under three headings: knowledge, prediction, and amelioration. Aristotle may not have been a good researcher (he thought thinking happened around the heart and believed in spontaneous generation), but he got it pretty much right when he said, “All men by nature desire to know.” Some research, such as inquiry into when the universe began or what killed the dinosaurs, simply satisfies that characteristically human desire. But knowing the frequency of earthquakes in a region when designing buildings there, or understanding a disease process well enough to interrupt it or lessen its effects, has clear practical benefits as well.

Probably the most interesting research to most of us is about people, and it seems to fall into two categories: how they learn and why they don’t learn. More specifically, we want to know how they take in and evaluate new information and make good decisions and why they often fail to, or even seem to resist, doing those things.

Although we have made significant strides in understanding how to remove some barriers to learning—from Claude Steele’s stereotype threat to Carol Dweck’s dichotomy between growth and fixed mindsets to research on dyslexia—we have made little progress in changing minds that are apparently closed. We understand these frustrating behaviors far better than we once did, thanks largely to behavioral psychology and behavioral economics, as described in such books as Dan Ariely’s Predictably Irrational and Daniel Kahneman’s Thinking, Fast and Slow. But diagnosis and remediation are still poles apart.

Exactly a half century ago, Arthur Koestler’s The Ghost in the Machine popularized an explanation for the self-destructive human traits he had witnessed firsthand in the Europe of the 1930s and 1940s. His description of the triune brain’s inability to master the impulses that lead to individual and mass violence remains a plausible postulate, affirmed in many ways by research of which he could hardly have dreamed.

Among the legion of books examining the mind’s workings, Robert Sapolsky’s Behave makes a fair bid to become the Thinking, Fast and Slow for readers wanting to connect neuroscience, biology, and the social sciences.

Sapolsky attempts an exhaustive analysis of behavior, working across disciplines and backward through time. He opens with a chapter on a single, instantaneous behavior, then follows it with eight more that move causally backward: “One Second Before,” “Days to Months Before,” “Back to the Crib,” and finally “Centuries to Millennia Before.”

To do this, Sapolsky draws on research from many fields. Besides offering three lengthy appendices covering the relevant details of neuroscience, endocrinology, and protein expression, he provides supporting data ranging from the familiar—collective versus individualist cultures, implicit bias, compassion among nonhuman species—to “where did he find that?” information. We learn that the temperature of a drink we’re holding shapes how much warmth we perceive in another person’s personality; that planes with first-class sections are four times as likely to experience disruptive behavior from coach passengers; and that a shocked lab rat can reduce its stress level most effectively by biting another rat of lower status. Each example has its place in Sapolsky’s overarching behavioral explanations.

Unfortunately, after 750 pages of analysis, Sapolsky concedes, “If you had to boil this book down to a single phrase, it would be ‘It’s complicated.’” But he’s not content to stop there. Instead he ends: “Eventually it can seem hopeless that you can actually fix something, can make things better. But we have no choice but to try. And if you are reading this, you are probably ideally suited to do so. You’ve amply proven you have intellectual tenacity. You probably also have running water, a home, adequate calories, and low odds of festering with a bad parasitic disease. You probably don’t have to worry about Ebola virus, warlords, or being invisible in your world. And you’ve been educated. In other words, you’re one of the lucky humans. So try.”

Philip Tetlock and Dan Gardner’s Superforecasting is about the art and craft of using data to make predictions, mostly in the arenas of economics and politics. But this is no self-help book. It’s the result of a multiyear competition, funded by the Intelligence Advanced Research Projects Activity (IARPA), “an agency within the intelligence community that reports to the director of National Intelligence,” which aims to improve U.S. intelligence gathering. Tetlock, whose earlier study Expert Political Judgment: How Good Is It? How Can We Know? showed that most pundits are no more accurate than random guessers at predicting such phenomena as the breakup of the Soviet Union, assembled a team of volunteers called the Good Judgment Project (GJP). His “intellectually curious laypeople” were matched against teams from MIT, the University of Michigan, and the intelligence agencies themselves, as well as a control group. The result: “After two years, GJP was doing so much better than its academic competitors that IARPA dropped the other teams.”

How did this happen? The authors tell such an engrossing story, full of anecdote, character, and observation, that it seems almost unfair to summarize its conclusions. It would be like offering a group of equations or some 3D models instead of inviting the reader into a volume on the Manhattan Project or the discovery of DNA.

Nevertheless, consider some characteristics of superforecasters and how they might be applicable in more local circumstances. Among successful forecasters’ “Ten Commandments”:
 
• “Focus on questions where hard work is likely to pay off.” For example, don’t try to look too far out in time, or spend much time in an area that is highly unpredictable. You might use local research to plan next year’s school events during times that are least likely to include major storms, but have an indoor alternative for graduation no matter where you are.
• “Break seemingly impossible problems into tractable sub-problems.” Their illustration is estimating the number of piano tuners in Chicago by asking “what four facts would be needed to answer the question?” and then gathering plausible or concrete answers to each (a simple worked version appears after this list).
• “Strike the right balance between under- and overconfidence, between prudence and decisiveness.” Do you offer the job to the first candidate who wowed you on her first visit, or wait so long to interview others and check references that she takes another position?
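
To make the piano-tuner decomposition concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption of mine, not a number from the book; the point is only how four rough facts combine into a defensible estimate.

```python
# Fermi-style estimate: how many piano tuners work in Chicago?
# All inputs are illustrative guesses, not data from Superforecasting.

population = 2_700_000                   # rough population of Chicago
pianos_per_person = 1 / 100              # guess: one piano per 100 residents
tunings_per_piano_per_year = 1           # guess: each piano tuned once a year
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 a day, 5 days a week, 50 weeks

pianos = population * pianos_per_person
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year

print(f"Estimated pianos: {pianos:,.0f}")                 # ~27,000
print(f"Estimated tunings per year: {tunings_needed:,.0f}")  # ~27,000
print(f"Estimated piano tuners: {tuners:,.0f}")           # ~54
```

Each input can be off by a factor of two or three and the final figure still lands in a usable range, which is why forecasters prize the technique.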
 
Finally, there’s research that leads to amelioration of a problem. Iris Bohnet confidently offers such research in What Works: Gender Equality by Design. Beginning with the well-known example of a strategy that succeeded in diversifying many symphony orchestras—putting a screen between the auditioning musician and the committee—Bohnet, a professor at Harvard Kennedy School, discusses techniques that have shown efficacy not only in achieving gender balance but also in increasing diversity and performance in a range of environments.

Bohnet describes how the halo effect, confirmation bias, and other behavioral tendencies reduce our effectiveness in hiring, performance evaluation, and other aspects of work, school, and life. She mentions a number of well-known errors, such as judging candidates by the names on their résumés and unconsciously favoring height—it has been 120 years since we chose a president of below-average height. But some of the most powerful research discoveries she describes are particularly applicable to schools. Noting that “fit” is often cited as a vital aspect of hiring (and, I would assume, of admissions as well), she observes, “We should be worried, however, when employers keep selecting employees … who look like them.” Since fit is highly subjective, often based first on visual cues, and powerfully enhanced by commonalities—clothing, hobbies, schools attended—learning to set such details aside is crucial to finding better ways to bring students, and employees, into our schools.

But Bohnet cites a method of evaluating people that is even more detrimental, and that is probably the dominant one used in schools, at least for employees. She observes that in all fields “unstructured interviews receive the highest ratings for perceived effectiveness,” even though “the data showing that unstructured interviews do not work is overwhelming.” Both the data she provides and the causal connections she draws between structured interviews and better results are highly persuasive. Taken together, the research conclusions from What Works might be the ones that, if adopted, would make the biggest difference in our schools.

Richard Barbieri spent 40 years as teacher and administrator in independent schools. He can be reached at [email protected].