Applying new rigor in studying education

Gina Kolata / New York Times News Service


What works in science and math education? Until recently, there had been few solid answers — just guesses and hunches, marketing hype and extrapolations from small pilot studies.

But now, a little-known office in the Education Department is starting to get some real data, using a method that has transformed medicine: the randomized clinical trial, in which groups of subjects are randomly assigned to get either an experimental therapy, the standard therapy, a placebo or nothing.
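In outline, the method is easy to sketch. The toy example below is only an illustration of random assignment (the group labels and roster are hypothetical, not drawn from any study mentioned here): it shuffles a list of participants and deals them evenly into study groups, so that no one gets to choose who receives the experimental program.

```python
import random

def assign_groups(students, groups=("experimental", "standard", "control")):
    """Shuffle participants, then deal them evenly into the study groups."""
    shuffled = list(students)
    random.shuffle(shuffled)  # chance, not choice, decides each assignment
    assignment = {g: [] for g in groups}
    for i, student in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(student)
    return assignment

if __name__ == "__main__":
    roster = [f"student_{n:02d}" for n in range(1, 25)]
    for group, members in assign_groups(roster).items():
        print(group, len(members), members)
```

Because the groups are formed by chance alone, any later difference in achievement between them can be attributed to the program being tested rather than to which students or schools happened to sign up for it.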

The findings could be transformative, researchers say. For example, one conclusion from the new research is that the choice of instructional materials — textbooks, curriculum guides, homework, quizzes — can affect achievement as profoundly as teachers themselves; a poor choice of materials is at least as bad as a terrible teacher, and a good choice can help offset a bad teacher’s deficiencies.

So far, the office — the Institute of Education Sciences — has supported 175 randomized studies. Some have already concluded; among the findings are that one popular math textbook was demonstrably superior to three competitors, and that a highly touted computer-aided math-instruction program had no effect on how much students learned.

Other studies are underway. Cognitive psychology researchers, for instance, are assessing an experimental math curriculum in Tampa, Fla.

The institute gives schools the data they need to start using methods that can improve learning. It has a What Works Clearinghouse — something like a mini Food and Drug Administration, but without enforcement power — that rates evidence behind various programs and textbooks, using the same sort of criteria researchers use to assess effectiveness of medical treatments.

Hurdles to overcome

Without well-designed trials, such assessments are largely guesswork. “It’s as if the medical profession worried about the administration of hospitals and patient insurance but paid no attention to the treatments that doctors gave their patients,” the institute’s first director, Grover Whitehurst, now of the Brookings Institution, wrote in 2012.

But the “what works” approach has another hurdle to clear: Most educators, including principals, superintendents and curriculum supervisors, do not know the data exist, much less what they mean.

A survey by the Office of Management and Budget found that just 42 percent of school districts had heard of the clearinghouse. And there is no equivalent of an FDA to approve programs for marketing, or of health insurance companies to refuse to pay for treatments that do not work.

Nor is it clear that data from rigorous studies will translate into the real world. There can be many obstacles, said Anthony Kelly, a professor of educational psychology at George Mason University. Teachers may not follow the program as designed, for example.

“By all means, yes, we should do it,” he said. “But the issue is not to think that one method can answer all questions about education.”

In the United States, the effort to put some rigor into education research began in 2002, when the Institute of Education Sciences was created and Whitehurst was appointed the director.

Shift in focus

“I found on arriving that the status of education research was poor,” Whitehurst said. “It was more humanistic and qualitative than crunching numbers and evaluating the impact.

“You could pick up an education journal,” he went on, “and read pieces that reflected on the human condition and that involved interpretations by the authors on what was going on in schools. It was more like the work a historian might do than what a social scientist might do.”

At the time, the Education Department had sponsored exactly one randomized trial. That was a study of Upward Bound, a program that was thought to improve achievement among poor children. The study found it had no effect.

So Whitehurst brought in new people who had been trained in more rigorous fields, and invested in doctoral training programs to nurture a new generation of more scientific education researchers. He faced heated opposition from some people in schools of education, he said, but he prevailed.

The studies are far from easy to do.

“It is an order of magnitude more complicated to do clinical trials in education than in medicine,” said F. Joseph Merlino, president of the 21st Century Partnership for STEM Education, an independent nonprofit organization. “In education, a lot of what is effective depends on your goal and how you measure it.”

As the Education Department’s efforts got going over the past decade, a pattern became clear, said Robert Boruch, a professor of education and statistics at the University of Pennsylvania. Most programs that had been sold as effective had no good evidence behind them. And when rigorous studies were done, as many as 90 percent of programs that seemed promising in small, unscientific studies had no effect on achievement or actually made achievement scores worse.

For example, Michael Garet, the vice president of the American Institutes for Research, a behavioral and social science research group, led a study that instructed seventh-grade math teachers in a summer institute, helping them understand the math they teach — such as why you invert and multiply when dividing fractions.
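As a brief aside on the arithmetic involved (an illustration only, not material from the study itself): dividing by a fraction is the same as multiplying by its reciprocal, because multiplying both numerator and denominator by that reciprocal turns the denominator into 1:

\[
\frac{a}{b} \div \frac{c}{d}
= \frac{a/b}{c/d}
= \frac{(a/b)\,(d/c)}{(c/d)\,(d/c)}
= \frac{a}{b}\cdot\frac{d}{c}
= \frac{ad}{bc}.
\]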

The teachers’ knowledge of math improved, but student achievement did not.

“The professional development had many features people think it should have — it was sustained over time, it involved opportunities to practice, it involved all the teachers in the school,” Garet said. “But the results were disappointing.”