
A bipartisan push for more evidence-based public policy - but would it really improve decisions?

Speaker of the House Paul Ryan, R-Wis., on Capitol Hill in Washington, Tuesday, Nov. 7, 2017. (AP Photo/J. Scott Applewhite)

With little fanfare, the Commission on Evidence-Based Policymaking, established by Congress, issued its final report in September, and the report's recommendations are rapidly making their way into bipartisan legislation sponsored by House Speaker Paul Ryan and Sen. Patty Murray. The bulk of the report makes noncontroversial recommendations about ways to increase the amount of, and access to, data on government programs, arguing for more of both. It encourages greater coordination and use of data in policy decision-making. A concluding section proposes somewhat more formal rules or procedures for obtaining and using data. For example, it recommends that each federal department have what is in effect a data-based evaluation czar. The report builds on efforts in some states, funded by the Pew and MacArthur Foundations, that go a little further, suggesting that states should give greater priority to policies backed by better or more conclusive evidence.

These are well-intentioned and well-conceived sentiments, and they may marginally improve government policymaking. But there are two harder questions on which the report is silent. One is what to do with an innovative and logical policy proposal that has strong supporters but little evidence of effectiveness. By analogy, if this were a prescription drug with a strong foundation in basic science and a plausible mechanism of action, but no evidence of effectiveness from clinical trials, the FDA would not approve it. We know that many public policies have sounded like a good idea at the time but turned out to be either ineffective or harmful. Would policymakers, faced with an immediate problem (and aren't all policy problems immediate?), delay implementation of a plausible policy until evidence is generated about its effectiveness?

The other problem occurs when a politically controversial policy proves to be the most cost-effective among a set of policies. It could be controversial because of its high cost (even if more than matched by high benefits), or because a competing policy is less effective but much less costly. Would policymakers be willing to "pull the plug" on a more lifesaving option because it is much more expensive than an alternative that is almost, but not quite, as effective?

In both cases some proponents will want to go ahead with a program that either has as yet no evidence about its effectiveness (which does not mean it is necessarily ineffective) or has evidence that says it is not effective enough for what it costs. These are precisely the conflicts where evidence is needed most and is most missed when it is absent; these are the cases where evidence will make a difference. Understandably, the commission did not focus on these harder cases, discussing instead logical programs that will be supported by evidence, so everyone can win.

Using evidence for priority setting is not mentioned in the commission's report, but it is part of the evidence-based environment. Priority setting exposes this cutting edge because it may lead to the shelving of a great, untested idea with many advocates in favor of a moderately attractive idea with good evidence behind it. All organizations, private for-profit as well as government, face this quandary: Should we gamble on a big idea or put our resources into something tried, true, and unexciting? It will be interesting to see whether the argument for the power of evidence survives this test.