Commentary and Analysis of “Evidence Use in Policymaking” (a presentation by Mattie Mattie)
Mattie Mattie's talk on "Evidence Use in Policymaking" examined the complexities policymakers face when integrating evidence into decision-making processes. She described, in particular, the challenge of determining appropriate funding levels for programs based on evidence of their effectiveness.
Mattie highlighted a USAID study in which treatments aimed at improving reading skills produced over a 200% increase in literacy rates among participants. These improvements persisted a year after the support ended and reached nearly 50,000 children. Even with these positive outcomes, Mattie posed the question: "Would you fund this program for $100,000? What about $1 million? Or $10 million?" This kind of question illustrates the difficulty of translating empirical results into concrete funding decisions. Significant barriers identified by policymakers include the "uncertainty about how to turn evidence into action" and the "difficulty interpreting the implications of evidence." To address this, the Office of Evaluation Sciences (OES) partners with federal agencies to identify behavioral barriers and design experiments to overcome them. In the experiment Mattie discussed, 191 high-ranking federal employees across 22 of 24 federal agencies participated, yielding over 1,000 observations. The study revealed that complexity plays a key role in how policymakers respond to evidence about program impact: simplifying decision problems enhanced decisiveness among policymakers.
Mattie's emphasis on the challenges of interpreting and utilizing evidence in policymaking resonates with our class discussions on the complementary roles of lab and field experiments. While field studies, like the USAID program evaluation, provide real-world insights, they can be noisy and context-specific. This aligns with the notion that field studies are not always more valid or insightful than lab studies. Moreover, the difficulty policymakers face in understanding the implications of evidence mirrors the challenge of distinguishing behavioral mechanisms, a topic we've explored in class. Mattie's work with the OES to simplify complex information for policymakers exemplifies the importance of designing studies that measure latent variables and implement randomized interventions. This approach helps identify underlying mechanisms with heterogeneous impacts, rather than merely addressing superficially appealing questions. Lastly, Mattie's caution against the complexity that hinders decision-making echoes our discussion on avoiding unnecessary experimentation that may lead to unintended harm. Simplifying evidence helps prevent misinterpretation and fosters more effective policy implementation.
Relatedly, John A. List's article advocates for practices that enhance the role of experiments in public policy, several of which are reflected in Mattie's approach. The need for a high post-study probability before advancing policies is evident in Mattie's emphasis on robust evidence to justify funding decisions. Her collaboration with federal agencies to conduct large-scale experiments aligns with List's call for leveraging multi-site trials to understand variation in program impacts. By involving a wide range of agencies and policymakers, Mattie's work captures diverse perspectives and contexts, enriching the applicability of the findings. The focus on simplifying complex evidence for policymakers connects to List's point that policymakers need to understand the non-negotiable components of a program. By presenting aggregated metrics and side-by-side comparisons, Mattie ensures that essential program impacts are clear, aiding policymakers in making informed decisions. Finally, Mattie's work exemplifies the importance of following up scaled programs with sound empirical approaches, as List suggests. By continuously engaging with policymakers and refining how evidence is presented, she contributes to the fidelity and effectiveness of policy implementations based on experimental findings.
Beyond the OES, what other mechanisms can enhance the integration of behavioral insights into policy design and implementation?