Enough of Randomistas: Why Development Investing Should Prioritize Critical Infrastructure Over Behavior Change.

The Randomista Revolution in development economics holds that, rather than prioritizing investments in physical infrastructure, policymakers should begin with behavioral interventions, because large investments like roads, schools, and clinics are harder to test rigorously (given their high costs and other complexities). Unfortunately, this revolution is steering the field in the wrong direction. The hype around Randomized Controlled Trials (RCTs) has fixed attention on poorly operationalized behavioral measures and on small, isolated, attention-grabbing interventions rather than broad, long-term development outcomes. It also leads many Randomistas, and the stakeholders who buy into their proposals, to assume that what works for individuals will work for entire economies, a fallacy of composition.


Even though Randomistas have gained significant influence through their behavioral discoveries, their approach has serious blind spots. Randomized interventions in developing countries often fail to produce meaningful economic growth because they ignore larger forces such as climate change, carbon emissions, trade barriers, and global price shifts, variables no single country can control on its own. Moreover, the fixation on RCTs has made economists overly confident that local results will scale nationally, even though these quick fixes routinely overlook the institutions, structural transformation, and geographic and environmental conditions that actually drive development. The approach nonetheless sells well to policymakers, who often do not have the expertise to probe the results, so flashy but limited solutions get more attention than the structural changes that could drive real progress.

Nor does a small intervention that helps some people necessarily scale up to solve systemic problems, and this kind of thinking has produced policies that fail to address the deeper issues holding back development. Lant Pritchett and Justin Sandefur, for example, examine intervention programs in microcredit, education, health, and migration, comparing experimental and non-experimental estimates within the same studies as well as experimental results across studies. Their findings reveal major inconsistencies, raising serious doubts about whether these interventions work as intended. This lack of construct and external validity makes it hard to justify using such results to set national policy or to scale up these programs.

Some Randomistas counter that behavioral barriers such as procrastination, lack of information, or cognitive overload can limit the effectiveness of otherwise beneficial infrastructure, and that insights from RCTs can shape how people engage with large infrastructure investments. A comprehensive discussion of this idea appears in The Behavioral Foundations of Public Policy, edited by Eldar Shafir. The core argument is that designing effective policies requires understanding the behaviors that shape how people respond to policy interventions, rather than committing to large infrastructure investments that carry debt risks. Even granting this critique, a big issue remains: many of these over-hyped studies largely ignore how much context and people's preconceptions vary, not just between countries but within the same country.

A telling example is "The Behavioralist as Tax Collector" by Hallsworth, List, Metcalfe, and Vlaev, which tested how social norm messages affect tax payments in two large field experiments in the UK. The authors claimed that adding social norm messages to tax reminder letters increased payments, with messages describing what most people do (descriptive norms) working better than messages telling people what they should do (injunctive norms). But the study has significant flaws, especially in how it interprets social norms. First, every message tested led to higher payments, so it is unclear whether social norms were really the mechanism. Second, when comparing messages, it is hard to tell whether the effect came from the norm itself, from the message being more eye-catching, or from people reading it as a threat rather than a friendly reminder. One message, for instance, read: "Nine out of ten people in the UK pay their tax on time. You are currently in the very small minority of people who have not paid us yet." That could easily come across as a warning of consequences rather than a social norm nudge. Worse, the authors acknowledged in their appendix that some recipients saw these messages as threats, yet they barely mention this in the main text.

References:

  • Ashraf, N., Glaeser, E. L., & Ponzetto, G. A. M. (2016). Infrastructure, incentives, and institutions. American Economic Review: Papers & Proceedings, 106(5), 77–82.

  • Commission on Growth and Development. (2008). The growth report: Strategies for sustained growth and inclusive development. World Bank.

  • Pritchett, L. (2014). The folk and the formula: Fact and fiction in development (Working Paper). Center for Global Development.

  • Pritchett, L., & Sandefur, J. (2015). Learning from experiments when context matters. American Economic Review: Papers & Proceedings, 105(5), 471–475.

  • International Monetary Fund. (2014). Is it time for an infrastructure push? The macroeconomic effects of public investment. In World Economic Outlook: Legacies, clouds, uncertainties (Chapter 3).

  • Hoff, K., & Stiglitz, J. E. (2016). Striving for balance in economics: Towards a theory of the social determination of behavior. Journal of Economic Behavior & Organization, 126, 25–57.

  • Ansar, A., Flyvbjerg, B., Budzier, A., & Lunn, D. (2016). Does infrastructure investment lead to economic growth or economic fragility? Evidence from China. Oxford Review of Economic Policy, 32(3), 360–390.

  • Miguel, E., & Kremer, M. (2004). Worms: Identifying impacts on education and health in the presence of treatment externalities. Econometrica, 72(1), 159–217.
