by Murat Iyigun, guest blogger
Listening to Angus Deaton’s dead-on critique of the current state of the literature on health and development at an AEA session this past weekend, I could not help but think about the costs and benefits of methodology fads in our profession in general, and growth & development economics in particular.
As Dani has blogged about in the past, there has been fascinating progress in the field of development over the past two decades. In the late 1980s and 1990s, macroeconomics-oriented, cross-country growth analyses ruled the day (see here and here, for instance). But due to credible methodological concerns about simultaneity, reverse causality, and omitted-variables bias, this strand began to recede into the dusty, public-access portion of JSTOR. These days, the economic development literature is dominated by microeconomics-based, randomized-experiment studies (for two very imaginative and useful applications, see this and that).
On the one hand, such cycles and transitions are a normal and necessary part of the process of scientific advance: we make observations, formulate hypotheses, adopt a methodology, and test ideas. As scholarly scrutiny exposes the deficiencies, we revise, modify, and retest. On the other hand, I am of the view that there are significant and dangerous intellectual costs associated with the compartmentalized and disjointed manner in which development economics has evolved.
Reading the current literature in the field, one could be forgiven for thinking that there is nothing to learn from a study unless it brings microeconomic evidence at the most disaggregated level (the title of my current project: “Corruption, Institutions and Growth: Evidence from Grandma’s Bingo Circuit”). Of course, fads come and go because there are decreasing returns in many things. But the challenge for the existing randomized-experiment, micro-development literature runs deeper than that.
Part of the problem is a fundamental incompatibility between what makes this literature really valuable and the quirks of academic and publication recognition: top-quality journals publish research that has broad and general implications, whereas the real benefit of the “evidence from this—evidence from that” and “Look ma! No hands” studies lies in the implicit recognition that “results may vary” depending on local conditions. The proponents of this school are typically quick to argue that, as evidence accumulates, we shall be able to reach more robust and general conclusions. But as research accumulates, we are more often than not likely to end up with credible arguments and evidence on both sides of the debate (see the current health and development literature). And as village-by-village evidence trickles in, does your benevolent social planner promise to hold structural change at bay?
In this regard, why are country case studies not as valuable as randomized-experiment studies in development economics and policymaking? The rise of the randomized-experiment literature to such popularity in academia can only be explained by the perception that it lends itself to generalizations.
Specialization and costly, time-consuming investment in tools and methods are an integral part of academic research. But such specialization and vested interests should not blind us to the pros and cons of multiple approaches (even if some of those approaches have revealed their flaws and others are a hard sell). While the incentive system in academia makes this a difficult proposition, I do not think we can keep ignoring it in development economics and policymaking.