Some years ago I served on a committee at the Kennedy School charged with evaluating the quantity and quality of the School's published output. One of the exercises I remember doing was to correlate the scholarly output of individual faculty members with their op-ed and similar publications. My hypothesis was that if we were doing our job right, those who wrote the most op-eds would also be the most prolific scholars. Otherwise, what authority would the op-eds really have?
I will not reveal the results of that exercise, but it recently gave me an idea for a similar analysis of the economics blogosphere. The question is: what is the relationship between a blogger's popularity and his scholarly impact? (As far as I can tell, we are all males, strangely enough.) So I asked my assistant Dana Brudowsky to do some research.
We obtained rankings of blog popularity from Aaron Schiff's site, which are in turn based on Technorati data. We based our rankings of scholarship on Google citation data, using Publish or Perish, a software tool that retrieves Google Scholar citation data and computes a number of indices from it. We used the simplest of these, the total number of citations to an economist's entire body of work, to rank economist bloggers by scholarly impact. These citation totals range from a high of 35,835 (in the case of Gary Becker) to a low of 3. (Yes, I am aware of all the problems with this... Believe me, that is not how we make promotion decisions at Harvard, even though we certainly look at the numbers.)
We limited our search to economists who have a Ph.D. and who appear to have a university affiliation. When a blog has multiple authors, we gave each author the same blog rank (but, of course, different scholarship ranks based on Google citation data). Obviously, we could not include blogs that are anonymous. By the end of the day, we had identified about 100 blogging economists with footprints in Google Scholar.
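For concreteness, here is a minimal sketch (in Python) of how a dataset along these lines could be assembled. The file names and column names are hypothetical stand-ins, not the ones we actually used; the point is simply that co-authors of a group blog share one blog rank while each keeps an individual scholarship rank.

```python
import pandas as pd

# Hypothetical inputs (names are illustrative, not the actual files):
#   blog_ranks.csv   -- one row per blog: blog, blog_rank (1 = most popular,
#                       from Aaron Schiff's Technorati-based list)
#   author_cites.csv -- one row per blogging economist: author, blog,
#                       citations (total Google Scholar citations,
#                       as reported by Publish or Perish)
blogs = pd.read_csv("blog_ranks.csv")
authors = pd.read_csv("author_cites.csv")

# Merging on the blog name gives every co-author of a multi-author blog
# the same blog rank, while each author keeps his or her own citation count.
df = authors.merge(blogs, on="blog", how="inner")

# Scholarship rank: 1 = most-cited economist in the sample.
df["scholar_rank"] = df["citations"].rank(ascending=False)
```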
And the result? Going into this, my expectation was that blog popularity and scholarship would have little (or perhaps even a negative) correlation. After all, the skills of a blogger (writing quickly and well, working for short-term results, spending a lot of time reading and digesting others' work) are not necessarily those that a scholar who wants long-term impact needs to have. Plus, there is the time spent on the blog--which does mean less time for research. Remember the Acemoglu response: I am too darn busy writing research papers... And one can certainly be an excellent and popular blogger--providing stimulating commentary on others' work--without having large scholarly output or high impact.
And yet the correlation between how well one does on bloggership and on scholarship turns out to be positive and highly statistically significant. The rank correlation between the two is 0.27, and it is significant at the 99 percent confidence level. Here is the scatter plot:
Wait a minute, you say, this does not look like a very strong positive relationship--and you are right. Even though the correlation is statistically significant, its magnitude is not that impressive. In fact, if one excludes the top 10 scholars, the correlation is no longer statistically significant at conventional levels. Here is the data for those who want to play with it, or check for (and report) errors.
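For readers who do want to play with the data, here is a rough sketch of the correlation exercise itself, using the same hypothetical column names as in the earlier sketch (blog_rank with 1 = most popular, citations = total Google Scholar citations) and assuming the merged table has been saved as econ_bloggers.csv.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical merged file: one row per blogging economist, with columns
# blog_rank (1 = most popular) and citations (total Google Scholar citations).
df = pd.read_csv("econ_bloggers.csv")

# Put both variables on the same footing: rank 1 = most-cited scholar,
# so a positive Spearman coefficient means popular bloggers also tend
# to be high-impact scholars.
df["scholar_rank"] = df["citations"].rank(ascending=False)

rho, p_value = spearmanr(df["blog_rank"], df["scholar_rank"])
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.4f})")

# Robustness check mentioned in the text: drop the ten most-cited
# economists and recompute the correlation.
trimmed = df.sort_values("citations", ascending=False).iloc[10:]
rho_t, p_t = spearmanr(trimmed["blog_rank"],
                       trimmed["citations"].rank(ascending=False))
print(f"Excluding the top 10 scholars: {rho_t:.2f} (p = {p_t:.4f})")
```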
One way to interpret the results is to say that high-impact scholarship appears to be a sufficient but not necessary condition for successful bloggership. Once one leaves out the very top scholars, there is very little relationship between scholarly impact and popularity as a blogger. (I should add, to cover myself against the statistics police, that these statements are conditional on having a blog.) Cyberspace creates its own pecking order.
And now to the real puzzle that this little research project left me with: why are there (apparently) no women economics bloggers?
UPDATE: OK, I think I have overlooked two women economist bloggers who fit the criteria: Lynne Kiesling and Chiara Lombardini-Riipinen. I am sure that I have overlooked many male ones as well (including Craig Newmark, who submitted a comment below). Newmark also points to some anomalies in the Google Scholar citation data. Some of these are readily explained. The number of papers appears inflated because Google Scholar treats different versions of the same paper as separate "publications," so a paper circulated under two working-paper series counts as two papers. On the other hand, I have no idea what "years" (which comes from Publish or Perish) refers to. In any case, I believe the citation data that I use is unaffected by either of these issues, since it is the sum of citations to individual "publications," with no overlap.
UPDATE2: See further clarification on Google Scholar and Publish or Perish here.