[This is the second entry in an unplanned (we’ll say serendipitous) blog series at the intersection of pop culture and organizational theory. For my earlier musings on the Dilbert Principle, click here.]
My interest in actually watching baseball (and, really, most sports) has dipped quite a bit in the last few years, but I do still enjoy reading about the long-term effects of the so-called “Moneyball revolution,” which began in baseball and has since spread into seemingly every area of life. For the unfamiliar, Moneyball was a 2003 Michael Lewis book which told the story of how the cash-strapped Oakland Athletics turned to sabermetrics (advanced baseball statistics) to find undervalued and overlooked players who could help them score runs and win games.
The idea of “moneyball” is often mischaracterized as the mere use of statistics (traditional or newly developed) to evaluate or rank people and institutions, but the Athletics’ approach was built on a handful of specific philosophical assumptions which go beyond simple number crunching. First, the Oakland approach assumed that traditional measures of baseball success (wins, losses, RBI, and so on) did not always accurately reflect a given player’s contributions to the team goal, which was winning games by scoring more runs than the other team. Second, because other teams would continue to prioritize players who succeeded in those traditional categories, Oakland could cheaply acquire players who might not put up gaudy numbers in the old measures but who, advanced statistics revealed, nevertheless contributed to the key goal of winning games by scoring more runs than the other team. And third, this relentless pursuit of value, as defined by player contributions to the overall goal of winning games by scoring more runs than the other team, would make it possible for teams with smaller budgets to punch above their weight and compete with better-funded rivals.
Given the popularity of Lewis’s book and the subsequent movie adaptation, it is perhaps no surprise that “moneyball” has become shorthand for any kind of statistical approach to evaluation, ranging from basketball (such as the Houston Rockets’ three-pointer-heavy philosophy of “Moreyball”) to higher education itself. It is this latter realm that I am most interested in, and so it is to a quick overview of “Moneyball in academia” pieces that we now turn.
Readers may not know that I briefly attended law school in the fall of 2012, and while I quickly decided that that was not the career path for me, I did first encounter the Moneyball-meets-academia phenomenon that semester with the publication of an article co-written by John Yoo (yes, that John Yoo) and James C. Phillips. Titled “The Cite Stuff: Inventing a Better Law Faculty Relevance Measure,” the paper aims to quantify the contributions of law school professors to their institutions (and, in turn, to rank law schools on that basis) by determining which faculty members’ publications are most frequently cited (hence, relevant). Though the original paper has been taken down from SSRN, a write-up at Above the Law provides the following quotation explaining the paper’s purpose:
“…Finally, this study proposes an alternative faculty ranking system focusing on the percentage of a law school faculty that are “All-Stars” (ranked in the top 10 in citations per year in an area of law). This alternative ranking system improves upon some of the weaknesses of previous faculty quality ranking methodologies and argues that citation-based studies do measure something important – relevance of scholarship.” (1)
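To make the quoted metric concrete, here is a minimal sketch of how a share-of-“All-Stars” calculation might work. To be clear, this is my own toy illustration, not the paper’s actual methodology: the function name, the faculty, the fields, and the rankings below are all invented.

```python
# Hypothetical sketch of an "All-Stars" faculty metric: the share of a
# faculty whose annual citation counts rank in the top 10 of their field.
# All names, fields, and rankings below are invented for illustration.

def all_star_share(faculty, field_rankings, cutoff=10):
    """Fraction of `faculty` ranked in the top `cutoff` of their field.

    faculty: list of (name, field) tuples.
    field_rankings: dict mapping field -> list of names ordered by
                    citations per year, most-cited first.
    """
    stars = 0
    for name, field in faculty:
        top = field_rankings.get(field, [])[:cutoff]
        if name in top:
            stars += 1
    return stars / len(faculty) if faculty else 0.0

# Toy example: a three-person faculty, two of whom crack a top-10 list.
faculty = [("Prof. A", "contracts"), ("Prof. B", "torts"), ("Prof. C", "tax")]
rankings = {
    "contracts": ["Prof. A"] + [f"X{i}" for i in range(15)],
    "torts": [f"Y{i}" for i in range(12)],           # Prof. B unranked
    "tax": [f"Z{i}" for i in range(5)] + ["Prof. C"],
}
print(round(all_star_share(faculty, rankings), 2))  # 2 of 3 -> 0.67
```

A school could then be ranked against its peers on this single fraction, which is precisely what makes the metric both appealingly simple and, as the criticism below suggests, easy to game by how you draw the sample.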
The paper was rightly criticized for limiting its sample to the USNWR top 16 schools, meaning that it did not actually unearth any hidden gems buried at supposedly less prestigious schools, but the idea of attempting to quantify faculty scholarly contributions has remained popular. Actually, Yoo and Phillips were not the first scholars inspired by Moneyball; a 2004 review essay by Paul L. Caron and Rafael Gely in the Texas Law Review covered some similar ground by exploring the faculty data of law school deans before and during their rise to the position. (2) Academics from other departments have gotten in on the act, too. A group of researchers proposed a version of “academetrics” to help smaller schools compete for top job candidates in the small field of recreation, park, and leisure studies (3), while another team offered a measure of “network centrality” to evaluate scholarly connectedness in terms of citations and co-authorship. (4)
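The “network centrality” idea can likewise be sketched in a few lines. The study cited above uses a more sophisticated measure; what follows is only simple degree centrality on an invented co-authorship graph, to show the basic intuition that connectedness is something you can compute from publication records.

```python
# Minimal sketch of scholarly "network centrality": degree centrality on
# a co-authorship graph. The cited study's measure is more sophisticated;
# the authors and edges below are invented for illustration.
from collections import defaultdict

def degree_centrality(coauthorships):
    """Map each author to their number of distinct co-authors,
    normalized by the number of other authors in the network."""
    neighbors = defaultdict(set)
    for a, b in coauthorships:
        neighbors[a].add(b)
        neighbors[b].add(a)
    n = len(neighbors)
    return {author: len(peers) / (n - 1) for author, peers in neighbors.items()}

# Toy network: D has co-authored with everyone, E with only one person.
edges = [("D", "E"), ("D", "F"), ("D", "G"), ("F", "G")]
scores = degree_centrality(edges)
print(max(scores, key=scores.get))  # "D" is the most central author
```

The same adjacency structure could be built from citations instead of co-authorships; the interesting (and contestable) choices are which links count and how to weight them.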
While I have concerns about these and similar approaches which measure faculty and job candidates using scholarly output as the sole or primary variable, I do think colleges and universities have an obligation to themselves, to students, to faculty/candidates, and to other stakeholders to do their due diligence in 1) defining what qualities make for a successful department member in their specific contexts; 2) thinking critically about which variables best reflect those qualities; and 3) creating pools of candidates who are qualified based on those metrics, whether or not those candidates fit the widely held mental image/CV of a traditional academic.
All metrics will have their issues (student teaching evaluations, whatever their utility in allowing students to voice legitimate complaints or concerns, are notoriously problematic), but that doesn’t mean that all metrics are equally valid or that departments should simply default to a prior estimation of institutional prestige as a measure of quality (even though this seemingly happens). If we as scholars are truly as innovative and creative in answering difficult research problems as we want to believe we are, then who better to tackle these issues?
(1) Quoted in “The 50 Most Relevant Law Professors,” Above the Law, September 14, 2012, https://abovethelaw.com/2012/09/the-50-most-relevant-law-professors/.
(2) Paul L. Caron and Rafael Gely, “Book Review Essay: What Law Schools Can Learn from Billy Beane and the Oakland Athletics,” Texas Law Review 82 (2004): 1483-1554, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=501402.
(3) Dan Dustin et al., “Academic Moneyball,” Schole: A Journal of Leisure Studies and Recreation Education 29, no. 2 (2014): 43-52, https://www.nrpa.org/globalassets/journals/schole/2014/schole-volume-29-number-2-pp-43-52.pdf.
(4) Erik Brynjolfsson and John Silberholz, “‘Moneyball’ for Professors?” MIT Sloan Management Review, December 14, 2016, https://sloanreview.mit.edu/article/moneyball-for-professors/.