As economist Doug Williams explains in his study of law school mismatch:
“The mismatch hypothesis is based on the assumption that classroom instruction is directed to the median student. If this assumption is valid, students too far below the median may struggle to understand class discussions and to keep up with the pace of instruction. Consequently, mismatched students learn less and may even reduce their effort if they become discouraged, leading to even less human capital accumulation.” (p. 176)
Williams goes on to formalize, or mathematically model, this idea in the same article. Of course, as Williams notes, this is merely a hypothesis, true only to the extent that its assumptions hold up in practice. Research on this topic, as it applies to higher education, is difficult and limited, partly because colleges and universities rarely measure learning in a consistent way across students in different classes or at different schools where mismatch levels might vary.
So far as we know, the only attempt to measure mismatch through a controlled experiment occurred a few years ago in Kenya. A number of American economists, including Esther Duflo of MIT (who has received a MacArthur “genius” grant as well as the John Bates Clark Medal), have championed the idea of conducting controlled social experiments in less developed countries, where costs are lower and institutions often more flexible than in the United States. Often the experiments test strategies for alleviating poverty in distressed countries. In the mismatch experiment, Duflo and her collaborators administered tests to several thousand Kenyan schoolchildren and then allocated them to different types of classrooms. In Type 1, schoolchildren from across the ability distribution were mixed in the same class; in Type 2, schoolchildren were divided into two groups: the higher-scoring students were put in one group of classes (“Type 2A”) and the lower-scoring students were put in a second group of classes (“Type 2B”). Aside from their student makeup, the classes were kept as similar as possible; for example, teachers were assigned randomly to the various classes.
Near the end of the study period, students in these various classes were re-tested. The results were unambiguous: students in the Type 2 classes (both 2A and 2B) showed markedly more learning than students in the Type 1 classes. In other words, learning was greater in classes where student ability was more homogeneous. The fact that both the 2A and 2B students did so well suggests that mismatch affects students at both ends of the preparation distribution; that is, one can be “positively mismatched” if one’s preparation level is well above that of one’s classmates, presumably because one isn’t optimally challenged.
American bar exams offer one of the few opportunities to study learning mismatch, because law students across the whole spectrum of American law schools must generally take a bar exam to become licensed attorneys, and the bar exams purport to measure what bar-takers have learned in a variety of specific law school courses. The principal available data source linking law school records to bar exam outcomes is the Bar Passage Study, conducted by the LSAC in the 1990s. Building on Sander’s earlier work, Williams used these data to assess learning mismatch in law schools. Applying a variety of distinct tests, Williams found strong and consistent support for the learning mismatch hypothesis.
Although there has been intense debate about the general issue of law school mismatch (a topic that merits a separate discussion), Williams’s findings on learning mismatch have not been meaningfully disputed. His article received careful peer review and was published in the summer of 2013 by the Journal of Empirical Legal Studies, perhaps the leading journal in its field.