Partisan Gerrymandering and Political Science

Recent years have seen a tremendous surge of public interest in partisan gerrymandering, including robust reform efforts and multiple high-profile court cases. Political scientists have played an important role in this debate, reaching an unusually high level of public engagement. Yet this public-facing period has to some extent obscured promising avenues for future research within the discipline. I review the history of political science and redistricting and describe how research on this topic has been shaped by the newfound interest. The goals of the law differ from those of political science, so research that focuses squarely on the former often misses opportunities to advance the latter. I lay out the contours of this difference and then suggest reframing the existing metrics of partisan gerrymandering to make them useful for more traditionally scientific questions. Finally, I offer some ideas about what those future questions might look like when reframed in this way.

"[N]ew technologies may produce new methods of analysis that make more evident the precise nature of the burdens gerrymanders impose on the representational rights of voters and parties ..."


INTRODUCTION
Partisan gerrymandering, broadly considered, is the practice of deliberately and unfairly manipulating district lines to favor one party over others. Once the province of savvy politicians and dedicated technicians, in recent years the topic has appeared in popular publications and occasionally hit the front pages of major newspapers. A series of court cases have attempted to place constraints on the practice and have gained some traction, and though the US Supreme Court recently dealt these cases a blow at the federal level [Rucho v. Common Cause (2019)], state efforts continue. Reform movements have also sought to limit gerrymandering through independent commissions or a priori rules.
The topic falls squarely in the wheelhouse of political science, so this policy conversation has been a great opportunity for the discipline to raise its profile. But the role of political science has been deeper still, with policy makers inviting political scientists to offer solutions. For example, members of the Supreme Court have generally agreed that partisan gerrymandering exists, but they have had trouble articulating a consistent definition. Supreme Court Justice Anthony Kennedy's vague reference to "new technologies" in the quotation above seemed to place responsibility in the lap of scholars and other deep experts to define the problem for the court, and since Kennedy was the swing justice on the issue for many years, his requests had major consequences.
This combination of a policy window and an open invitation to contribute mobilized political science (and a few other disciplines) to offer a range of suggestions. These contributions have been focused on the main question that has concerned the courts: How do we define a partisan gerrymander in a way that is fair, clear, fits within the dictates of existing law, and avoids upending too many redistricting plans across the country? Playing this role has been a positive development for political science, but it has also elided the important ways in which the goals of political science differ from those of the law. The strong focus on legal requirements has obscured a rich potential for more strictly political science work in the area of partisan gerrymandering.
In this article, I reconsider the recent work on partisan gerrymandering from the perspective of empirical political science. I propose a narrower view of the core concept of interest-which I call partisan advantage-and explain how most of the existing metrics are either different operationalizations of that core concept, hypotheses about the circumstances when that concept should matter, or explanations for what drives observed patterns. The objective is not to disparage the legal work or to suggest it would be wrong to respond to a call for such work in the future. Instead, I want to explore how existing scholarship can speak to the questions of measurement and relationships that are at the core of most political science research, and so promote opportunities for new work.
I begin by describing the long shared history of political science and districting. I then offer an account of more recent developments that have turned the discipline sharply toward legal questions in recent years. I distinguish between the requirements of the legal community and the typical goals of political science, and demonstrate how the work in political science has become preoccupied with the legal questions. Finally, I offer a reframing of this work, and finish with some suggestions for fruitful areas of future research.

POLITICAL SCIENCE AND REDISTRICTING
The topic of redistricting first became popular in political science as an outgrowth of the revolutionary US Supreme Court redistricting cases of the 1960s [most notably Baker v. Carr (1962), Wesberry v. Sanders (1964), and Reynolds v. Sims (1964)]. These cases established the principle of "one person, one vote," which required at least approximate population equality across districts to ensure equality of representation. The immediate consequence was a mid-decade redistricting in virtually every state in the nation. These extraordinary developments prompted new political science work to describe redistricting in general and to use the mid-decade redistricting as an explanation for other phenomena (Erikson 1972, Tufte 1973).

The one person-one vote cases prompted more than a single mid-decade redraw; they fundamentally altered the role of redistricting in American political life. In the 70 years prior to the decisions, redistricting had all but vanished as a point of partisan contention. Many states had stopped redrawing their districts altogether (Engstrom 2013). The one person-one vote decisions suddenly mandated that districts be redrawn with each new census to reflect the population shifts that had occurred. This ensured at least the opportunity for a fight over the lines every 10 years. The result was a sharp increase in redistricting conflict (Cox & Katz 2002).
Redistricting had never before been examined with such sophisticated empirical tools; political science had been in its disciplinary infancy when the topic was last an important part of American politics. What followed was a flurry of research on the causes, mechanics, and consequences of redistricting. Partisan dynamics were naturally a major theme of this research. This body of work included both theoretical explorations of measurement (Grofman 1983, Gudgin & Taylor 1979, Niemi & Deegan 1978) and empirical descriptions of the partisan effects of redistricting decisions (Abramowitz 1983; Basehart & Comer 1991; Born 1985; Brace et al. 1987; Cain 1984, 1985; Campagna 1991; Campagna & Grofman 1990; Niemi & Jackman 1991). The conclusion was quite consistent: Partisan gerrymandering either was a minor factor in American elections or actually had the opposite of its intended result (Born 1985, Gelman & King 1994a).
These meager effects gradually turned the research community away from the topic of gerrymandering. It did not help that the Supreme Court seemed unable or unwilling to intervene to police the practice, leading many to declare the legal cause of action effectively dead (Hasen 2004, Stephanopoulos & McGhee 2015). The most innovative and often-cited study on partisan gerrymandering during this period concluded that observable partisan advantage might just as easily be a product of underlying political geography, thus reinforcing the status quo understanding (Chen & Rodden 2013). Scholars became more interested in the effects of redistricting on competition, incumbent protection, and polarization (Brunell & Grofman 2005, Buchler 2005, Carson & Crespin 2004, Cox & Katz 2002, Desposato & Petrocik 2003, Hetherington et al. 2003, McCarty et al. 2009). Because redistricting commissions were often advanced as a way to end the conflict of interest of incumbents drawing their own districts, reform proposals implied a positive effect on competition that scholars could test (Abramowitz et al. 2006, McDonald 2006). Redistricting was even useful for causal identification of otherwise unrelated phenomena (Ansolabehere et al. 2001, Fraga 2016). In short, research on redistricting continued, but the study of partisan gerrymandering waned.

THE "GREAT GERRYMANDER"
The outcome of the 2010 redistricting cycle badly upset this status quo. Republicans won unified control of 12 more state governments in the 2010 election. Republican legislators in key states, including Michigan, Wisconsin, and North Carolina, openly sought to expand their advantage with this newfound power. Even in states where party control did not change, such as Ohio, Pennsylvania, and (for the Democrats) Maryland, the majority party showed a new willingness to use redistricting to press its advantage. One commentator called it "The Great Gerrymander of 2012" (Wang 2013).
This effort provoked a substantial backlash that is still working its way through the political and legal system. Politically, it placed Democrats much more firmly on the side of reforms like independent commissions. 1 Legally, it prompted a number of lawsuits to test if a legal standard for gerrymandering was truly dead. Initially, the most important of these lawsuits came out of Wisconsin; this case, Gill v. Whitford (2018), became the first successful partisan gerrymandering case at the district court level. But it was not the last of these suits. The backlash has prompted an unprecedented political science engagement in the legal and reform worlds. The new demand for academic solutions is not abstract or hypothetical; in many cases, advocates have specifically requested new academic work. The good-government organization Common Cause has gone so far as to run a competition for gerrymandering solutions to invite academics to get more involved. 2 The growing interest has drawn contributors from other fields as well, including economics, biophysics, and mathematics. All share the common goal of directly contributing to the broader legal and reform efforts that are afoot. This highly engaged research has taken three general approaches to measuring gerrymandering: balances of wasted votes, counterfactuals, and context setting.

The Balance of Wasted Votes
One approach to measuring gerrymandering is to compare the votes each party "wastes"-that is, votes that do not contribute directly to a victory. These metrics tend to suggest that a party gains a larger advantage as the discrepancy between its seat share and vote share grows. The metric in this group that has arguably received the most attention and scrutiny is the efficiency gap (EG) (McGhee 2014, Stephanopoulos & McGhee 2015). The EG defines wasted votes as all the votes cast for a party in a district that the party loses, as well as the votes cast in excess of 50% in a district that the party wins. The EG then takes the difference between the two parties' wasted votes and divides this difference by the total votes cast in the election. The algebra of the EG implies that a fair redistricting plan will give the majority party a winner's bonus of seats beyond simple proportionality: While a party should receive 50% of the seats for 50% of the vote, it should receive an extra 2% of the seats for every additional 1% of the vote beyond that point.

1 The first bill passed by the new Democratic majority in the US House of Representatives in 2019 was HR 1, a grab bag of reforms that included a mandate for independent commissions to draw congressional lines. Likewise, the party's current leader on redistricting, former Attorney General Eric Holder, has taken a firm public position against gerrymandering, including when proposed or enacted by Democratic majorities (Corasaniti 2018).

2 At the time of this writing, the competition has been run three times and produced a number of papers, including some cited in this article (https://www.commoncause.org/our-work/gerrymandering-andrepresentation/gerrymandering-redistricting/partisan-gerrymandering-writing-competition/#).
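As a concrete illustration, the EG's arithmetic can be sketched in a few lines of Python. The function name and the district vote totals below are hypothetical, not drawn from any case discussed here; wasted votes follow the definition above (all losing votes, plus winning votes in excess of 50%).

```python
def efficiency_gap(district_results):
    """Efficiency gap from per-district two-party vote counts.

    district_results: list of (votes_a, votes_b) tuples, one per district.
    Positive values indicate an advantage for party A (it wastes fewer votes).
    """
    wasted_a = wasted_b = total = 0.0
    for votes_a, votes_b in district_results:
        district_total = votes_a + votes_b
        total += district_total
        threshold = district_total / 2  # votes needed to win the district
        if votes_a > votes_b:
            wasted_a += votes_a - threshold  # winner's surplus beyond 50%
            wasted_b += votes_b              # every losing vote is wasted
        else:
            wasted_b += votes_b - threshold
            wasted_a += votes_a
    return (wasted_b - wasted_a) / total

# A symmetric plan wastes votes equally, so the gap is zero. A plan that
# cracks party B into two narrow losses and packs it into one blowout win
# produces a large gap in party A's favor.
balanced = efficiency_gap([(75, 25), (25, 75)])          # -> 0.0
skewed = efficiency_gap([(55, 45), (55, 45), (20, 80)])  # -> 0.3
```

Under equal turnout this calculation reduces to the seats-votes rule in the text: a plan with an EG of zero gives a party with vote share V the seat share S = 0.5 + 2(V - 0.5).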
The EG was the first metric to take this general approach and became known through its use in the Whitford case, but alternatives have since been proposed. Many of these alternatives address perceived problems with the EG itself. Chambers et al. (2017) argue from certain principles of economics that simple proportionality makes more sense, without the winner's bonus: a party's vote share should just match its seat share. Nagle (2015) makes a similar argument but from a different direction: He modifies the EG by dividing each party's wasted votes by the total votes cast for that party in an attempt at a more voter-centric version. (Dividing a party's wasted votes by the total votes it has received gets at the probability that a vote by one of that party's supporters will be wasted.) This voter-centric EG also implies that proportionality is optimal, though any deviations from proportionality receive a nonlinear penalty, as opposed to the simple linear version offered by Chambers et al. (2017). Finally, declination (Warrington 2018) approaches the same general notion of wasted votes by leveraging abnormalities in a plot of district vote shares, abnormalities that would suggest that the redistricting party took an unusual interest in which party won each seat. The declination is highly correlated with the other wasted-votes measures but nonetheless produces specific results that can differ from them.
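Nagle's voter-centric modification can be sketched in the same style; the helper names here are hypothetical. Each party's wasted votes are divided by its own vote total rather than by all votes cast, approximating the probability that one of its supporters casts a wasted vote.

```python
def wasted_votes(district_results):
    """Per-party wasted votes: all losing votes plus the winner's surplus."""
    wasted_a = wasted_b = 0.0
    for a, b in district_results:
        need = (a + b) / 2
        if a > b:
            wasted_a += a - need
            wasted_b += b
        else:
            wasted_b += b - need
            wasted_a += a
    return wasted_a, wasted_b

def voter_centric_eg(district_results):
    """Nagle-style variant: each party's wasted votes divided by its own
    vote total. Positive values favor party A, as with the standard EG."""
    wa, wb = wasted_votes(district_results)
    total_a = sum(a for a, _ in district_results)
    total_b = sum(b for _, b in district_results)
    return wb / total_b - wa / total_a
```

Because the denominators are the party totals rather than all votes cast, deviations from proportionality are penalized nonlinearly, as the text notes.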

Counterfactuals
The second general approach to the problem tries to avoid identifying any particular relationship between votes and seats as especially fair. Instead, it identifies fairness for a specific outcome or set of outcomes and explores how a redistricting plan would perform under those conditions. The most common of these measures is symmetry (Gelman & King 1994b, King 1989. Under symmetry, a redistricting plan is fair if parties with equal vote shares receive the same seat share. Parties can have equal vote shares in one of two ways. First, if both parties have exactly half the vote, each should hold half the seats as well. Second, if one party has vote share V and seat share S, then if the other party were to receive vote share V it should also receive seat share S. For example, if the majority party received 60% of the seats for 55% of the vote, the opposition should receive the same 60% of seats under a perfect inversion where it claimed 55% of the vote instead. If the majority received 70% of seats for 55% of the vote, the opposition should also get 70% in the counterfactual, or 63% for 63%, 51% for 51%, etc. Neither the size nor direction of the seats-votes discrepancy matters-only that the other party obtain the same advantage or disadvantage if the tables are turned. Apart from the special case where both parties receive exactly 50% of the vote, the outcomes necessary to calculate symmetry never occur in the same election. So in almost all cases, 3 symmetry is a counterfactual, usually conducted by shifting the vote shares of all districts by some uniform (or almost uniform) amount. Taking the example above, for a vote share of 0.55, the outcomes in every district would be shifted down 0.1 to make the mean outcome across all districts 0.45, and all the seats that changed hands as a result would be used to calculate a new seat share.
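The uniform-swing counterfactual behind symmetry can be made concrete with a short sketch. The function names and district shares are hypothetical, and ties at exactly 50% are ignored for simplicity.

```python
def seats_after_swing(district_shares, target_mean):
    """Shift every district's party-A vote share uniformly so the mean
    equals target_mean, then return party A's resulting seat share."""
    swing = target_mean - sum(district_shares) / len(district_shares)
    shifted = [s + swing for s in district_shares]
    return sum(1 for s in shifted if s > 0.5) / len(district_shares)

def symmetry_gap(district_shares):
    """Partisan symmetry at the observed vote share V: compare party A's
    actual seat share with what party B would get at the same V."""
    v = sum(district_shares) / len(district_shares)
    seats_a = sum(1 for s in district_shares if s > 0.5) / len(district_shares)
    # Party B holds vote share v when party A is swung down to 1 - v.
    seats_b = 1 - seats_after_swing(district_shares, 1 - v)
    return seats_a - seats_b

# Party A wins 2 of 3 seats on about 43% of the vote, while party B would
# win only 1 of 3 on the same vote share: an asymmetric plan.
gap = symmetry_gap([0.55, 0.55, 0.20])  # roughly 0.33, favoring party A
```

A perfectly symmetric plan, such as one district at 0.75 and one at 0.25, returns a gap of zero.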
The other common metric in this approach is the difference between the median of the distribution of district outcomes and the mean (the mean-median difference, or MMD) (Krasno et al. 2016). A party is disadvantaged when its mean vote share is larger than its median vote share.
On the surface, it appears that this metric is much simpler than symmetry and does not require any counterfactual. However, the MMD's logic still requires an implicit counterfactual (McGhee 2017). The MMD identifies the extra vote share beyond 50% needed for a party to control exactly half the seats (in a fair system, a party should need exactly half the votes to claim half the seats). Thus, like symmetry, the MMD imagines a competitive system (in this case, where each party holds half the seats) and explores the fairness of the outcomes under those conditions. In fact, the MMD is highly correlated with symmetry and under some conditions is simply a linear transformation of it. 4
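The MMD itself is nearly a one-liner. A minimal sketch, with hypothetical district shares and the sign convention used above (positive values mean the party is disadvantaged):

```python
from statistics import mean, median

def mean_median_difference(district_shares):
    """Mean minus median of party A's district vote shares. A positive
    value means party A's median district runs behind its overall
    support, so A needs more than 50% of the vote to win half the seats."""
    return mean(district_shares) - median(district_shares)

# Negative values indicate party A's opponents are packed, advantaging A:
mean_median_difference([0.75, 0.50, 0.25])  # symmetric plan -> 0.0
mean_median_difference([0.55, 0.55, 0.20])  # party A advantaged, about -0.12
```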

Setting Context
The final family of fairness metrics tries to place a party's seat share into some context, and deems a plan unfair if it falls well outside what this context would suggest. Early versions just compared outcomes under a given redistricting plan to similar outcomes under the previous plan (Born 1985, Cain 1985, Kousser 1996). More recent work has suggested placing a plan in the context of other plans drawn at the same time in other places, either by simply comparing the plan at issue to a broader distribution of plans that have actually been drawn or by simulating a plan constructed from randomly selected districts in other jurisdictions (Goedert 2014, Wang 2016). The most popular approach leverages massive improvements in computing power to draw a wide range of alternative plans in the same jurisdiction as the plan in question (Chen & Rodden 2013; Chikina et al. 2017; Krasno et al. 2016; Tam Cho et al. 2015; B. Fifield, M. Higgins, K. Imai, A. Tarr, unpublished manuscript). These other plans-which are always random except for a few constraints such as compactness and contiguity-offer a sense of what might have been drawn absent the intent of the redistricting authority.
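The logic of the simulation approach can be illustrated with a deliberately simplified sketch. Real methods enforce contiguity, compactness, and population equality; this toy version, with hypothetical names and vote totals, draws random equal-size partitions of precincts and asks where an enacted plan falls in the resulting distribution of seats.

```python
import random

def seats_for_plan(plan, precinct_votes):
    """Count party-A seats under a plan (a list of precinct-index lists)."""
    seats = 0
    for district in plan:
        votes_a = sum(precinct_votes[p][0] for p in district)
        votes_b = sum(precinct_votes[p][1] for p in district)
        seats += votes_a > votes_b
    return seats

def random_ensemble(n_precincts, n_districts, n_plans, seed=0):
    """Toy ensemble: random equal-size partitions of precincts into
    districts, standing in for the constrained samplers used in practice."""
    rng = random.Random(seed)
    indices = list(range(n_precincts))
    size = n_precincts // n_districts
    plans = []
    for _ in range(n_plans):
        rng.shuffle(indices)
        plans.append([indices[i * size:(i + 1) * size]
                      for i in range(n_districts)])
    return plans

# Hypothetical state: 6 party-A precincts, 6 party-B precincts. The enacted
# plan cracks party B across three districts and packs it into one, so
# party A wins 3 of 4 seats with only half the votes.
precincts = [(600, 400)] * 6 + [(400, 600)] * 6
enacted = [[0, 1, 6], [2, 3, 7], [4, 5, 8], [9, 10, 11]]
ensemble = random_ensemble(12, 4, 1000)
sim_seats = [seats_for_plan(p, precincts) for p in ensemble]
share_below = sum(s < seats_for_plan(enacted, precincts)
                  for s in sim_seats) / len(sim_seats)
```

If the enacted plan's seat count sits far out in the tail of the simulated distribution, the constraints alone cannot explain the outcome.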

POLITICAL SCIENCE AND THE LAW
The core ideas behind most of this work predate the 2010 redistricting cycle and the legal reaction to it. Nonetheless, the renewed visibility of redistricting has turned the debate over these metrics toward the legal and policy ramifications. Even among political scientists, work has increasingly appeared in the pages of law reviews or hybrid journals such as the Election Law Journal, and has focused on the goals of the law rather than those of political science. Political science and the law can often be quite compatible, but they have different objectives.
Legal standards for deciding cases are a complex blend of considerations. First are highly abstract normative concerns. What might a fair decision look like, and what are the principles from which this idea of fairness is derived? Second, what is the legal basis for the decision? Any decision must be justified by some legal text or case law. Usually, the relevant law is clear and the decision is a matter of the facts at hand, but ambiguous cases require applying old language to new situations or stitching together ideas from sometimes disparate laws, precedents, or dicta. Third, even after working through the normative concerns and finding a legal basis, the law must identify the boundary between cases that should be decided one way and those that should be decided the other. As such, the law spends considerable energy on hypotheticals and edge cases to locate this dividing line. Finally, because a decision must ultimately be accepted to be effective, the potential scope is also relevant. Upsetting too many vested interests at once can risk a backlash that undermines the legitimacy of the courts.
Together these objectives for the law can combine in strange and unexpected ways, often producing something more akin to a compromise than a clear victory for one side or the other. This is as it should be, since the law is an inherently political effort that seeks to provide fair outcomes while holding a polity together.
Contrast these legal goals with those of a discipline like political science. Empirical political science, whether quantitative or qualitative, seeks to describe the state of the political world and the factors that drive it. While the law is worried about developing standards to decide cases, political science is concerned with accumulating knowledge. This steers its attention to two broad goals. The first is careful measurement. If a scientific debate is to help accumulate knowledge, those involved in the debate must, at a minimum, agree on what they are fighting about. The best measurement helps to establish this agreement. It defines concepts systematically (to avoid cherry picking information) and transparently (to ensure that everyone involved in the debate can identify any flaws and potentially replicate the same measure in the future).
The second broad goal of empirical political science is to identify robust relationships. Political science seeks to describe how the political world is organized and what drives political outcomes. At its best, it finds causal relationships, but even simple correlations can be informative where causality is difficult to establish. Central tendencies dominate when exploring these relationships. Deviations from these tendencies are relevant only if they undermine the scholar's preferred explanation or are so large as to overwhelm the main pattern itself. Otherwise, they can be treated as unexplained error: interesting for some future study, but ignorable for the present one.
Of course, this description does not perfectly encompass all political science work. Large areas of the discipline, especially in the subfield of political theory, are dedicated to normative questions, and such questions often implicitly motivate much empirical work as well. Likewise, strong empirical work can consist of nothing more than systematic and transparent measurement of a single concept, and carefully chosen case studies can reveal as much as broader studies of central tendencies. Neither are political science goals always in tension with the goals of the law. The law often needs information about the status quo to understand the nature of any normative concern and the potential scope of a decision. Systematic and transparent measurement provides a clarity that is helpful for crafting intelligible legal standards everyone can agree on.
Yet it is also easy to see how the approaches of the law and political science might clash. For example, it is nonsensical to speak of either the legal basis or the scope of impact for a political science analysis. (A political science argument might have a broader "scope" if it potentially affects a larger number of topics within the discipline, but that is a different sense from the legal one I use here.) While normative questions may be implicit in empirical work, they are rarely the main focus-but they are a major concern for the law. The law spends considerable time on unusual cases and strange hypotheticals in order to tease out boundaries, while political science often ignores or discounts such cases as outliers. More generally, the law cares about correctly classifying a wide range of cases, since it must be prepared to make a decision in every case that might come up. From a political science perspective there will always be cases that do not fit a broader pattern; their existence does not necessarily say anything about the pattern itself.
Neither field takes the "correct" approach. The differences flow from the objectives of the two fields and are generally in service to them. But when the fields engage with one another, as they often do, the distinctions become clearer and the goals of political science necessarily become subordinate to the goals of the law. This engagement has never been stronger than in the past few years. The legal basis, boundary cases, and scope of impact for one metric or the other have loomed especially large. Even much of the political science work has turned strongly toward these sorts of questions. Advocates and critics of the EG have advanced arguments that would look more at home in a legal brief than in a typical work of empirical political science. Supporters have noted that the metric does not require counterfactuals and supports a winner's bonus consistent with most existing plans (Stephanopoulos & McGhee 2015). When they use the EG to define a gerrymander, they are also careful to set boundaries that are both relatively clear and limited in scope. All of these positions respond to concerns raised by Supreme Court justices.
Critics have attacked the EG on similar legalistic grounds. A common refrain is the EG's variability over time and by method of calculation, which raises questions of setting boundaries and containing the scope of a ruling (Best et al. 2017, Chambers et al. 2017). The voter-centric version of the EG reflects a normative concern for the harm caused to individuals rather than to parties (Nagle 2015). In one case, a team of scholars questioned the EG on more conventional measurement grounds, but when their logic pointed to simple proportionality as an alternative, they dismissed this result as impractical because the courts would never permit it as a standard (Chambers et al. 2017). The notion that the metric might be used for any other purpose was not considered.
Other metrics have been subjected to similar scrutiny. Just as the EG has been lauded for avoiding counterfactuals, symmetry and the MMD have been criticized for requiring them (Stephanopoulos & McGhee 2015). Symmetry and the MMD have also been praised for their temporal stability-which helps assure courts that the metric is a sound basis for a decision (Best et al. 2017)-and for supporting the normative principle of majority rule (McGann et al. 2015). Sampling has received some criticism for emphasizing only the intent to produce a certain partisan outcome while failing to identify when that partisan outcome is harmful-a criticism that draws on the need for normative principles and boundary lines in legal standards (McGhee 2017). Some political scientists have stepped entirely outside this debate, noting that a gerrymander is so multidimensional that it requires a range of metrics in combination (Cain et al. 2017). This "everything bagel" approach makes the most sense if one is trying to help courts decide a wide range of individual cases, each case with its own special circumstances.
Thus, much of the recent work on redistricting by scholars outside the legal field has nonetheless focused on legal concerns. This is entirely sensible; scholars should be prepared to engage with important contemporary questions. But there is no reason why scholars cannot both engage with these legal topics and move forward on more conventional scientific questions at the same time. It is to this topic that I turn next.

REFRAMING THE LITERATURE
As partisan gerrymandering has moved toward the center of political and policy debates, the political science work on the topic has come to be seen primarily through that lens. But it is also possible to think about partisan gerrymandering the way it was originally viewed in the discipline: as a concept to be measured systematically and analyzed in relation to other factors in the political world. In the remainder of this review, I suggest a fruitful frame for such work and identify questions that flow naturally from it and have yet to be answered.
To identify useful political science questions, the first step is to shift the variable of interest from the partisan gerrymandering of the legal cases to a more focused notion of partisan advantage. Partisan advantage, as I use the term here, means the gains that a party obtains from a system of aggregating votes. Gerrymandering, by contrast, is a complex mixture of both partisan advantage and notions of fairness that help establish when partisan advantage has gone too far. To my knowledge, the political science literature has never made this distinction, even before the topic became so central to the public debate. This is not to suggest that fairness is unimportant or even beyond the purview of political science-just that it should be set aside for the purposes of measurement. Measures must be as precise as possible; the more they combine multiple concepts, the less useful they are for analysis. Gerrymandering in its very essence is a combination of more than one idea: It represents both the gains that a party has obtained and how we are supposed to feel about those gains. Mixing the two makes perfect sense for a legal standard, which must have normative force and help the courts choose a side. But for political science research, it can muddy the analysis and even lose sight of the concept of interest entirely.
What is that concept of interest? What are the gains from partisan advantage? When speaking of a system of vote aggregation, a party obtains greater advantage when it wins more seats for the same vote share than it would under some other system. The system affords the party in question more power without any change in its public support.
Note first that partisan advantage has meaning only relative to other possible systems. The relationship between seats and votes is a property of a given system, and not an advantage per se; the advantage is identified by comparing such seats-votes properties across different electoral regimes. Note also that seats must at some point be part of the discussion. Seats determine power in a legislative democracy, and since parties exist to obtain power and control the government, partisan advantage must always include some outcome related to seats if it is to be relevant.
This definition of the concept of interest does not limit the available metrics or the scope of analysis so much as change how we think about them. For instance, all of the metrics that capture disparities between vote share and seat share are potentially useful measures of partisan advantage; each to some degree reflects the decoupling of power and public support that is at the heart of the concept. In fact, once we shift to a partisan advantage frame, the available pool of metrics is arguably too limited. Comparative politics has long had metrics for disproportionality that differ in some ways from the gerrymandering metrics considered here but still seek to capture a similar seats-votes discrepancy (Taylor et al. 2014). Likewise, viewing all these metrics as operationalizations of partisan advantage shifts the grounds for the measurement debate. The debate currently centers on specific cases or hypotheticals where the results of one metric seem out of sync with some intuitive notion of fairness. If we instead view these metrics as a way of understanding the properties of partisan advantage, then outlier cases matter less than commonalities. How do these metrics correlate with each other and differ in their marginal effects? Do they all tap a common dimension, or do some differ from others? What are the consequences of each for relational measures such as first differences or regression coefficients? In each of these cases, it is the central tendencies that matter, even if there are exceptions to the patterns. They are the foundational measurement questions for any solid empirical work.
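Questions like these are straightforward to operationalize. The sketch below uses hypothetical plans and the equal-turnout shortcut formula for the EG, EG = (S - 1/2) - 2(V - 1/2), to compute the correlation between the EG and the MMD across a handful of plans; with the sign conventions chosen here (EG positive when party A is advantaged, MMD positive when it is disadvantaged) the two move in opposite directions.

```python
from statistics import mean, median

def eg(shares):
    """Equal-turnout shortcut for the efficiency gap, positive favoring A."""
    v = mean(shares)
    s = sum(x > 0.5 for x in shares) / len(shares)
    return (s - 0.5) - 2 * (v - 0.5)

def mmd(shares):
    """Mean-median difference, positive when party A is disadvantaged."""
    return mean(shares) - median(shares)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical plans, each a list of party-A district vote shares.
plans = [
    [0.55, 0.55, 0.55, 0.20, 0.20],
    [0.65, 0.60, 0.45, 0.40, 0.35],
    [0.52, 0.51, 0.51, 0.30, 0.30],
    [0.70, 0.70, 0.48, 0.47, 0.30],
]
r = pearson([eg(p) for p in plans], [mmd(p) for p in plans])
```

With real plans in place of these toy examples, the same few lines answer whether two metrics tap a common dimension or diverge.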
One can also reconsider symmetry and the MMD from this perspective of partisan advantage. These two metrics are really interactions between partisan advantage and the level of support that a party receives. They highlight the level of advantage that a party obtains under certain special conditions. It is easy to imagine questions where such an interaction would be useful. If one wants to test whether parties are more concerned about maintaining their majority than their current seat share per se, the MMD and the version of symmetry with a 50% counterfactual would both be quite useful as outcome variables. One can also branch out from these particular scenarios. For example, one might hypothesize that parties seek to protect their current seat share even if their majority is not seriously at risk, in which case the counterfactuals might include lower majority party vote shares that do not encompass values of 50% or below. Or perhaps parties seek to maximize their expected seat share across a range of plausible vote shares, whether higher or lower. These are all reasonable hypotheses to test, and existing counterfactual metrics can be useful for this purpose. 5

Finally, the context-setting methods take on a very different cast when partisan advantage is the concept of interest. In the world of litigation and reform, methods like sampling are often promoted as fairness itself. In a political science literature focused on partisan advantage, these methods become explanations rather than outcomes in their own right. When a computer draws millions of maps according to a strict set of constraints, the partisan outcomes of those maps have been purged of every other systematic influence. Any difference between the actual map and this distribution of alternatives therefore controls for these systematic influences, leaving a far smaller number of variables to explain the result. The same is true of any of the contextual methods. The goal is always to account for the constraints that might force a particular partisan outcome and so serve as an alternative explanation for observed results.
In short, to look at partisan redistricting through the lens of empirical description and analysis is to refocus the discussion on a particular outcome, partisan advantage, and to turn existing metrics into realizations of, hypotheses about, or explanations for that concept. The goal is not to replace more public-facing work that engages with the law or policy reform but to suggest a parallel track that operates as a more explicitly empirical approach to the same general topic. In the final section of this article, I build on this discussion to suggest some future topics of research that might prove promising.

FUTURE RESEARCH
Of the many goals that the literature on partisan redistricting might take up, three seem especially useful: (a) identifying the circumstances that encourage a party to pursue a greater advantage through redistricting, (b) elucidating the downstream consequences of partisan advantage, and (c) understanding the partisan dynamics of single-member districting as an electoral system.
The first of these goals looks to move beyond the existing understanding that unified partisan control of the redistricting process leads to a greater partisan advantage for one party. There appears to be some variation in the size of this effect, even anecdotally. Certain states, such as Ohio and North Carolina, are consistent centers of intense partisan conflict over redistricting, while most other states produce plans with modest seats-votes discrepancies. Why are some state legislatures more aggressive about pressing their advantage? What is the difference between the districts that a state legislature draws for itself and the ones that it draws for Congress? Is a state more likely to press its advantage in one than in the other? Do parties want a larger advantage at all costs, do they prefer security against contrary partisan tides, or do they want some combination of the two? For this line of research, it would be helpful to greatly expand the number and range of simulated plans: to run full simulations for more states and countries over a longer period of time. Chen & Cottrell (2016) have done this work for the US Congress in one redistricting cycle, but adding other cycles and expanding the analysis to state legislatures would begin to isolate different explanations for the patterns in their data.
Since a number of other countries also use single-member districts entirely or in part, testing the simulation approach on an international basis would provide still more context. In many cases, the data may be difficult or even impossible to get, but the goal should remain to expand the scope of the method as much as possible.
The second goal is to explore the consequences of a large partisan advantage on downstream outcomes of interest. Warshaw & Stephanopoulos (2019) and Caughey et al. (2017) have done some of this work, looking at the consequences for parties as institutions and for broad political and policy outcomes, respectively. Many more outcomes can be explored, including but not limited to more detailed policy areas (such as public finance, the welfare state, and social policy), estimates of public opinion on topics like trust and approval, the quality of the correspondence between public opinion and representation, and the degree of polarization among elites. Time can also be a conditioning variable here: Are the consequences immediate or delayed, and are they temporary or enduring? Finally, as with all areas of research, the challenge of identifying causality can also generate additional studies and creative research designs.
The third path for future research is somewhat harder to define but no less important. The literature on gerrymandering often treats single-member districts as sui generis. In one sense, this is clearly wrong, since single-member districts are but one realization of a set of variables that together describe the properties of electoral systems (Taagepera & Shugart 1989). At the same time, the fact that only one person is elected from each district makes geography and personality more important factors in single-member district elections than they would be in a system with large multimember districts or no districts at all. In the comparative literature, single-member districts are often presented as more or less naturally producing the winner's bonus (though it is described as an empirical rather than theoretical phenomenon) (Lijphart 1994), and some have even suggested a precise seats-votes relationship for such systems that amounts to a "law" (Kendall & Stuart 1950). Strong geographic and personal effects make the reality far more variable across both time and jurisdictions. Moreover, our understanding of exactly why there is a winner's bonus, how large we should expect it to be, and when it should be stronger or weaker is still in its infancy.
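The Kendall & Stuart relationship is the familiar "cube law," under which the ratio of seats won by two parties is the cube of the ratio of their votes: S_A/S_B = (V_A/V_B)^3. A minimal sketch of the winner's bonus it implies (the function name is mine):

```python
def cube_law_seats(v):
    """Expected seat share for a party with two-party vote share v,
    under the cube law: S / (1 - S) = (v / (1 - v)) ** 3."""
    r = (v / (1.0 - v)) ** 3
    return r / (1.0 + r)

# The implied winner's bonus: 55% of the vote yields roughly 64.6% of seats.
print(round(cube_law_seats(0.55), 3))  # 0.646
```

The comparative literature treats this as an empirical regularity rather than a theorem, which is precisely why the variability across time and jurisdictions noted above matters.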
The simulations literature exemplifies this limited understanding. Despite the sophistication of the sampling algorithms, analysis of them remains relatively descriptive. Simulation studies generally note the distribution of seats-votes pairs for a given state and how probable the actual districting plan is according to that distribution. What affects those probabilities? Greater geographic clustering by one party is certainly a major factor in producing natural partisan advantage (Chen & Rodden 2013), but how much is necessary? What is the precise relationship between partisan intermingling in geographic space and the observed seats-votes relationship? Is there a tipping point? Single-member district systems also allow some discretion in the way the districts are drawn (indeed, this is the whole point of gerrymandering). How much does compactness constrain the outcome? Would different compactness levels produce different results? How does the compactness of the patterns of human settlement themselves contribute to the role that compactness constraints play in the outcome?
Although the simulations approach leaves many questions unanswered, it may also offer a way to answer them. Analysts ordinarily take a state's political geography as given. With a set of simulated maps, it should be possible instead to populate the state with many different distributions of voters and so explore answers to some of the above questions. Mathematicians can and will work on analytical solutions to the problem [in fact, at least one political science study has already offered a definition of the "natural" seats-votes relationship for different numbers of parties and geographic distributions of voters (Calvo & Rodden 2015)]. But the set of possible districts is so incredibly large, even in a small state, that approximate relationships and conditional solutions will likely be the order of the day (Altman & McDonald 2010). D. Cottrell (unpublished manuscript) has done some of this work using fully simulated data (i.e., data that were never derived from a real state at any point), and while starting with a real state adds verisimilitude, it is not essential. What is helpful is the application of sampling to capture the almost limitless flexibility of the line-drawing process and its consequences for seats-votes relationships. Such work could establish the role of geography more firmly and so integrate single-member districts more completely into the broader comparative discussion.
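The logic of this thought experiment can be sketched in a deliberately toy form. The following illustration is my own and ignores contiguity, compactness, and every real legal constraint: it samples random equal-size assignments of precincts to districts and records the seat outcome of each, producing the kind of baseline distribution against which an enacted plan, or a different political geography, could be compared.

```python
import random

def seats_won(plan, lean):
    """Districts won by party A, given a precinct-to-district assignment
    and each precinct's party-A vote share (equal precinct turnout assumed)."""
    n_districts = max(plan) + 1
    totals = [0.0] * n_districts
    counts = [0] * n_districts
    for d, share in zip(plan, lean):
        totals[d] += share
        counts[d] += 1
    return sum(1 for t, c in zip(totals, counts) if t / c > 0.5)

def toy_ensemble(lean, n_districts, n_plans, rng):
    """Seat counts across random equal-size plans (contiguity ignored)."""
    n = len(lean)
    assignment = [i * n_districts // n for i in range(n)]  # equal-size districts
    results = []
    for _ in range(n_plans):
        rng.shuffle(assignment)
        results.append(seats_won(assignment, lean))
    return results

rng = random.Random(0)
# A hypothetical state: 60 precincts with party-A support spread around 50%.
lean = [rng.uniform(0.3, 0.7) for _ in range(60)]
dist = toy_ensemble(lean, n_districts=6, n_plans=500, rng=rng)
print(min(dist), max(dist))  # range of seat outcomes across sampled plans
```

Replacing the uniform draws of `lean` with more or less geographically clustered distributions is exactly the kind of manipulation that could map partisan intermingling onto seats-votes outcomes.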

CONCLUSIONS
Partisan gerrymandering is having a moment in the spotlight. Court cases and reform movements have put this once-arcane subject at the center of American political debate more prominently than at any point since, arguably, the late nineteenth century. Political science's close relationship to the policy questions raised in this conversation has drawn the discipline into a level of public engagement that goes well beyond the norm. This has been an important and healthy development for the discipline. The cost of this extraordinary engagement, if there is one, is that redistricting as a topic of scientific inquiry has been on something of a hiatus over the last few years. Measurement debates about partisanship in redistricting have focused on the relevance of the measures to the needs of the extant court cases.
Thinking of these metrics in terms of their utility for political science questions changes our concept of these measures and their applications. The core concept of interest is the extra seat share a party receives in one electoral system relative to the seat share it might receive with the same vote share (or range of vote shares) in a different system. Many of the existing metrics tap into that concept directly and are highly correlated with each other. Others explore partisan advantage under certain conditions and so are useful for testing specific hypotheses about when and where partisan advantage might matter. Still others test explanations for partisan advantage and so are useful as controls in an analysis of causes and correlates.
Reframing the discussion this way also helps highlight potential topics of analysis for future political science research. There are many opportunities for new analysis of the causes and consequences of partisan advantage in redistricting. There is also a real opportunity to seat single-member districting more firmly in the broader comparative literature on electoral systems. Sophisticated methods of exploring the space of drawable districting plans seem to offer an especially promising avenue for exploration.
Political science has benefited from its intense engagement on partisan gerrymandering and redistricting policy overall. There remain parallel opportunities to expand our knowledge of these systems even as the discipline offers solutions to normative and legal questions. In fact, the normative and legal work that has been done in many cases enables further research to develop our understanding. It is an exciting time to be doing this type of work.

DISCLOSURE STATEMENT
The author is not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.

ACKNOWLEDGMENTS
The author thanks Vincent Hutchings for his comments on an earlier version of this review.