Political Misinformation

Misinformation occurs when people hold incorrect factual beliefs and do so confidently. The problem, first conceptualized by Kuklinski and colleagues in 2000, plagues political systems and is exceedingly difficult to correct. In this review, we assess the empirical literature on political misinformation in the United States and consider what scholars have learned since the publication of that early study. We conclude that research on this topic has developed unevenly. Over time, scholars have elaborated on the psychological origins of political misinformation, and this work has cumulated in a productive way. By contrast, although there is an extensive body of research on how to correct misinformation, this literature is less coherent in its recommendations. Finally, a nascent line of research asks whether people’s reports of their factual beliefs are genuine or are instead a form of partisan cheerleading. Overall, scholarly research on political misinformation illustrates the many challenges inherent in representative democracy.


INTRODUCTION
Because political knowledge is widely viewed as a foundation for representative democracy (Delli Carpini & Keeter 1996), the low level and uneven distribution of this resource has raised serious normative questions (Althaus 2003, Gilens 2001). Yet an even greater concern is misinformation, which occurs when people hold incorrect factual beliefs and do so confidently (Kuklinski et al. 2000, p. 792). Hochschild & Einstein (2015, p. 14) refer to this phenomenon as "dangerous," and Flynn et al. (2017, p. 127) observe that misinformation has "distorted people's views about some of the most consequential issues in politics, science and medicine." The problem, initially articulated by Kuklinski et al. (2000), does not seem to have abated; if anything, Bode & Vraga (2015, p. 621) write, the American political system currently "abounds" with misinformation.
At the time of their writing, Kuklinski et al. (2000, p. 812) observed that the "misinformation landscape" needed exploration, and the authors suggested several areas for future research. This review evaluates what scholars have learned since the publication of that study, with an emphasis on research conducted in the United States. We begin by defining political misinformation and distinguishing it from related concepts such as rumors and conspiracy belief. In the second section, on causes, we discuss the psychological origins of misinformation, focusing on the various motivations that influence the reasoning process and that can cause people to become misinformed. Next, we canvass the literature on how to correct misinformation and, more fundamentally, whether it can be corrected. Our third section summarizes this body of work, which spans the disciplines of political science, communications, and psychology. As research on misinformation has evolved, researchers have posed new questions, the most intriguing of which pertain to measurement. The fourth section considers an important and unresolved puzzle raised by this nascent body of work: whether people's reports of their own factual beliefs are genuine or are instead a form of partisan cheap talk. As we note there, the answer to this question has important implications for the study of misinformation as well as the interpretation of survey data more generally. In the final section of our review, we highlight key insights that have accrued and note where progress has yet to be made.

WHAT IS POLITICAL MISINFORMATION?
Kuklinski et al. (2000, p. 792) stated that a person is misinformed when he or she "firmly [holds] the wrong information." In a study of Illinois residents, the authors found that people had erroneous beliefs about welfare policy, such as the size of the typical welfare payment and the characteristics of people receiving assistance. Despite their inaccuracy, respondents reported high confidence that their beliefs were right. According to the authors, to be misinformed is different from being uninformed, a state in which a person has no factual beliefs about the topic under inquiry.

This distinction has significant normative implications insofar as the misinformed base their political opinions on inaccurate beliefs. When large segments of the public are misinformed in the same direction, shared misperceptions can systematically bias collective opinion (Kuklinski et al. 2000), undermining the idea that "errors" in individual-level preferences cancel out in the aggregate (e.g., Page & Shapiro 1992). Even more worrisome is the prospect that misinformed people take political action on the basis of incorrect information, becoming what Hochschild & Einstein (2015) call the "active misinformed." (As originally defined, "misinformation" was a characterization of mass public opinion; as scholars became interested in the origins of this problem, the term also came to be used to describe information emanating from the mass media and elite debate.)

Misinformation is different from pathologies such as the belief in rumors and conspiratorial thinking, both of which have become subjects of scholarly attention (e.g., Shin et al. 2017, Weeks & Garrett 2014). Berinsky (2017, pp. 242-43) defines rumors as "statements that lack specific standards of evidence" but that gain credibility "through widespread social transmission" (also see Sunstein 2009). Rumors are not "warranted beliefs" in the sense of being supported by scientific or expert opinion, but as Flynn et al. (2017, p. 129) point out, they occasionally turn out to be true. This characteristic distinguishes rumors from misinformation, at least as the latter term is defined by Kuklinski et al. (2000), where the "mis" prefix indicates that the information is unambiguously false. Yet the differences between these concepts are easily blurred. For example, the claim that the Affordable Care Act (ACA) included a provision for death panels has been characterized as both a rumor (Berinsky 2017) and an example of misinformation (Nyhan 2010). In other studies, it remains unclear whether rumors are a type of misinformation or a vehicle that spreads it (see, e.g., Berinsky 2017, p. 243).
Conspiratorial thinking is distinct from both misinformation and rumor belief. Conspiracy theories explain political or historical events through references to "the machinations of powerful people" (Sunstein & Vermeule 2009, p. 205; see also Muirhead & Rosenblum 2019). In addition, conspiratorial beliefs are rooted in stable psychological predispositions (Brotherton et al. 2013, Oliver & Wood 2014). Misinformation, by contrast, can originate within the individual (e.g., from the desire to hold beliefs that are consistent with one's worldview; Kuklinski et al. 2000) as well as from environmental-level factors such as elite rhetoric or media coverage (Gershkoff & Kushner 2005). Again, these distinctions are imprecise, and one recent study characterized conspiracy theories as "a species of a broader genus of political misinformation" (Miller et al. 2016, p. 825).
We are not the first to note the definitional murkiness surrounding these concepts (see Flynn et al. 2017, p. 128) and consider this area ripe for further theoretical development.
There is a long tradition of studying false beliefs in other fields, particularly psychology, where the phenomenon is labeled the continued influence effect (CIE) (Johnson & Seifert 1994, Wilkes & Leatherbarrow 1988). There are valuable lessons to draw from this literature, which we highlight in later sections. However, CIE studies stand apart in two ways: Research in this area focuses on the basic cognitive mechanisms, e.g., memory processes, that make false information difficult to correct; and the subject matter in CIE experiments tends to be nonpolitical. This literature has been usefully reviewed by Lewandowsky et al. (2012, 2017). For the purposes of this review, we restrict our attention to misinformation as the concept was defined by Kuklinski et al. (2000): incorrect, but confidently held, political beliefs.

WHAT CAUSES POLITICAL MISINFORMATION?
In considering the causes of misinformation, it is useful to remember that "[c]itizens bring to politics the same psychological architecture they bring to all of individual and social life" (Leeper & Slothuus 2014, p. 138). Thus, research from psychology provides a foundation for theorizing about this phenomenon and its boundary conditions. A crucial insight from psychology is that there is a general human tendency to strive toward particular end states or goals, and these motivations influence all facets of the reasoning process, from seeking out and evaluating evidence to forming impressions (Kunda 1990). (Motivations should not be equated with outcomes; as Leeper & Slothuus (2014, p. 139) observe, motivations "manifest in strategies that individuals-consciously or unconsciously-employ in an effort to obtain those desired end states.") Motives may come in many forms, but the contrast between accuracy and directional motives has been a fruitful dichotomy.
Accuracy motives indicate a desire to make the correct decision. Directional motives, by contrast, reflect the desire to arrive at a specific conclusion, e.g., one that maintains consistency with one's attitudes. Of course, both goals can be held by the same person but be more or less influential depending on the decision-making context.
An important aspect of political information processing in particular is that preexisting attachments to a political party or ideological worldview impart strong directional goals.
Directional motives contribute to the problem of misinformation insofar as they lead to biases in how people obtain and evaluate information about the political world (e.g., the well-known confirmation and disconfirmation biases; see discussion of Lodge & Taber 2013 below). As an illustration, a person with a strong preexisting opposition to immigration may hold inaccurate beliefs that reinforce his or her policy preferences; e.g., he or she might overestimate the foreign-born population (Hopkins et al. 2018). Another example comes from Prasad et al.'s (2009) study of the 9/11 terrorist attacks. When Republican participants who believed Saddam Hussein was involved in the 9/11 attacks were presented with evidence that there was no link, these individuals defended their beliefs by counterarguing the evidence and bringing other supportive claims to mind. Other contexts may elevate different goals, such as the desire to be accurate, and in that situation a person will engage in alternative strategies for processing political information, e.g., seeking out a diversity of viewpoints (Druckman 2012).

Lodge & Taber (2013) offer an account of the "competitive tension" between accuracy and directional goals, concluding that the latter often dominate. This occurs, the authors elaborate in their John Q. Public (JQP) model, because all the objects we encounter as part of our daily life (e.g., candidates, issues, and groups) are infused with affect, with positive feelings for liking and negative feelings for disliking. Moreover, these feelings arise within just a few milliseconds of exposure to an object, early in the stream of processing and outside our conscious control (Lodge & Taber 2013, p. 19; also see Murphy & Zajonc 1993). But because people know how they feel about an object, all subsequent information processing, from the retrieval of considerations in memory to the evaluation of facts and arguments, is biased in the direction of initial affect. These "hot cognitions" (also called "affective tallies" in Lodge & Taber's earlier writings) provide the directional motive that can bias subsequent conscious reasoning through selective exposure, attention, and judgment processes (Lodge & Taber 2013, p. 150). People motivated by directional goals "assimilate congruent evidence uncritically [the confirmation bias] but vigorously counterargue incongruent evidence [the disconfirmation bias]" (Lodge & Taber 2013, p. 151; also see Ditto & Lopez 1992). From this perspective, a person can become misinformed and cling stubbornly to inaccurate beliefs even in the face of new (i.e., correct) information.
Yet the reasoning process is not always dominated by directional motives, and there is a broad literature showing that citizens can be responsive to new information (e.g., Ahler & Sood 2018, Bullock 2011, Druckman et al. 2013, Feldman et al. 2015, Hill 2017, Redlawsk et al. 2010). The obvious resolution is to recognize that a person's processing goals (e.g., directional versus accuracy) change across situations. However, researchers are only beginning to understand how these competing motivations vary across relevant political contexts, such as media environments or geographic units. One can look to the literature and find evidence that perceptual biases are magnified by the political context (Jerit & Barabas 2006, 2012; Shapiro & Bloch-Elkon 2008), and also evidence that the context plays an essential role in mitigating such biases (Feldman et al. 2015, Lavine et al. 2012, Parker-Stephen 2013). In several of the studies cited above, individuals' processing goals are unobserved, and researchers simply assume that one motive or the other was operating (e.g., a directional motive in the case of Jerit & Barabas 2012). In other work, scholars have attempted to induce particular processing goals with experimental manipulations (e.g., Bolsen et al. 2014).

We agree with Flynn et al. (2017, p. 137) that "it would be helpful to construct and validate a scale (or scales) that measures individual-level differences in the strength of underlying accuracy and/or directional motivations." Several possibilities are suggested by studies that measure cognitive style. One example comes from Arceneaux & Vander Wielen's (2017) concept of reflection: "Individual differences in reflection should predict which partisans update in the direction of facts, even those that cast their party's leaders or policies in a negative light, and which partisans only update in the direction of facts that are congenial to their party" (Arceneaux & Vander Wielen 2017, p. 193). At this juncture, however, there is little agreement as to how to measure a person's inclination toward directional or accuracy reasoning. Previous researchers have focused on a range of concepts, such as "science curiosity" (Kahan et al. 2017a). Consequently, although there has been substantial theoretical progress in elaborating the individual-level mechanism underlying misinformation (e.g., having directional motives), we know much less, in an empirical sense, about how a person's processing goals relate to stable individual-level traits, as well as how those goals can be made salient by different political contexts. (Additionally, some scholars maintain that the conditions for "biased" reasoning, and pathologies like misinformation, are not clearly articulated; see Hill 2017. Research in this area draws on Bayes' theorem, which provides a systematic way to account for the relative influences of old beliefs and new information; see Bullock 2009, Druckman & McGrath 2019.)

Overall, directional goals are presumed to be the "default" when people process information about politics (e.g., Flynn et al. 2017, p. 134), and such consistency pressures are a key contributor to the problem of misinformation. Given the prevalence of misinformation across issues (Bode & Vraga 2015, p. 621; Flynn 2016), a large literature has been devoted to correcting this problem.
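The Bayesian benchmark mentioned in the parenthetical note above can be stated explicitly. As a minimal formal sketch (our notation; see Bullock 2009 and Druckman & McGrath 2019 for full treatments), for a binary claim C and a new piece of evidence E, an accuracy-motivated reasoner would update as

$$
\Pr(C \mid E) \;=\; \frac{\Pr(E \mid C)\,\Pr(C)}{\Pr(E \mid C)\,\Pr(C) + \Pr(E \mid \neg C)\,\Pr(\neg C)} .
$$

On this benchmark, the posterior moves away from the prior Pr(C) in proportion to the diagnosticity of the evidence. One way to represent directional motives within this framework is to let the perceived likelihoods depend on congeniality, e.g., discounting Pr(E | C) for identity-incongruent evidence, in which case the posterior stays close to the prior even after exposure to corrective information.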

OVERCOMING MISINFORMATION THROUGH CORRECTION
When people firmly hold beliefs that happen to be wrong, efforts to correct them will be met with resistance (Kuklinski et al. 2000). This aspect of misinformation has great normative and practical importance. Yet the empirical record is hard to characterize given the variation in how scholars have studied the topic. Describing this literature in a 2018 volume devoted to misinformation, communications scholar Brian Weeks wrote: "It is clear that corrections work in some circumstances but not others. What is not apparent is why or how corrections succeed or fail when one is attempting to challenge partisan-based claims. This is a critical question that must be answered" (Weeks 2018, p. 148; for meta-analyses, see Chan et al. 2017, Walter & Murphy 2018).

One conclusion is clear from existing theoretical accounts: Correction should be difficult because people cling to misinformation for both motivational and cognitive reasons (Ecker et al. 2014b). Misinformation has a motivational basis if it stems from a person's desire to arrive at a specific conclusion. Some of the most pernicious forms of misinformation are grounded in a person's standing political commitments (e.g., identification with a party or political ideology).
There also is a cognitive mechanism at work. In particular, misinformation may be part of a mental model (i.e., an explanation) for some incident or phenomenon (Swire & Ecker 2018, p. 197). Because the misinformation remains available in memory, it is automatically activated upon thinking about the object. This processing advantage is the basis for the CIE. Subsequent research in cognitive psychology has led to specific recommendations about how to correct misinformation. For example, providing an alternative factual account is particularly effective because a person can replace the debunked misinformation with the alternative explanation (Swire & Ecker 2018, p. 198; see Nyhan & Reifler 2015a for an application). Additionally, the more credible the source delivering the correction, the more likely it is to be believed (Guillory & Geraci 2013). Also, it is paramount that corrective efforts not boost the familiarity of misinformation, as in the case of simple retractions that (effectively) repeat the false claim, e.g., "Obama is not a Muslim" (Lewandowsky et al. 2012).
Despite a fairly advanced understanding of the mechanisms underlying misinformation, existing research has not established the standard for an effective correction. In some studies, corrective messages are evaluated based on how a respondent answers a subsequent factual question (e.g., was he or she more likely to provide the right answer after being corrected?), while in others the key outcome is an opinion item (e.g., did related attitudes change after the correction?). (Question format may matter as well: It seems plausible that the agree-disagree format makes directional goals more salient, but this claim has not been investigated.) Even this simple characterization belies the complexity of the existing literature.

In an effort to integrate these varied conclusions, we return to the insight that motivations shape all facets of the reasoning process. Just as a person's motives (directional versus accuracy) contribute to the problem of misinformation, processing goals also influence the success of varying corrective efforts. This perspective is helpful for theorizing about the conditions under which correction will be successful. Our discussion below focuses on two factors: issue type and the source of the corrective message.

The Role of Issue Type
The effectiveness of corrective efforts likely varies across political issues. On issues or topics that are closely connected to one's partisan/ideological identity, one will be motivated to process information in a way that preserves those attachments. Thus, misinformation regarding hot-button issues or prominent political figures will be difficult to correct, because accepting the correction threatens those very attachments.
Moreover, on issues that are the subject of frequent political debate (e.g., abortion, immigration) or highly salient at the time of a study (e.g., for research in the late 2000s, death panels or Obama's religion), misinformation has been encoded repeatedly as individuals seek out and favorably evaluate attitude/identity-congruent information (Ecker et al. 2014b, p. 301).
In contrast, corrections will be more successful when accepting the retraction does not threaten a person's worldview or a cherished attitude. As an illustration, one can compare the original "backfire" results reported by Nyhan & Reifler (2010) on President Bush's tax cuts and Iraq's weapons of mass destruction (two then-salient and highly partisan topics) to Young et al.'s (2018) successful correction on the Keystone Pipeline issue (a less polarizing subject). (Backfire effects occur when a person "reports a stronger belief in the original misconception after receiving a retraction"; Swire et al. 2017, p. 2.) Likewise, there is evidence of successful correction in situations where the false information can be interpreted as a singular event (Ecker & Ang 2019). In these instances, a person can correct misinformation regarding a "one-off" episode while adhering to an existing identity. Correction also is more apparent on topics where the false information is conceptually distal to a person's political ideology. For example, Wood & Porter (2019) report evidence for correction across dozens of political topics, but few of the false statements pertained to hot-button issues. Indeed, Wood & Porter (2019, pp. 160-61) acknowledge that on topics like drone use, the number of jobs in the solar industry, or Chicago's homicide rate, participants may not have been motivated to counterargue the retractions (also see Ecker & Ang 2019 for a critique). In contrast to earlier research that seemed to suggest that backfire effects were pervasive, Ecker et al. (2014b, p. 302) conclude that "backfire effects are an occasional and inadvertent consequence of an overzealous attempt to maintain an attitude and protect it against change." When misinformation is not central to a person's worldview, the intrinsic motivation to defend and adhere to it evaporates.
Notably, this claim receives empirical support in an experimental study (Ecker & Ang 2019) that manipulates how closely related the misinformation is to subjects' preexisting attitudes, the first such study of which we are aware. The authors manipulated whether the misinformation was a general assertion with direct implications for a person's political worldview or could be dismissed as a one-off event. As expected, participants were more likely to correct their beliefs when the misinformation was specific (i.e., a misstatement about the corrupt behavior of a single political party member) versus more general (i.e., a misstatement about the corrupt behavior of all members of a certain party). In the first study reported by Ecker & Ang (2019), when the misinformation was specific, references to the false information in ten open-ended inference questions dropped across the no-retraction and retraction conditions (e.g., from eight to roughly six mentions; see figure 1 in Ecker & Ang 2019). In the specific scenario, subjects' partisanship did not significantly affect the processing of the retraction. When the misinformation was general (and thus more attitude-relevant), a different pattern appeared. Party supporters readily heeded the corrective message exonerating their party (making about two fewer references to the misinformation across the no-retraction and retraction conditions; p < 0.002), while adherents of the opposing party moved in the opposite direction and made more references to the false information (p = 0.10).
These results help establish the conditions under which worldview affects the processing of corrections; however, Ecker & Ang's (2019) study involved hypothetical politicians, which may have made it easier for the researchers to correct study participants. Further clarity regarding the effectiveness of corrections would come from experimental studies that manipulate issue type more directly, for example, by including parallel conditions featuring politicized and nonpoliticized issues. At present, existing studies examine the effects of correction on polarizing issues (e.g., Weeks 2015) or less partisan topics (e.g., Young et al. 2018). It is less common to examine the effectiveness of a correction across different types of issues [Bode & Vraga (2015) provide a notable exception]. Issues also may vary in the degree to which they evoke different discrete emotions, with consequences for the correction process (Weeks 2015; also see Ecker et al. 2011), and there is suggestive evidence that backfire effects are related to information processing style (e.g., online versus memory-based processing; Peter & Koch 2016), which itself may vary systematically with issue type (Feldman 1995).

The Source of the Corrective Message
In addition to differences in the degree to which a topic triggers directional motives, features of the corrective message, in particular its source, are important for understanding the conditions under which correction will be successful. If the desire to believe a false claim is strong, a person may even be skeptical of high-quality sources, such as experts. More convincing, Berinsky (2017, p. 245) observes, is an unlikely, and thus unusually trustworthy, source, such as a "partisan politician speaking out against their own apparent interests." Berinsky explores this idea in the case of the ACA and the erroneous claim that the legislation mandated death panels.
The ACA was signed into law by President Obama in 2010, and it was largely opposed by Republican lawmakers. Thus, a Republican politician debunking death panels would be speaking against his or her party, lending the message additional credibility. In an experiment that varied the source of the correction, Berinsky (2017) found that a corrective message about ACA death panels is most effective at changing beliefs when it is attributed to a Republican compared to either a Democrat or a nonpartisan source. The results among the full sample are modest, but there is a clear pattern among a subsample of attentive respondents who were most likely to notice the source cues. In the full sample, 50% of people in an untreated control condition rejected the death panels rumor, whereas 58% did so after reading the rumor and a correction attributed to a Republican (p < 0.05). The 58% figure is higher than the corresponding percentages in the Democratic and nonpartisan correction conditions, but the difference between those conditions is not statistically significant. The strength of an "unexpected" Republican correction is observed more clearly among attentive respondents: 69% of attentive respondents rejected the death panels rumor after getting a Republican correction compared to 57% in the control group (p < 0.01) and 60% in the nonpartisan condition (p = 0.07).
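Comparisons like these are typically assessed with a two-proportion test. Below is a minimal sketch in Python of how the 58% versus 50% rejection rates above would be evaluated; the cell size of 400 respondents per condition is a hypothetical placeholder, not Berinsky's actual N.

```python
# Minimal sketch: one-tailed two-proportion z-test of the kind used to compare
# rumor rejection rates across experimental conditions. Percentages follow the
# text above (Berinsky 2017); the Ns of 400 per condition are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_prop_ztest(x_a, n_a, x_b, n_b):
    """z statistic and one-tailed p value for H1: proportion A > proportion B."""
    p_a, p_b = x_a / n_a, x_b / n_b
    pooled = (x_a + x_b) / (n_a + n_b)                      # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_a - p_b) / se
    return z, 1 - NormalDist().cdf(z)

# 58% of 400 rejected the rumor after a Republican-sourced correction,
# versus 50% of 400 in the untreated control (hypothetical cell sizes).
z, p = two_prop_ztest(232, 400, 200, 400)
print(f"z = {z:.2f}, one-tailed p = {p:.4f}")  # roughly z = 2.27, p = 0.012
```

With these hypothetical cell sizes, the difference clears the conventional 0.05 threshold, consistent with the significance levels reported in the study.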
A pair of studies by Emily Vraga and Leticia Bode illustrates how corrective messages that are explicitly apolitical can be highly effective. In one study, the researchers randomly assigned corrective messages related to genetically modified foods that were attributed to Facebook's "related stories" feature-making the source of the message an apolitical algorithm (Bode & Vraga 2015). There was a significant decline in misinformation for participants exposed to "related stories" that contained corrective information relative to conditions where the related stories confirmed the misinformation, contained a mixed message, or were unrelated. In contrast to the Berinsky (2017, p. 245) study, which illustrated how "the power of partisanship" could be used to combat misinformation, the absence of partisan intention was crucial in Bode & Vraga's findings. Participants were less likely to discount the correction because it was generated by an algorithmic process, which was not seen as openly partisan (Bode & Vraga 2015, p. 623).
(Vaccine misinformation was more difficult to correct, underscoring the importance of issue type.) An analogous mechanism is at work in a later article that explores the effect of "observational correction" (Vraga & Bode 2017). Using a similar design (i.e., a simulated social media platform), the authors show that people update their own attitudes after seeing another person corrected. The authors contend that this style of correction is typical of online settings and "may reduce barriers to correction that are likely to occur in more overtly politicized spaces" (Vraga & Bode 2017, p. 624). (Other studies examine how the unique format of social media contributes to the proliferation of misinformation and rumors; e.g., Rojecki & Meraz 2016, Shin et al. 2018.)

Historically, source credibility has been viewed as a function of a speaker's expertise and trustworthiness (Pornpitakpan 2004). When it comes to the correction of misinformation, there is accumulating evidence that trustworthiness is the more important of the two characteristics (Guillory & Geraci 2013). In the case of the Berinsky, Bode, and Vraga studies, the corrective messages were effective precisely because the threat to a person's ideological commitments was reduced or absent (for related evidence, see Benegal & Scruggs 2018).

Remaining Gaps in the Literature
Scholars have shown creativity in how they have investigated correction, but gaps remain in our understanding of this phenomenon. Here we highlight three ways in which decisions regarding research design can have significant consequences for the conclusions that scholars draw.
The first issue pertains to the decision whether to measure or to manipulate misinformation.
In one style of research design, misinformation is measured, and corrective information is supplied to a random subset of respondents. Attitudes are then assessed to see if the correction was effective (e.g., by comparing mean levels of the relevant opinion across conditions). In order to measure the phenomenon of interest, however, researchers often select issues where there is an expectation that people are chronically misinformed (i.e., there is a strong motivational component; see, e.g., Kuklinski et al. 2000, Prasad et al. 2009). In the second style of design, misinformation is manipulated, which is to say that subjects are exposed to misinformation, and corrective information is supplied to a random subset of the participants. However, these experiments often involve fictional candidates where the motivational mechanism is absent or weak (e.g., Cobb et al. 2013, Nyhan & Reifler 2015a, Thorson 2016), or the purpose of the study is to examine the effectiveness of a particular style of correction, which implies greater attention to "best practices" for counteracting misinformation (e.g., Berinsky 2017, Nyhan & Reifler 2015a). There is no right choice when it comes to measuring versus manipulating misinformation. Nevertheless, this design feature may be intrinsically related to the likelihood of correction (see Walter & Murphy 2018 for meta-analytic evidence).
Second, it remains hotly debated whether failed corrections, and especially backfire effects, are asymmetric across liberals and conservatives. Evidence from several previous studies suggests that conservatives are more prone than liberals to backfire effects (e.g., Hart & Nisbet 2012, Nyhan & Reifler 2010, Prasad et al. 2009, Zhou 2016). This pattern is consistent with findings regarding the distinctive personality characteristics of political conservatives (e.g., need for closure, avoidance of complexity, intolerance of ambiguity; Jost et al. 2003), which could in turn lead to disproportionate rates of backfire among people on the political right.
Yet other studies show that motivated cognition occurs across the whole ideological spectrum (e.g., Kahan et al. 2017b, Nisbet et al. 2015, Swire et al. 2017). Additional research is needed to settle the question of partisan asymmetries, and this is another area where researchers must be mindful to select issues in a balanced manner. Some of the clearest examples of misinformation involve topics on which the scientific consensus challenges the conservative worldview (e.g., climate change). Fewer studies have "actively sought evidence for attitude/backfire effects in left-leaning participants" (Ecker & Ang 2019, p. 244).
Third, and finally, it is notoriously difficult to identify the standard for a successful correction, particularly when the dependent variable is an attitude. More to the point, it is unclear how to interpret evidence that a corrective message did not work (i.e., attitudes remained unchanged following a corrective message). This pattern could occur because the corrective message was ineffective or because some other consideration (i.e., one not raised by the correction) was more closely tied to a person's opinion on that issue. We do not see an obvious solution to this problem because "one cannot easily define the 'sampling frame' of political facts" (Delli Carpini & Keeter 1993, p. 1181). In their early study, Kuklinski et al. (2000, p. 795) focused on six facts that were related to welfare and had been identified by social policy scholars as fundamental to that issue. Subsequent scholars have (understandably) taken a similar approach (e.g., Gilens 2001). But with growing evidence that people update their factual beliefs in response to new information, but then interpret this information in a way that leaves attitudes unchanged (Gaines et al. 2007; also see Bisgaard 2015, Gal & Rucker 2010, Hopkins et al. 2018, Khanna & Sood 2018, Nyhan et al. 2019, Thorson 2016), the normative status of factual beliefs becomes unclear.

In much of the research canvassed in our review, the implicit decision-making model is one in which people use their perceptions of the world to inform their preferences (i.e., beliefs lead to preferences). The preceding discussion suggests two complications to this simple model. First, empirical researchers often must make an assumption as to which beliefs relate to which preferences. Perhaps as a result, there are instances in which false beliefs are corrected but attitudes remain unchanged (e.g., Thorson 2016). Second, motivationally based misinformation implies a reversed process in which beliefs are a consequence rather than a cause of attitudes (i.e., preferences lead to beliefs).

Lessons from Psychology
Recognizing the difficulty of correction, cognitive psychologists have articulated the nub of the problem: "Invalidated information is not simply deleted from memory because memory does not work like a whiteboard, and retractions do not simply erase misinformation" (Swire & Ecker 2018, p. 199). It can be difficult, however, to apply recommendations from this field to the political context. One suggestion for reducing misinformation involves affirming a person's worldview prior to correction (e.g., Cook et al. 2015), the logic being that a person will be more likely to accede to the correction if their worldview has been bolstered. The idea is intuitive; however, the effectiveness of this technique in political contexts has been more equivocal.

Another recommendation revolves around the distinction between strategic and automatic memory processes (Swire & Ecker 2018). Misinformation effects are driven by automatic processes (e.g., fluency, hot cognition), but they can be short-circuited by the more effortful process of "strategic monitoring," which involves the retrieval of contextual details such as the retraction of a false claim or skepticism toward the source of misinformation (Ecker et al. 2010, 2014a). This distinction is referred to as the "dual process theory of misinformation" (Ecker et al. 2014a, p. 323; Swire & Ecker 2018, p. 199). While there is some evidence that the continued influence effect (CIE) can be reduced by explicitly warning people that they may be misled by a subsequent message (Ecker et al. 2010), there have been few attempts to study strategic retrieval monitoring outside the CIE paradigm and with explicitly political topics. A useful exemplar is Ecker et al.'s (2014a) study of misleading newspaper headlines. In that study, participants were exposed to headlines that were either consistent or inconsistent with the remainder of the story.
Those who recognized a conflict between a misleading headline and the rest of a fact-based news story deployed strategic processing resources toward correcting their initial impression (also see Clayton et al. 2019, Lewandowsky et al. 2005).

When drawing upon cognitive psychology, it is important to keep in mind that the CIE and the backfire effect are different phenomena. The CIE arises from cognitive mechanisms (e.g., fluency) that allow false information to influence the reasoning process even after that information has been discredited. Backfire involves the ironic strengthening of incorrect beliefs as a result of directional motives. Consequently, corrective techniques that are effective for reducing the CIE may be insufficient for counteracting the consistency motives that underlie political misinformation.

MEASUREMENT: CONFIDENTLY WRONG OR EXPRESSIVE RESPONDING?
It is one thing to be mistaken about some aspect of the political world; it is quite another to be incorrect but firmly believe that one is right. One of the most intriguing patterns found by Kuklinski et al. (2000) was the positive relationship between inaccuracy and confidence: The more inaccurate people's beliefs about welfare, the more certain they felt about being correct.
Despite the provocative nature of this early claim, there has been scant attention to confidence.
Numerous studies show that survey respondents provide incorrect answers to questions pertaining to economic conditions, the content and effects of specific policies, the size of minority groups, and other "social facts" (Hochschild 2001), but few of these studies explicitly measure confidence-in-knowledge (see Lee & Matsuo 2018 for discussion). In these instances, are people misinformed or merely ignorant?
In a notable exception, Pasek et al. (2015) find that people report only moderate levels of confidence across a wide range of facts. In the Pasek et al. (2015) study, respondents were asked about very specific topics, in particular, whether 18 different provisions were included in the ACA. One of the items asks whether the law "requires companies that make drugs to pay new fees to the federal government each year," while another inquires if the law "requires insurance companies to charge an additional fee of $1,000 a year to anyone who buys insurance from them and smokes cigarettes." [Table 1 in Pasek et al. (2015) shows the remaining items, which are similarly detailed.] To the extent that misinformation is driven by directional goals (see the section above titled "What Causes Political Misinformation?"), the questions of Pasek et al. (2015) seem an unlikely place to find evidence of this phenomenon.

Graham (2018) examines awareness and confidence across two dozen political topics, including general knowledge questions and items with greater partisan relevance. Approximately one-fifth to one-third of respondents who answered incorrectly in that study also were highly certain. More commonly, people who answered incorrectly were aware of their lack of knowledge (i.e., they were incorrect and reported low levels of certainty), even on facts expected to be inconvenient to their partisan predispositions. These two studies lead to a clear recommendation: Regular inclusion of confidence-in-knowledge measures would both improve the classification of these items (e.g., ignorance versus misinformation) and increase our ability to generalize across studies (see Graham 2018 for discussion). Indeed, one of Graham's (2018) most important findings was the degree to which Democrats and Republicans were misinformed about the same facts, a conclusion that could only emerge in a study that examined confidence-in-knowledge across an array of issues.
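This recommendation implies a simple joint coding of accuracy and confidence. The sketch below (in Python) illustrates the classification scheme implied by Kuklinski et al. (2000) and Graham (2018); the five-point confidence scale, the cutoff, and the category labels are our illustrative assumptions, not a standard from those studies.

```python
# Minimal sketch of an accuracy-by-confidence classification for survey items.
# The 5-point confidence scale, the cutoff of 4, and the labels are
# hypothetical choices for illustration.

def classify(correct: bool, confidence: int, cutoff: int = 4) -> str:
    """Classify one response from its accuracy and stated confidence (1-5)."""
    if correct:
        return "informed" if confidence >= cutoff else "correct but unsure"
    if confidence >= cutoff:
        return "misinformed"   # wrong answer, held confidently
    return "uninformed"        # wrong answer, and aware of not knowing

for correct, conf in [(True, 5), (True, 2), (False, 5), (False, 1)]:
    print(f"correct={correct}, confidence={conf} -> {classify(correct, conf)}")
```

Without the confidence item, the bottom two rows of this scheme are indistinguishable, which is precisely the classification problem noted above.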
Traditionally, confidence measures have been interpreted as an indication of whether respondents believe their answers (e.g., Kuklinski et al. 2000, p. 793). Lee & Matsuo (2018, p. 2, emphasis added) state that items measuring confidence are essential "to identify whether a response to a factual question is based on a genuine belief" as opposed to a guess (also see Flynn et al. 2017, p. 139). It is notable, then, that a flurry of new studies questions the sincerity of respondents' stated factual beliefs, particularly in areas where partisans have diverging perceptions of objective reality (e.g., Bullock et al. 2015, Prior et al. 2015). The phenomenon, referred to as "expressive responding," occurs when people "intentionally provide misinformation" for the purposes of reaffirming their partisan identity (Schaffner & Luks 2018, p. 136). To provide a simple illustration, an individual may be reluctant to acknowledge bad economic times when his or her party controls the presidency and thus deliberately understate the unemployment rate when asked about it in an opinion survey. In studies of this behavior (Bullock et al. 2015, Prior et al. 2015), researchers find that randomly assigned monetary incentives significantly reduce partisan differences in survey responding, leading to the conclusion that much of the apparent difference in factual beliefs is expressive rather than sincere.

The prevalence of expressive responding has important implications for research on misinformation. To the extent that expressive responding takes place, diverging misperceptions among partisans reflect the "joy of partisan 'cheerleading' rather than sincere differences in beliefs about the truth" (Bullock et al. 2015, p. 521). Misinformation is no longer a problem with serious consequences for representative democracy; rather, it is an artifact of the survey experience (for discussion, see Berinsky 2018, p. 212). Indeed, Bullock et al. (2015, p. 561) interpret their findings as "[calling] into question the common assumption that what people say in surveys reflects their beliefs." It also follows that if survey responses reflect partisan cheerleading rather than sincere differences in beliefs, solutions to this problem should focus on survey instrumentation (e.g., giving respondents an incentive to accurately report their beliefs) rather than corrective efforts.
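The logic of these incentive experiments can be made concrete with a small simulation. A minimal sketch in Python, with entirely simulated numbers; the effect sizes, and the assumption that incentives fully suppress cheerleading, are ours, not estimates from the studies.

```python
# Minimal sketch of the incentive-experiment logic in Bullock et al. (2015) and
# Prior et al. (2015): compare the partisan gap in stated beliefs with and
# without accuracy incentives. All numbers below are simulated placeholders.
import random
random.seed(0)

def stated_unemployment(party, incentivized, truth=6.0):
    """Hypothetical stated unemployment rate under a Democratic presidency."""
    slant = -1.0 if party == "D" else 1.0        # cheerleading pushes answers apart
    expressive = 0.0 if incentivized else slant  # incentives suppress cheap talk
    return truth + expressive + random.gauss(0, 0.5)

def partisan_gap(incentivized, n=500):
    dems = [stated_unemployment("D", incentivized) for _ in range(n)]
    reps = [stated_unemployment("R", incentivized) for _ in range(n)]
    return sum(reps) / n - sum(dems) / n

print(f"gap without incentives: {partisan_gap(False):.2f} points")
print(f"gap with incentives:    {partisan_gap(True):.2f} points")
```

A shrinking gap under incentives is the expressive-responding signature; a gap that persists would instead suggest sincere belief differences.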
At present, important theoretical and empirical challenges surround the concept of expressive responding. We first consider the theoretical issues. Expressive responding is thought to occur when there is a difference between the response formed by a respondent (i.e., their "true" belief) and the one they report in a survey (Berinsky 2018, p. 212). This two-step process runs counter to research showing that the typical person devotes little attention to politics and likely "[constructs] responses to most factual survey questions from the top of their head" (Flynn et al. 2017, p. 138; also see Zaller 1992). On this view, monetary incentives might simply alter the mix of considerations that come to mind (Flynn et al. 2017, pp. 138-39). An alternative account, provided by Kahan (2016, p. 8), is that incentives transform respondents from "identity protectors to scientific knowledge acquirers, and activate the corresponding shift in information-processing styles appropriate to those roles" (also see Bullock et al. 2015, pp. 527, 559). This distinction matters if the monetized response does not relate to actual political behavior (e.g., vote choice) in the same way as the expressive response. Still another explanation for the observed patterns, entirely distinct from partisan cheerleading, is that respondents are uncertain about the correct answer and use party identification as a heuristic, i.e., they give the pro-party response (see Bullock & Lenz 2019, Clifford & Thorson 2017).

From an empirical standpoint, it is difficult to determine whether survey responses reflect genuine beliefs or partisan cheap talk. Scholars have devised highly creative ways to study this phenomenon (e.g., Berinsky 2018, Schaffner & Luks 2018), yet there is room for continued innovation. Here we suggest three possible directions for future work. First, if respondents perceive reality with accuracy but strategically disregard this information (Prior et al. 2015), response times should be longer for expressive than for nonexpressive responses. This is a testable implication that to our knowledge has not yet been investigated in the context of expressive responding (but see Schaffner & Roche 2017 for a related application); we sketch the logic of such a test at the end of this discussion. Second, computer mouse trajectories, which capture participants' arm movements, have been used to analyze the competing motivations underlying politically tinged conspiratorial thinking (Duran et al. 2017). The mouse-tracking paradigm is an unobtrusive method of studying how a person can be both attracted to and repulsed by political conspiracies. Researchers could plausibly adapt the method to the study of expressive responding, which is driven by a similar conflict between identity and accuracy goals.
Finally, Prasad et al.'s (2009) "challenge interviews," while more obtrusive than the preceding approaches, could be useful for exploring the depths to which people will go to defend closely held, but inaccurate, beliefs. Prasad et al. (2009) conducted in-depth interviews on the link between Saddam Hussein and the 9/11 attacks. The interviews focused on Republicans who stated that Hussein was involved in the attacks (at a time when there was unambiguous evidence to the contrary). The purpose of the challenge interviews was to present respondents with substantive challenges to their opinions and assess whether (and how) they resisted contradictory information. Consistent with Lodge & Taber's (2013) JQP model, the interviewees engaged in resistance behaviors (e.g., counterarguing attitude-incongruent information and selectively recruiting information that supported their views). Analogously, in-depth interviews could be applied to contexts where an author has unambiguously identified expressive responding. For example, Schaffner & Luks (2018) presented images of the crowds from the Trump and Obama inauguration ceremonies (labeled Photo A and Photo B) and asked survey respondents which photo had more people. Respondents who supported President Trump were more likely to claim (incorrectly) that there were more people in the photo depicting Trump's inauguration. Schaffner & Luks (2018, pp. 137-38) conclude that expressive responding "is almost certainly the culprit," given that the correct answer was "so clear and obvious." But how would these individuals respond to subsequent challenges to their stated position or to queries about how they arrived at their response choice? One prediction suggested by this literature is that someone who is responding expressively would be more likely than someone who is responding sincerely to convey partisan sentiment in subsequent questioning [e.g., Yair & Huber's (2018) notion of "blowing off steam"].
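Returning to the first suggestion above, the response-time test could be sketched as follows. The latencies below are simulated placeholders built on the assumption that suppressing a perceived truth takes extra time; nothing here comes from the studies cited.

```python
# Minimal sketch of the proposed response-time test: if expressive responding
# involves overriding a perceived truth, expressive answers should take longer.
# All latencies are simulated placeholders, not real measurements.
import random
from statistics import mean, variance

random.seed(1)
# Hypothetical latencies in seconds; "expressive" responses drawn slower on average.
expressive = [random.gauss(6.5, 1.5) for _ in range(200)]
sincere = [random.gauss(5.8, 1.5) for _ in range(200)]

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for a difference in means."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / (va + vb) ** 0.5
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(expressive, sincere)
print(f"Welch t = {t:.2f}, df = {df:.0f}")  # a large positive t favors the expressive account
```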
Overall, the literature on expressive responding raises some challenging issues for opinion scholars. The question of whether mistaken factual beliefs are genuine or whether some statements of belief are a product of partisan cheerleading is crucial to address because the answer has implications for the solutions that scholars devise. Ultimately, this topic may force scholars to be more explicit about their assumptions regarding the motives underlying political behavior (e.g., does voting induce an accuracy incentive, or is it more of an expressive act?).

CONCLUSION
Past research suggests that people are misinformed on a range of political issues and topics.
Furthermore, the motivational component of political misinformation implies that the prospects for correcting false beliefs are dim. That said, our understanding of the misinformation landscape is necessarily confined to the topics that researchers have decided to study (welfare, immigration, prominent pieces of legislation like the ACA, and so on). But as one review noted, "Numerous facts could be politicized. However, most are not…. It is important to recognize how the set of beliefs considered affects the conclusions we draw" (Flynn et al. 2017, p. 135, emphasis original; also see Leeper & Slothuus 2014, p. 143). The selection of issues not only affects observed levels of misinformation; it also has implications for the success of varying corrective efforts (i.e., it may be easier to correct misperceptions on issues that are not salient).
Significant advances have been made in each of the areas we considered in our review (causes, correction, and measurement), although it is our sense that the return on investment varies considerably across the three areas. With regard to the causes of political misinformation, scholars have elaborated the mechanisms first articulated by Kuklinski et al. (2000) in a productive manner, using a range of methods and data to flesh out the field's understanding of processing motives. As of this writing, there is near consensus that directional motives play a crucial role in the problem of misinformation. The desire to be consistent with prior attitudes or a political party can lead people to cling strongly to false beliefs. What remains unclear is the extent to which such consistency motives are linked to stable individual-level traits (such that some people are more chronically misinformed than others). Also unsettled is the degree to which particular political contexts elevate directional motives in wide swaths of the public.
However, as our review indicates, it is more difficult to characterize the accumulated wisdom regarding the correction of misinformation. This is a literature that spans many related disciplines, and perhaps as a result, research findings have proliferated rather than cumulated.
We have suggested a motives-based framework for theorizing about the conditions under which corrective messages are likely to be successful. But our discussion does not resolve the nagging question of what counts as a successful correction: a change in beliefs, attitudes, or some combination of the two? Myriad other design choices may be related to the success of corrective messages and have not been studied systematically (e.g., the placement of the corrective message relative to the erroneous information, and the use of a within- or between-subjects design).
Finally, when it comes to the measurement of misinformation, new studies are adding to the empirical record to account for confidence-in-knowledge (e.g., Flynn 2016, Graham 2018, Lee & Matsuo 2018), but more research is needed to distinguish between partisan cheerleading and alternative accounts (Bullock & Lenz 2019, p. 337). People's perceptions of the political world, accurate or not, often are a crucial input into their attitudes and behaviors. It is of fundamental importance, then, that researchers make continued progress in their exploration of the misinformation landscape.

DISCLOSURE STATEMENT
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.