We opened this book with a threat matrix: a description of the major sources of polluted information that have been blamed for threatening our ability to tell truth from fiction as a society. We noted that most public discussion of the threats focused on novel and technological causes, rather than long-term institutional and structural causes. As we come to the conclusion of our book, we can say with confidence that writing this chapter on possible solutions would have been easier had our analysis revealed a clear, technologically driven cause for our present epistemic crisis. It would be easier if we knew that the present crisis was caused by entrepreneurial teenagers running fake news sites on Facebook, Russian sockpuppet and bot accounts, targeted psychographics-informed advertising from Cambridge Analytica, or even technologically induced symmetrically partisan echo chambers.
But our studies have led us to the conclusion that these putative risks are, for the near future, not the major causes of disruption. We have argued throughout this book that none of these actors—Russians, fake news entrepreneurs, Cambridge Analytica, Facebook itself, or symmetric partisan echo chambers—were the major cause of the epistemic crisis experienced within the U.S. media ecosystem during the 2016 election or since. Instead of these technologically driven dynamics, which are novel but ultimately less important, we see longer-term dynamics of political economy: ideology and institutions interacting with technological adoption as the primary drivers of the present epistemic crisis. These dynamics, which play out across television, radio, and mainstream professional journalism at least as much as through the internet and social media, have been developing for 30 years and have resulted in a highly asymmetric media ecosystem that is the primary driver of disinformation and propaganda in the American public sphere. The right-wing media ecosystem in particular circulates an overwhelming amount of domestic disinformation and propaganda, and its practices create the greatest vulnerabilities to both foreign propaganda and nihilistic commercial exploitation by clickbait factories. Outside the right-wing media ecosystem, we observe efforts by partisans, Russians, and clickbait factories, but fact-checking norms and journalistic institutional commitments dampen the diffusion and amplification of disinformation. These dynamics are inverted in the insular right-wing network, and partisan identity-confirming assertions, however false, are accelerated and amplified.
In this chapter we take on a range of possible interventions that might strengthen media ecosystems and raise the level of public discourse and political reporting. While the options are many, there are no silver bullets. There are changes that might be led by media producers themselves on both sides of the political spectrum. There are a range of intended fixes underway by the large social media platforms and supporting mechanisms being proposed and developed by public interest organizations, and there are options for more aggressive government regulatory action. Because of the size of the problem, we are not optimistic that any of these changes will succeed on its own. But the lesson we take away from our work is that we as a society must lean more heavily toward fixing our currently broken media system. Unfortunately, most of the interventions we describe below have real costs in terms of uncomfortable acceptance of the partisan nature of the problem and of increased private and public control of content.
We divide this chapter into three parts. The first part is dedicated to the two major kinds of changes that may actually go to the root of the problem: political-institutional change on the right and a reorientation of how journalists pursue their professional commitments in a highly asymmetric media ecosystem. The former is the ultimate solution, we think, but is unlikely to occur unless the Republican Party suffers a series of political setbacks that will force such a fundamental reorientation. The latter is more feasible, and we think it can actually make a difference for much of the population. But it may have little or no effect on the roughly one-third of the population that actively and willingly inhabits the right-wing media ecosystem. The second part outlines the most widely discussed practical solutions to the more novel, technology-dependent explanations of crisis that have captured most of the public attention since 2016. These are primarily: increased control by intermediaries, particularly social media platforms and Google, of variously defined illegal or otherwise unacceptable content, and regulation of political advertising. The last part will offer a brief overview of various other approaches aimed to help reduce the supply of, and demand for, misinformation and bias-confirming propaganda.
Reconstructing Center-Right Media
There is nothing conservative about calling career law enforcement officials and the intelligence community the “deep state.” The fact that the targets of the attack, like Robert Mueller, Rod Rosenstein, or Andrew McCabe, were lifelong Republicans merely underscores that fact. There is nothing conservative about calling for a trade war. There is nothing conservative about breaking from long-held institutional norms for short-term political advantage. And there is nothing conservative about telling Americans to reject the consensus estimate of the CIA, the FBI, and the NSA that we were attacked by Russia and suggesting instead that these agencies are covering up for a DNC conspiracy. What has happened first and foremost to make all these things possible is that the Republican Party has been taken over by ever-more right-wing politicians. As an analysis published following Richard Lugar’s primary defeat noted, when Lugar first became a senator in 1977, he was to the right of more than half of Senate Republicans. Despite moderating only a bit over his 36 years in the Senate, by his retirement he had moved from being the twenty-third-most moderate Republican to being the fifth-most moderate, and even if he had not moderated his views, he would have been the twelfth-most moderate.1 When Marco Rubio was elected as a senator in 2010, he was the firebrand Tea Party candidate, elected alongside Tea Party darling Pat Toomey, who had ousted moderate Republican Arlen Specter. By 2016 Rubio was identified as the “moderate” candidate, even though he had a DW-NOMINATE score to the right of the median Republican senator in the 113th Congress, while the final two candidates in the primary were the third-most right-wing senator, Ted Cruz, and Donald Trump. As we write these lines, we have no idea whether the 2018 midterm elections, much less the 2020 election, will deliver losses or gains to the Republican Party. Perhaps the hard-right strategy will continue to pay off for Republicans in terms of short-term political gains. Perhaps a reversal will force a reorientation. The question is whether Republicans who would identify themselves with a Lugar, an Arlen Specter, or a John Kasich have a significant chance of recapturing their party without a significant reassertion of the role of center-right media.
One of our clearest and starkest findings is the near disappearance of center-right media. There is the Wall Street Journal, with its conservative editorial page but continued commitment to journalistic standards in its reporting; and to some extent The Hill plays a center-right role. Both sites appear in the center of the partisan landscape according to our data because readers on the right did not pay attention to these sites any more than readers on the left did. Fox News, as we showed in Chapter 2, has asserted its leadership role in the right-wing media ecosystem at the expense of becoming even more oriented inwardly, toward the insular right wing of the American public sphere. Several conservative commentators have emphasized the extent to which Fox News has skewed the Republican-oriented media system. David French and Matthew Sheffield wrote in the National Review during the run-up to the 2016 election that Fox News was “hurting the right” or “making the right intellectually deaf.”2 Libertarian Cato Institute Senior Fellow Julian Sanchez wrote early and presciently when he described the right as suffering what he called “epistemic closure”:
One of the more striking features of the contemporary conservative movement is the extent to which it has been moving toward epistemic closure. Reality is defined by a multimedia array of interconnected and cross-promoting conservative blogs, radio programs, magazines, and of course, Fox News. Whatever conflicts with that reality can be dismissed out of hand because it comes from the liberal media, and is therefore ipso facto not to be trusted. (How do you know they’re liberal? Well, they disagree with the conservative media!) This epistemic closure can be a source of solidarity and energy, but it also renders the conservative media ecosystem fragile.3
Our data could not be summed up more perfectly. The right wing of the American media ecosystem has been a breeding ground for conspiracy theory and disinformation, and a significant point of vulnerability in our capacity, as a country and a democracy, to resist disinformation and propaganda. We have documented this finding extensively throughout this book by describing both the architecture and flows of information and disinformation in the American media ecosystem in the past three years and by presenting distinct case studies of disinformation campaigns surrounding what we know to be Russian hacks of the DNC and Podesta emails and the investigation into Russian interference in the election—be it Seth Rich, the Forensicator, Uranium One, or the DNC and Podesta email-based stories like spirit cooking.
The question is, in part, whether there is enough will, and money, among centrist Republicans to address the fragility of the conservative media ecosystem by supporting publications that are centrally committed to reasserting a sense of reality. We are under no illusion that such a reorientation will be easy, and we are not even sure it is possible. Our observations regarding how Breitbart and Trump brought Fox News to heel toward the end of the primary season, and the propaganda feedback loop we outline in Chapter 3, suggest that this strategy will be exceedingly difficult to carry out, as does the extent to which long-standing conservative sites that opposed Trump were shunted aside during the primary. There may simply be no appetite for reporting oriented more toward objectivity and less toward partisanship. But just as Robert and Rebekah Mercer invested heavily in creating Breitbart, the Government Accountability Institute, and other elements of the far-right disinformation ecosystem, one might imagine that there are enough billionaires and millionaires who do not see themselves in the populist, anti-immigrant, anti-trade, and increasingly anti-rule-of-law Trump-Bannon image of the Republican Party. We have no basis in our own work to advise how such a Herculean feat may be achieved. Perhaps it could take the form of leadership or ownership change in Fox News, leveraging existing audience loyalty to gradually reintroduce a more reality-anchored but nonetheless conservative news outlet. Perhaps it would require launching a new media outlet. But without a significant outlet that is both committed to professional journalistic values in its news reporting and fact checking and trusted by conservatives when it reports what it reports, it is difficult to see how center-right, reality-based conservatives can reassert themselves within the Republican Party. And without such a reassertion, it is hard to see how the fragile right-wing media ecosystem does not continue to be a hotbed of bias-confirming disinformation and propaganda, foreign and domestic.
Professional Journalism in a Propaganda-Rich Ecosystem
We have dedicated most of this volume to mapping the media ecosystem, documenting its asymmetric structure, and understanding the role this asymmetry plays in the dissemination of disinformation. We nonetheless emphasized the critical role of mainstream media in this system. In Chapter 6, we showed how our work relates to the work of other scholars who looked at coverage during the election to underscore the extent to which mainstream media focused on horse race coverage, negative coverage, and scandals over issues, particularly when it came to Hillary Clinton. We showed how the New York Times allowed itself to be used in a disinformation campaign, laundering politically motivated opposition research by Peter Schweizer by bestowing upon it the imprimatur and legitimacy of its name with coverage that raised questions in blaring headlines and buried caveats deep in the small print of the story. We similarly showed how coverage by other major outlets like the Washington Post and the Associated Press in August of 2016 led with the scandal-implying headline and buried the admission that evidence of corruption was thin at best. But we also showed in Chapters 3, 5, 6, and 8 how the reality-check dynamic in play across “the rest” of the media ecosystem was able to check disinformation and error on both sides, as mainstream and newer online media continuously checked each other’s worst impulses and corrected error and overreaching.
Most Americans do not occupy the right-wing media ecosystem. Likely more than half of conservative and independent viewers, readers, and listeners are exposed to CNN as well as to network and local television and mainstream news sites. The crossover audience, much more than the audience already fully committed to the perspective of Sean Hannity or Rush Limbaugh, is the most important audience that can be influenced by mainstream media. When mainstream professional media sources insist on coverage that performs their own neutrality by giving equal weight to opposing views, even when one is false and the other is not, they fail. In a famous 2004 study, Maxwell Boykoff and Jules Boykoff showed that the major prestige papers—the New York Times, the Washington Post, the Wall Street Journal, and the Los Angeles Times—gave climate change “balanced” coverage, providing a platform not only to arguments that climate change was anthropogenic but also to arguments that it was not caused by human activity, even though the scientific consensus was by then well established.4 As more recent work by Derek Koehler showed, readers form erroneous assessments of the weight of the evidence even when they are explicitly told that one view they are reading reflects a near-consensus of experts while the other is held by a small minority.5 The balanced-reporting norm in that context significantly muddied the waters on the politics of climate change, which was exactly the purpose of the propagandists. But they could only achieve their goal at the population level with the implicit support of mainstream journalists who reported on the two sides as more-or-less equal voices in the controversy. Our observations regarding the prevalence and diffusion of disinformation in the right-wing media ecosystem suggest that mainstream professional journalism needs to treat many statements associated with current American politics as it eventually came to treat climate science and climate denial.
Practically, this means that professional journalism needs to recalibrate its commitment to objective reporting further toward transparent, accountable verifiability and away from demonstrative neutrality. As Thomas Patterson wrote in explaining the negative reporting in election coverage, consistently negative reporting allowed media outlets to demonstrate their independence and neutrality to their audiences. Given the highly asymmetric patterns of lying and disinformation coming from the two campaigns, media outlets avoided the appearance of bias by emphasizing negativity and criticism in their reporting of both sides. In practical terms, that created a broad public impression of equivalent unsuitability for office and a paucity of substantive coverage of positions.
Instead of engaging in this kind of public performance of neutrality, what we might call demonstrative neutrality, objectivity needs to be performed by emphasizing the transparency and accountability of journalists’ sources and practices, what we might call accountable verifiability. We already see it to a degree in the ways in which mainstream media organizations dealt with some visible errors in Chapter 6. Some of this is simply emphasis and extension of existing practices—providing public access to underlying documentary materials, for example. Some would require developing institutional mechanisms for independent verification of sources, on the model of scientific access to materials and underlying code for reproducibility. Whether this should be done in a network of media outlets, as it is in peer review in science; by designating a set of independent nonprofit fact-checking organizations that will operate as quality assurance for journalistic enterprises and be provided with more access to underlying materials than under current practice; or, at least initially, by creating a more prominent role for internal “inspectors general” inside the most visible media outlets remains to be seen. More controversially and harder to implement, these practices have to be applied to headlines and framing, where so much of the communicative value of an article is conveyed, just as they are to discrete factual claims. None of this will persuade people who are already inside the propaganda feedback loop, nor need it be designed to satisfy them. It is intended to allow journalistic enterprises to avoid errors and gain enough confidence internally, and communicate enough objective truth seeking externally, so that they can abandon demonstrative neutrality and its attendant false impressions.
This broader reorientation should not postpone several narrower and more immediate adjustments in journalistic practice. Recognizing the asymmetry we document here requires editors to treat tips or “exclusives,” as well as emails or other leaked or hacked documents, with greater care than they have in the past few years. The “Fool me once, shame on you . . .” adage suggests that after, for example, the New York Times’s experience with Peter Schweizer and the Uranium One story, mainstream professional journalists need to understand that they are subject to a persistent propaganda campaign trying to lure them into amplifying and accrediting propaganda. This happens of course as normal politics from both sides of the partisan system, but our work here shows that one side is armed with a vastly more powerful engine for generating and propagating propaganda.
It is certainly possible to resist these attacks without particularly supporting one side over the other. In November 2017, for example, the right-wing disinformation outfit Project Veritas tried to trip up the Washington Post, offering it a fake informant who claimed that Roy Moore had impregnated her when she was a teenager. The sting operation was intended to undermine the credibility of the Post’s reporting on Roy Moore’s alleged pursuit and harassment of teens when he was in his thirties.6 Rather than jumping at the opportunity to develop the Moore story, the Washington Post’s reporters followed the professional model—they checked out the source, assessed her credibility, and ultimately detected and outed the attempt at manipulation. Mainstream media editors and journalists must understand that they are under sustained attack, sometimes as premeditated and elaborate as this sting, usually more humdrum.
Russian and right-wing political actors have been particularly effective at using document dumps, particularly hacked emails, to lure journalists into over-reporting. The dynamic is clear. By offering real documents, activists and propagandists give journalists the illusion that they have both a trove of newsworthy documents—what could be more tempting than a tranche of secrets?—and complete control over the source materials and the ability to craft a narrative around them. After all, the journalist makes up her own mind about what matters and what does not. After a reporter spends hours or days poring over materials, as competitors are releasing juicy tidbits based on the same documents, the pressure to produce something spectacular becomes powerful. It is not front-page news to report: “folks, we looked at these emails for days and they’re pretty humdrum; not much to report.” And yet, now we know that these document dumps are intended precisely to elicit attention-grabbing headlines followed by professional caveats buried in paragraph 12, on the model of “while there is no direct evidence of wrongdoing in these emails, they nonetheless raise legitimate questions. . . .” The DNC and Podesta email hacks, as well as the Judicial Watch FOIA-based email dumps, were all designed precisely to elicit this kind of predictable response. And the editors, reporters, and propagandists all know that very few readers will get to paragraph 12. It is irresponsible for professional journalists and editors to continue this practice given this pattern. We must develop new standards for evidence of wrongdoing before prominent publication of implied allegations of corruption. Likewise, we must develop new modes of investigative reporting that focus more on the propagandists who create document dumps.
Similarly, researchers at the Data & Society Institute who have focused specifically on the nether regions of the internet, like Joan Donovan, have questioned whether journalists and editors should curtail coverage of fake news and extremist memes.7 Producers of this content thrive on such attention, and publication in major outlets only increases the diffusion of the falsehoods, whether debunked or not. What is perhaps gained with exposure and debunking may, according to these arguments, be more than outweighed by the encouragement the speakers receive from the attention and the exposure that the falsehoods received.8 Our findings about the insularity of the right-wing media ecosystem suggest that for a given rumor or conspiracy theory to circulate outside right-wing media, it has to be picked up by outlets outside that insular system. Moreover, even inside that ecosystem, a story must be picked up by influencers or major outlets to escape obscurity. Decisions not to report such stories unless and until they hit a sufficiently influential node could in fact dampen distribution to people who would only be exposed to them from mainstream media. Whether this dampening effect would be outweighed by having the rumors circulate unchallenged should be addressed as an open research question.
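One way to make that research question concrete is to treat it as a threshold-diffusion problem. The sketch below is purely illustrative: the follower graph, influence scores, and sharing probability are invented placeholders rather than parameters estimated from our data, and the function is our own toy model, not an established one.

```python
import random

# Illustrative toy model: a story spreads over a follower graph, but is
# amplified onward only by nodes whose influence clears a pickup
# threshold (i.e., "influencers or major outlets"). All parameters are
# hypothetical.

def simulate_spread(graph, influence, seeds, pickup_threshold,
                    p_share=0.3, rng_seed=0):
    """graph: node -> list of followers; influence: node -> score."""
    rng = random.Random(rng_seed)
    original_seeds = set(seeds)
    exposed, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        # Ordinary nodes can be exposed but do not amplify further.
        if influence[node] < pickup_threshold and node not in original_seeds:
            continue
        for follower in graph.get(node, []):
            if follower not in exposed and rng.random() < p_share:
                exposed.add(follower)
                frontier.append(follower)
    return len(exposed)
```

Comparing runs in which high-influence mainstream nodes decline to amplify (a high pickup_threshold) against runs in which they report and debunk (a low one) would give a first-order way to weigh the dampening effect against the cost of letting rumors circulate unchallenged.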
Despite our various cautions, our findings are good news for professional journalistic organizations. The past decade has seen repeated claims that the sky is falling; that there is no longer a market for general-interest professional journalism; that audience attention is diffused around the internet; that audiences only want entertaining, bias-confirming news; and so forth. We find instead that professional mainstream media continue to play an enormously important role for most Americans. Indeed, the information abundance of this era makes professional mainstream media particularly valuable to those living at least partly outside the right-wing bubble. Going forward, professional journalists must offer precisely what makes them special: credible reporting, in organizations committed to journalistic norms, but with a heavier emphasis on verifiable and accountable truth and credibility than on balance and neutrality. As long as the media ecosystem is highly asymmetric structurally and in its flow of propaganda, balance and neutrality amplify disinformation rather than combating it.
The Question of Platform Regulation
The primary focus of solutions-oriented conversations since 2016—in the United States, in Europe, and throughout the world—has been on changing how information is accessible, framed, shared, or remunerated on platforms, primarily on Facebook and Google. The big unanswered question is whether the time has come for new government regulation or whether the best of the imperfect options is to be patient and avoid invasive legal remedies while platforms muddle toward the right balance of content moderation tools and self-regulatory policies to satisfy their many constituencies, including investors, advertisers, and users.
There are a range of measures under consideration, some based on regulation, some on self-regulation or voluntary measures. They sometimes depend on algorithmic identification of false stories and sometimes on human detection and assessment; they usually focus on removal or treatment for falsehood or illegality, but sometimes for extreme views; sometimes such designations result in additional tagging and marking of the content, sometimes in removal or demotion; and sometimes they involve blocking advertising and payment to undermine the business model of the purveyors of bullshit.
The debate over misinformation and disinformation on online platforms has intersected with a growing concern over the degree of concentration that internet platforms enjoy in their respective markets. The core examples are Google and Facebook. Google dominates search and (through YouTube) video, while Facebook dominates social media, through the Facebook platform itself and through its ownership of Instagram and WhatsApp. But there is real tension between the goal of reducing concentration and increasing competition, on the one hand, and the goal of regulating a reasonably coherent public sphere, on the other hand. A Google search for “Gab” displays (at least in early 2018) a “people also searched for” box that includes “The Daily Stormer.” Gab is a social media platform that developed as an alternative for far-right and alt-right users who were banned or constrained by Twitter, Facebook, and even Reddit. A crackdown by the platforms in the presence of competition diverted their communications to a semi-segregated platform but did not remove them from the internet. Whether robust competition is helpful for combating misinformation depends on whether the critical goal is to eliminate the disinformation or merely to segregate it and reduce its diffusion pathways. But segregation only works in a relatively concentrated market, where banishment from a handful of major platforms contains content within smaller communities. Current regulatory and activist effort is focused on the major platforms precisely because they have so much power and changing their diffusion patterns has a large impact on overall diffusion patterns online.
From the perspective of disinformation and misinformation, the trouble with concentrated platforms is that if and when they fail, or become bad actors or beholden to bad actors, the negative effect is enormous. The Cambridge Analytica story, with all the caveats about how much of it was true and how much was hype, offers a perfect example. Even in the most Facebook-supportive version of the story, the company’s overwhelming influence and presence made it a single point of failure that enabled Cambridge Analytica, by abusing Facebook’s terms of service, to inappropriately collect and use the private data of 87 million Facebook users. The story is no better if, instead of fingering Cambridge Analytica as the bad actor, it is Facebook itself, using permission it obtained by forcing people to click “I agree” on inscrutable terms of service, that sells to a political campaign the means to mount an advertising campaign intended to microtarget black voters in Florida to suppress their votes. The acts of both Facebook and the campaign in Florida were legal. But the danger presented by a single company having such massive influence over large portions of the population is reason enough to focus either on ensuring greater competition that will diffuse that power or on tightly regulating how that company can use its market power. In this, Facebook represents a problem parallel to the challenge created by media consolidation after the 1996 Telecommunications Act in the United States, when companies like Clear Channel and Sinclair Broadcasting were able to expand their audience reach dramatically while combining it with aggressive distribution of right-wing propaganda on radio and television. In all these cases, solutions that tend to reinforce centralized control over the network of content outlets, rather than reduce concentration and with it the degree to which any single outlet provides a single point of failure, seem to exacerbate rather than solve the problem.
Our experience in the few areas where there has been normative consensus in favor of harnessing the power of platforms to regulate content means that we are not in uncharted territory. Child pornography and copyright, with all their fundamental differences, have been areas where platforms have been regulated, and have even adopted voluntary mechanisms beyond what law requires, to contain the spread of information about which we have made moral or legal decisions. The problem is that political speech is very different from child pornography, and the copyright wars have taught us well that platform control over speech can shade into censorship, both intentionally and unintentionally.
The Contentious Role of Government
The United States has generally taken a light-touch approach to internet regulation, particularly with respect to political speech. Many other countries, especially more authoritarian countries such as China, Iran, and Saudi Arabia, have forcefully policed online speech. The most aggressive effort in a liberal democracy to respond to disinformation and hate speech on social media by regulating social media platforms is the German NetzDG law, which became effective on January 1, 2018. The act applies to platforms with more than two million registered users in Germany, thereby preserving the possibility for smaller, more insulated platforms to exist without following these rules, as well as for new entrants to roll out and grow before they must incur the costs of compliance. The law requires these larger platforms to provide an easily usable procedure by which users can complain that certain content is “unlawful” under a defined set of provisions in the German criminal code. These provisions include not only hate speech against groups and Holocaust denial but also criminal prohibitions on insulting and defamatory publications.
On its face, this would seem to cover much of the immigration coverage we described in Chapter 4 and practically all of the personal attacks we described in Chapter 3 and Part Two. And while the German criminal code offers reduced penalties when the targets of defamation are political figures, it does not excuse the defamation. The NetzDG requires the companies it covers to delete content that is “manifestly unlawful” within 24 hours or, if its unlawfulness is not “manifest,” to take one of two actions. The company may decide on the content’s lawfulness within seven days, or it may refer the content to an industry-funded tribunal overseen by a public authority, which then has seven days to decide on the unlawfulness of the content. Violations can be sanctioned at various levels, all the way up to 50 million euros. Sharp criticism that the draft of the act was overly restrictive came from within Germany and abroad. In response to the critiques, several of the definitions were tightened, an alternative dispute resolution mechanism was added, and provisions were added to allow users to challenge blocks of allegedly factually false content. These changes did not satisfy opponents, and blistering criticism continued from a broad range of observers, including Human Rights Watch.
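The statutory timeline can be restated as simple decision logic. The sketch below is our own paraphrase of the procedure just described, with invented labels; it is an illustration, not legal advice or the statute's text.

```python
from datetime import timedelta

def netzdg_response(manifestly_unlawful: bool, refer_to_tribunal: bool):
    """Paraphrase of the NetzDG complaint procedure; labels are ours."""
    if manifestly_unlawful:
        # "Manifestly unlawful" content must be deleted within 24 hours.
        return ("delete", timedelta(hours=24))
    if refer_to_tribunal:
        # Hard cases may be referred to an industry-funded tribunal,
        # overseen by a public authority, which has seven days to decide.
        return ("refer_to_tribunal", timedelta(days=7))
    # Otherwise the platform itself decides lawfulness within seven days.
    return ("platform_decides", timedelta(days=7))
```

With fines of up to 50 million euros on one side of the ledger and a 24-hour clock on the other, the rational corporate default is removal, which is precisely the over-censorship dynamic the critics describe next.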
Critics argued that, faced with a very short time to make decisions, companies would err on the side of over-censoring rather than take the risk of being found in violation of a law that carries very high fines. Examples cropped up as soon as the law began to be enforced. Comedian Sophie Passmann poked fun at the far right’s claims that immigrants destroyed German culture with a tweet saying that, as long as the practice of airing “Dinner for One” on television on New Year’s Eve remained a part of German cultural tradition, immigrants were totally welcome to come and destroy it. Her tweet was removed because some users misinterpreted it as espousing the rhetoric she was mocking. Less silly, but more threatening to democratic contestation, at least from an American perspective, were the rapid removals of tweets by leaders of the far-right Alternative for Germany (AfD) party. After the Cologne police tweeted out a New Year’s message in Arabic, among several other foreign languages, Beatrix von Storch, deputy leader of the AfD, tweeted that the police were appeasing “barbaric, gang-raping Muslim hordes of men.” Twitter removed von Storch’s tweet. The company then also removed a tweet from AfD co-leader Alice Weidel arguing that “[o]ur authorities submit to imported, marauding, groping, beating, knife-stabbing migrant mobs.”9 In Germany, incitement to hatred is one of the criminal offenses to which the NetzDG applies, and it is difficult to argue that these words on their face pass that test—and yet the result is state-induced private censorship of the political viewpoints of the leaders of a party that won about 13 percent of the vote. The problem is that there is often a wide gap, filled by case law in common law countries and by commentary and experience in civil law countries, between what a law says on its face, what will be prosecuted, and what may lead to a conviction. Think of how unimaginably broad the prohibition on insulting someone might appear without working knowledge of German law and its application by the judicial system. In the absence of public process or appeal, companies are expected to err on the side of caution, censoring arguably criminal speech to avoid the fine for inaction. Another major concern is that the NetzDG law legitimizes efforts in Singapore, the Philippines, Russia, Venezuela, and Kenya to adopt copycat models that incorporate more speech-repressive criminal provisions.
In grappling with the trade-off between free speech and extreme, counterdemocratic speech, Germany’s experience with the rise of Nazism has led it to adopt a more aggressive model than the U.S. First Amendment permits. Britain’s and France’s forays into platform regulation focused more on countering terrorist recruitment and propaganda, but hate speech is generally regulated more tightly in Western Europe than it is in the United States. And a 2017 European Commission report underscored that the model by which platforms regulate various forms of illegal speech is hardly new. From materials depicting the sexual abuse of children to materials that violate someone’s intellectual property rights, democracies generally find some materials that they are willing to prohibit and then impose on platforms some obligation to take those materials down.
The basic concern of critics of the German NetzDG law—that fear of liability would lead firms to exercise private censorship well beyond what the legislature could constitutionally impose directly—was at the foundation of American legal protection for online platforms. In the United States, the Communications Decency Act of 1996 (CDA) section 230 and the Digital Millennium Copyright Act of 1998 (DMCA) are the foundational laws governing liability of platforms for content posted by their users. CDA section 230 gave platform providers broad immunity from liability for pretty much anything their users published and has been widely considered the foundation of the freewheeling speech culture on the net. Courts were happy to let platforms avoid liability for quite substantial interpersonal abuse on their platforms,10 much less hate speech, which is mostly protected under the First Amendment. In the absence of legal constraint in the United States, pressure from users has occasionally moved some of the major platforms to regulate offensive speech, resulting in “community guidelines” approaches. These became the de facto content control rules of the platforms in most areas, enforced through end-user agreements as interpreted by the platform companies, and punctuated by moments of public outrage that nudge these practices one way or another.
The U.S. Supreme Court has long held that, unlike with other forms of disfavored speech, when copyright is used as the reason to prohibit publication of words or images, there is usually no violation of the First Amendment. We will not here rehearse the decades of academic commentary that has explained why that interpretation of the relationship between copyright and the First Amendment is incoherent. It is the law of the land, and for our purposes it explains how, faced with relative constitutional license, Congress developed in the DMCA a much more restrictive “notice-and-takedown” regime for speech challenged as violating copyright. In doing so, it identified a structure that, while not as restrictive as the NetzDG, nonetheless has similar structural attributes. Here, platform providers can escape copyright liability for the acts of their users only if they maintain a system for copyright challenges. That system allows copyright owners to challenge the use of specific pieces of content on grounds of copyright infringement and to have that content taken down unless the person who posted it submits a counternotification. This approach would encounter strong constitutional headwinds in any context but copyright. In the United States it is easier to remove YouTube videos showing students performing a show tune at a school play than YouTube videos showing neo-Nazis singing the “Horst Wessel Song.”
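The notice-and-takedown sequence reduces to a small state machine. In the sketch below, the state and event names are our own shorthand; the final litigation step reflects the statute's counternotification procedure generally rather than anything detailed above.

```python
# Schematic sketch of the DMCA notice-and-takedown flow; names are ours.
TRANSITIONS = {
    # The platform removes on an owner's notice to preserve its safe harbor.
    ("posted", "infringement_notice"): "taken_down",
    # The poster may contest the claim with a counternotification.
    ("taken_down", "counter_notification"): "reinstated",
    # The owner's remaining recourse after reinstatement is court.
    ("reinstated", "owner_files_suit"): "in_litigation",
}

def next_state(state: str, event: str) -> str:
    # Unrecognized (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)
```

The structural point the comparison makes is visible in the sketch: the default path runs toward removal, and the burden of reversing it falls on the speaker.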
The approach taken by the CDA section 230 has been remarkably successful in supporting the development of online publishing platforms. This same architecture is ill-suited for fine-grained moderation of human interaction. Whether by government mandate or voluntary company action, applying online content moderation at massive scale is a particularly gnarly challenge. There are a range of options for dealing with problematic speech. It can be removed at the source, blocked from circulation by intermediaries, removed or demoted by a search engine, or flagged for audiences as problematic. The hard part is that someone must first determine standards and guidelines for what constitutes problematic speech, and second, design a process to weed through billions of online posts, determine which posts are problematic, and then find a way to review these decisions to be able to correct errors. In the United States the default answer to the first challenge is to leave it to companies to decide how to deal with transgressing actors and content. Germany has chosen a different path that will almost certainly result in the removal of considerably more content. Companies are meanwhile expanding the infrastructure for monitoring and sorting content at a massive scale, combining algorithms and human review. It is difficult to be optimistic about how this can be applied at such scale. The promise of machine learning and artificial intelligence to accurately separate the good from the bad has not been realized, and the interim solution is hiring tens of thousands of workers to review content with company instructions in hand. These processes are aided by flagging mechanisms that allow users to flag problematic content. Predictably, these mechanisms have their limits as they are easily gamed to attack rivals and political opponents. Our experiences, in the United States and elsewhere, with both over- and underinclusiveness in the determinations of existing systems for managing copyright or nudity, leave us very skeptical that these processes will work well, much less seamlessly.
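The hybrid pipeline this paragraph describes (machine scoring, large human review teams, and user flagging) can be summarized in a triage sketch like the following. The thresholds, signals, and routing rules are placeholders of our own invention, not any platform's actual system.

```python
# Illustrative triage sketch; thresholds and signals are invented.

def triage(model_score: float, flag_count: int,
           remove_above: float = 0.95, review_above: float = 0.6) -> str:
    """Route one post to auto-removal, human review, or no action.

    model_score: a hypothetical classifier's probability of a policy
    violation. flag_count: user flags, which raise review priority but,
    to blunt gaming by rivals and political opponents, never trigger
    removal on their own.
    """
    if model_score >= remove_above:
        return "remove"            # only high-confidence machine calls
    if model_score >= review_above or flag_count >= 5:
        return "human_review"      # the uncertain middle band goes to people
    return "leave_up"
```

Even in this toy form, the two failure modes discussed above are built in: set the thresholds low and the system over-removes; set them high and it under-enforces, with billions of posts amplifying either error.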
Self-Regulation and Its Discontents
For lack of better alternatives, the hardest questions of content moderation in the United States are being left in the hands of companies: whether the Daily Stormer should be treated like any other web publisher, whether conspiracy theorists should have equal standing on YouTube and in the Facebook newsfeed, and whether anti-vaxxers should be allowed to freely distribute information that will result in sickness and death.
Over the course of 2017 and early 2018, both Google and Facebook announced various measures intended to combat “fake news,” misleading information, and illegal political advertising. These measures combine various elements of the systems already in place and will almost certainly suffer from similar difficulties and imperfections, while delivering somewhat stronger enforcement. It is important not to get caught up in the supposed newness of the problem and to recognize that we are dealing with known problems of bad information—spam, phishing, sexual exploitation of children, hate speech, and so on.
The commercial bullshit or clickbait sites are the most familiar challenge. They are simply a new breed of spammers or search engine optimizers. It is feasible to identify them through some combination of machine learning based on traffic patterns, fact checking, and human judgment, likely outsourced to independent fact-checking organizations. Excluding clickbait factories is normatively the least problematic, since they are not genuine participants in the polity. That is why most of the announcements and efforts have been directed at this class of actors. Similarly, dealing with Russian or other foreign propaganda is normatively unproblematic, though technically more challenging because of the sophistication of the attacks.
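As a purely hypothetical illustration of that combination of signals, a first-pass filter might score sites on traffic patterns and fact-checker verdicts before routing candidates to human judgment. The features and weights below are invented for exposition; a deployed system would learn them from labeled data rather than hand-set them.

```python
# Invented features and weights, for illustration only.

def clickbait_factory_score(domain_age_days: int,
                            share_to_read_ratio: float,
                            debunked_story_count: int,
                            has_identifiable_staff: bool) -> float:
    score = 0.0
    if domain_age_days < 180:
        score += 0.3               # very young domain, a classic spam signal
    if share_to_read_ratio > 2.0:
        score += 0.3               # shared far more than read: headline bait
    score += min(debunked_story_count, 5) * 0.08   # fact-checker verdicts
    if not has_identifiable_staff:
        score += 0.1               # no masthead or editorial accountability
    return min(score, 1.0)         # high scorers go to human review, not auto-ban
```

Note that every signal here targets commercial behavior rather than viewpoint, which is why this class of actor is the normatively easy case.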
The much more vexing problem is intentional political disinformation, including hyperpartisan hate speech that is harmful but not false. Perhaps in Germany it is imaginable to remove posts by the leader of a political party (AfD) supported by over 10 percent of the electorate. In the United States private platforms are allowed under current interpretations of the First Amendment to censor political speech for whatever reason they choose. But widespread political censorship by the major private platforms would certainly generate howls of protest from large swaths of the political spectrum. Moreover, the most effective propaganda generally builds on a core set of true facts and then constructs a narrative that is materially misleading. Efforts we ourselves, and many of our colleagues who are studying the landscape of disinformation, propaganda, and bullshit, have made to create well-defined research instruments that reliably permit trained coders to identify this kind of manipulative propaganda leave us skeptical that a reliable machine-learning algorithm will emerge to solve these questions in the near future.
There have been some successful efforts to pressure advertisers to remove ads from certain programs. Alex Jones of Infowars offers the clearest example of a publisher who has repeatedly published defamatory falsehoods, like Pizzagate, that have been debunked by many fact checkers. One company, AdRoll, suspended its advertising relationship with Infowars, but Google has not turned off advertising for Infowars videos. Some brands, however, have instructed Google not to run ads for their products alongside Infowars videos. In Jones’s case, this may not be too great a loss, since his business model is largely based on directly selling products he markets on his shows (over 20 percent of Infowars outgoing traffic is to the site’s store, where Jones sells his own branded supplements for “male vitality,” brain enhancement, and such). These campaigns may reduce the economic incentives for some disinformation sites, and they are cathartic for the activists pushing the companies to sever ties. But they do not appear to present a systematic, scalable, and long-term response to the broader phenomenon of disinformation in the media ecosystem. Many of the most widely visited, shared, or linked sites in the right wing of the American media ecosystem engage in disinformation regularly or episodically. Asking the platforms to solve the problem by blocking a broad swath of the right-wing media ecosystem would be palpably antidemocratic.
In 2013 Pew reported that about one-quarter of American adults watch only Fox News among the cable news channels.11 A later Pew study found that the number of Americans who preferred getting news on radio, rather than television, was about one-quarter the number who preferred television news.12 Yet a third Pew study found that, after Fox News, the most trusted news sources for consistently conservative respondents were the radio shows of Sean Hannity, Rush Limbaugh, and Glenn Beck.13 These all suggest that somewhere between 25 and 30 percent of Americans willingly and intentionally pay attention to media outlets that consistently tell that audience what it wants to hear, and what that audience wants to hear is often untrue. For the rest of the population to ask a small oligopoly of platforms to prevent those 30 percent from getting the content they want is, to say the least, problematic. Platforms can certainly help with the commercial pollution, and to some extent they can help with foreign propaganda. But we suggest that asking platforms to solve the fundamental political and institutional breakdown represented by the asymmetric polarization of the American polity is neither feasible nor normatively attractive.
Political Advertising: Disclosure and Accountability
A much more tractable problem, for both law and self-regulation, is online political advertising. The problem presents three distinct aspects. First, online ads have to date been exempt from the disclosure requirements that normally apply to television, radio, and print advertising. This is a holdover from an earlier, “hands off the internet” laissez-faire attitude that can no longer be justified given the size of the companies involved and the magnitude of the role they play in political advertising. Second, online advertising may be substantially more effective and narrowly targeted in ways that subvert judgment and are more amenable to experimentally validated behavioral manipulation. Cambridge Analytica’s claims were likely overstated, and it is likely that the 2016 cycle did not see these developing techniques of psychographically informed behavioral marketing deployed with any measurable success. But there is little doubt that the confluence of techniques for analysis of very large datasets, A/B testing in product marketing, and the rapidly developing behavioral sciences will continue to improve techniques of emotional and psychological manipulation. The literature we reviewed in Chapter 9 makes clear that the claims of the effectiveness of narrowly targeted advertising are not yet scientifically proven. But the continued efforts of industry suggest that platforms will continue to increase their ability to individualize and will seek to increase their effectiveness at manipulating the preferences of their targets. The third problem is that, just as we saw with Russian sockpuppets and bots, behavioral marketing techniques often do not take the form of explicit advertising. Rather, they take the form of paid influencers who deceptively manipulate genuine users. Any regime that focuses its definitions purely on explicit paid advertising, and does not address to some extent the problem of masked influence, will push propaganda and political marketing from the regulated and explicit to the unregulated underground.
The Honest Ads Act
The Honest Ads Act, introduced by Senators Amy Klobuchar, Mark Warner, and John McCain, was the first significant legislative effort to address the new challenges of network propaganda. The bill sought to do three things. First, it separated paid internet communications from unpaid communications, incorporating paid communications into the normal regulatory model for political communication generally and leaving volunteer or unpaid communications alone. Second, it required disclaimers on online advertising, so that people exposed to political advertising can see that it is political advertising rather than part of the organic flow of communications, and can see who is paying for it. Third, and perhaps most important, it required the creation of a fine-grained public database of online political issue advertising, reaching well beyond electioneering.
Paid Internet as Political Communication. First, the bill included “paid Internet, or paid digital communications” in the normal framework of contributions and expenditures, addressing what had become an anachronistic exclusion given the dramatic increase in the significance of the internet and social media as core modes of political communication. The bill also expanded electioneering—express advocacy for or against a candidate by anyone just before an election—to include placement or promotion on an online platform for a fee. The words “paid” and “for a fee” are clearly intended to exclude genuine grassroots campaigns. By using them, the bill recognized the importance of preserving the democratizing aspect of the internet—its capacity to empower decentralized citizens to self-organize rather than depend on established parties and wealthy donors. This latter provision is also the only one that might be interpreted to apply not only to payments made to the platforms themselves, as with advertising, but also to behavioral social-media marketing firms that specialize in simulating social attention to a topic or concern by deploying paid human confederates or automated and semi-automated accounts—botnets and sockpuppets.
Disclaimers on Facebook and Google Ads. Second, the bill required online advertising to include the kinds of disclaimers television viewers have come to expect: “paid for by” or “I am so-and-so and I approve this message.” These provisions address the anomaly by which inconclusive Federal Election Commission (FEC) advisory opinions have enabled Google and Facebook to market to political advertisers not only reach and focus but also the ability to remain masked. In 2010 Google persuaded a divided FEC that its ads were too short to include a full disclaimer, and the commissioners split between those who wanted simply to exclude Google’s ads from the disclaimer requirements altogether and those who wanted to condition the exclusion on the ad carrying a link to the advertiser’s site, where the disclaimer would appear prominently.[14] In 2011 Facebook tried to piggyback on Google’s effort, arguing not only that its own advertising was too brief for the disclaimer to appear on its face, but also that, because much of the advertising directed users not to a campaign site but to news stories supportive of a campaign, the FEC should adopt the more complete exclusion supported by some of its members in the 2010 opinion.[15] In other words, because Facebook’s ads were designed to be short to fit users’ usage patterns, and because the ads often sent users to media sites rather than to a campaign site where a disclaimer could be displayed, imposing any disclaimer requirement on Facebook advertising, even one satisfied merely by disclosure on the target site, was “impractical,” a recognized exception to the disclaimer requirement in the act.
The Honest Ads Act would explicitly reject the possibility that advertising on social media and search is covered by this “impractical” exception. The idea that the biggest and most sophisticated technology companies in the world can build driverless cars and optimize messaging and interfaces by running thousands of experiments a day, but cannot figure out how to include an economical indication that a communication is a political ad, together with a pop-up or other mechanism that lets users who want to know who is behind the ad find out, is laughable. The bill simply states a clear minimal requirement: users have to know the name of the sponsor and have to be given the means to get all the legally required information about the sponsor without having to wade through any other information.
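To make the requirement concrete, here is a minimal sketch of the check a platform might run before serving a political ad. The field names are our own hypothetical illustration; the bill prescribes the requirement, not a data format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OnlineAd:
    creative_text: str
    sponsor_name: Optional[str]    # surfaced to the user as "Paid for by ..."
    disclosure_url: Optional[str]  # pop-up or link to the full legally
                                   # required sponsor information

def meets_minimal_disclaimer(ad: OnlineAd) -> bool:
    """The bill's minimal requirement as we read it: the user can see who
    the sponsor is and has a direct means to reach the full disclosure."""
    return bool(ad.sponsor_name) and bool(ad.disclosure_url)
```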
The necessity of this kind of provision is clear. We assess the credibility of any statement in the context of what we take the speaker’s agenda to be. That is why we require political advertising to disclose its sponsor in the first place. If the Clinton campaign had targeted evangelical voters with communications emphasizing her opponent’s comments on the Access Hollywood video, those voters would have treated the communications with a grain of salt even if their contents were true. The same would be true if the Trump campaign had targeted African American voters with narrowly tailored ads quoting Hillary Clinton’s use of the term “superpredator” in the context of a 1996 criminal law reform debate.[16] There is nothing wrong with trying to persuade your opponent’s base that their candidate is unworthy of their support. But doing so behind a mask undermines those voters’ ability to judge your statements fairly, including by properly discounting for the reliability or honest intentions of the speaker.
In 2018 the Federal Election Commission invited comments on its own version of the disclosure requirements. These are tailored to the FEC’s mandate over electioneering and largely concern the nature of the disclaimer requirement. One option would simply treat online video like television, online audio like radio, and online text like print. The other tries to offer online platforms more flexibility to tailor disclosure to the technology. Presumably, the more flexibility the platforms have in designing the disclosures, the easier it will be for them to design and test forms of disclosure that comply with the letter of the law while offering their clients the ability to minimize the number of recipients actually exposed to the disclosure. Our own sense is that if the internet companies deserved protection from overly onerous requirements in 2010, by 2018 the risk had inverted. Starting with a very demanding and strict requirement and then loosening the constraints through case-by-case advisory opinions seems the more prudent course today.
Reconstructing Center-Right Media
There is nothing conservative about calling career law enforcement officials and the intelligence community the “deep state.” The fact that the targets of the attack, like Robert Mueller, Rod Rosenstein, and Andrew McCabe, were lifelong Republicans merely underscores that fact. There is nothing conservative about calling for a trade war. There is nothing conservative about breaking from long-held institutional norms for short-term political advantage. And there is nothing conservative about telling Americans to reject the consensus assessment of the CIA, the FBI, and the NSA that we were attacked by Russia, and suggesting instead that these agencies are covering up for a DNC conspiracy. What has happened first and foremost to make all these things possible is that the Republican Party has been taken over by ever more right-wing politicians. As an analysis published after Richard Lugar’s primary defeat noted, when Lugar first became a senator in 1977, he was to the right of more than half of Senate Republicans. Despite moderating only a bit over his 36 years in the Senate, by his retirement he had moved from being the twenty-third-most moderate Republican to the fifth-most moderate, and even had he not moderated his views at all, he would have been the twelfth-most moderate.[1] When Marco Rubio was elected to the Senate in 2010, he was the firebrand Tea Party candidate, elected alongside Tea Party darling Pat Toomey, who had ousted moderate Republican Arlen Specter. By 2016 Rubio was identified as the “moderate” candidate, even though his DW-NOMINATE score was to the right of the median Republican senator in the 113th Congress, while his final two rivals in the primary were the third-most right-wing senator, Ted Cruz, and Donald Trump. As we write these lines, we have no idea whether the 2018 midterm elections, much less the 2020 election, will deliver losses or gains to the Republican Party. Perhaps the hard-right strategy will continue to pay off in short-term political gains. Perhaps a reversal will force a reorientation. The question is whether Republicans who identify with a Richard Lugar, an Arlen Specter, or a John Kasich have a real chance of recapturing their party without a significant reassertion of the role of center-right media.
One of our clearest and starkest findings is the near disappearance of center-right media. There is the Wall Street Journal, with its conservative editorial page but continued commitment to journalistic standards in its reporting, and to some extent The Hill plays a center-right role. Both sites appear in the center of the partisan landscape in our data because readers on the right paid no more attention to them than readers on the left did. Fox News, as we showed in Chapter 2, has asserted its leadership role in the right-wing media ecosystem at the cost of becoming ever more inwardly oriented, toward the insular right wing of the American public sphere. Several conservative commentators have emphasized the extent to which Fox News has skewed the Republican-oriented media system. Writing in the National Review during the run-up to the 2016 election, David French and Matthew Sheffield argued, respectively, that Fox News was “hurting the right” and “making the right intellectually deaf.”[2] Libertarian Cato Institute senior fellow Julian Sanchez wrote early and presciently when he described the right as suffering from what he called “epistemic closure”:
One of the more striking features of the contemporary conservative movement is the extent to which it has been moving toward epistemic closure. Reality is defined by a multimedia array of interconnected and cross-promoting conservative blogs, radio programs, magazines, and, of course, Fox News. Whatever conflicts with that reality can be dismissed out of hand because it comes from the liberal media, and is therefore ipso facto not to be trusted. (How do you know they’re liberal? Well, they disagree with the conservative media!) This epistemic closure can be a source of solidarity and energy, but it also renders the conservative media ecosystem fragile.[3]
Our data could not be summed up more perfectly. The right wing of the American media ecosystem has been a breeding ground for conspiracy theory and disinformation, and a significant point of vulnerability in our capacity, as a country and a democracy, to resist disinformation and propaganda. We have documented this finding extensively throughout this book, both by describing the architecture and flows of information and disinformation in the American media ecosystem over the past three years and by presenting distinct case studies of disinformation campaigns surrounding what we know to be Russian hacks of the DNC and Podesta emails and the investigation into Russian interference in the election—be it Seth Rich, the Forensicator, Uranium One, or the DNC and Podesta email-based stories like “spirit cooking.”
The question is, in part, whether there is enough will, and money, among centrist Republicans to address the fragility of the conservative media ecosystem by supporting publications that are centrally committed to reasserting a sense of reality. We are under no illusion that such a reorientation will be easy, and we are not even sure it is possible. Our observations about how Breitbart and Trump brought Fox News to heel toward the end of the primary season, and the propaganda feedback loop we outline in Chapter 3, suggest that this strategy will be exceedingly difficult to carry out, as does the extent to which long-standing conservative sites that opposed Trump were shunted aside during the primary. There may simply be no appetite for reporting oriented more toward objectivity and less toward partisanship. But just as Robert and Rebekah Mercer invested heavily in creating Breitbart, the Government Accountability Institute, and other elements of the far-right disinformation ecosystem, one might imagine that there are enough billionaires and millionaires who do not see themselves in the populist, anti-immigrant, anti-trade, and increasingly anti-rule-of-law Trump-Bannon image of the Republican Party. We have no basis in our own work to advise how such a Herculean feat might be achieved. Perhaps it could take the form of a leadership or ownership change at Fox News, leveraging existing audience loyalty to gradually reintroduce a more reality-anchored but nonetheless conservative news outlet. Perhaps it would require launching a new media outlet. But without a significant outlet that is both committed to professional journalistic values in its news reporting and fact-checking and trusted by conservatives when it reports what it reports, it is difficult to see how center-right, reality-based conservatives can reassert themselves within the Republican Party. And without such a reassertion, it is hard to see how the right-wing media ecosystem does not continue to be a hotbed of bias-confirming disinformation and propaganda, foreign and domestic.
A Public Machine-Readable Open Database of Political Issue Advertising
The major innovation of the Honest Ads Act is to leverage the technological capabilities of online advertising to create a timely, public record of online advertising, available “as soon as possible” and open to public inspection in machine-readable form. This is perhaps the most important of the bill’s provisions because, executed faithfully, it would allow public watchdog organizations to offer accountability for lies and manipulation in almost real time. Moreover, this is the only provision of the bill that applies to issue campaigns as well as electoral campaigns, and so it is the only one that gives the American public some visibility into campaign dynamics on any “national legislative issue of public importance.”
The bill requires the very biggest online platforms (those with over 50 million unique monthly U.S. visitors) to place in an open, publicly accessible database every ad bought by anyone who spends more than $500 a year on political advertising. The data would include a copy of the ad, the audience targeted, the number of views, the times of first and last display, and the name and contact information of the purchaser. The platforms already collect all of this data as a function of their basic service to advertisers and their ability to price their ads and bill their clients. The additional requirement of formatting the data in an open, publicly known format and placing it in a public database is trivial by comparison with the investments these companies have made in developing their advertising base and their capacity to deliver viewers to advertisers. Having such a database would allow campaigns to be each other’s watchdogs—keeping each other somewhat more honest and constrained—and, perhaps more important, would allow users anywhere on the net, from professional journalists and nonprofits to concerned citizens with a knack for data, to see what the campaigns and others are doing, and to report promptly on these practices, offering us, as a society, at least a measure of transparency about how our elections are conducted. Such a public database would also allow the many and diverse organizations with significant expertise in machine learning and pattern recognition to deploy their considerable capabilities to identify manipulative campaigns by foreign governments and to help Americans understand who, more generally, is trying to manipulate public opinion and how.
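To illustrate what “machine-readable” could mean in practice, here is a sketch of a single database record built around the data elements the bill enumerates. The field names and serialization are our own hypothetical illustration, not the bill’s text.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
from typing import Dict

@dataclass
class AdArchiveRecord:
    # Data elements the bill enumerates; names are our own illustration.
    ad_copy: str             # a copy of the advertisement itself
    audience_targeted: Dict  # targeting criteria as specified by the purchaser
    views: int               # number of views the ad generated
    first_displayed: datetime
    last_displayed: datetime
    purchaser_name: str
    purchaser_contact: str

def to_public_record(record: AdArchiveRecord) -> Dict:
    """Serialize one record into an open, machine-readable form."""
    d = asdict(record)
    d["first_displayed"] = record.first_displayed.isoformat()
    d["last_displayed"] = record.last_displayed.isoformat()
    return d
```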
The database is particularly critical because Facebook and Google will continuously improve their ability to deliver advertisements finely tuned to very narrowly defined populations. In the television or newspaper era, if a campaign wanted to appeal to neo-Nazis, it could do so only in the public eye, suffering whatever consequences that association entailed with other populations. That constraint on how narrow, incendiary, or outright false a campaign run by candidates and their supporters can be is disappearing in an era when Facebook can already identify and target advertising to populations in the low thousands—down to the level of American followers of a German far-right ultranationalist party.[17] Hypertargeted marketing of this sort frees a campaign from being publicly associated with particularly controversial messages while still allowing it to use them on the very narrow populations to which they appeal. A database that is publicly accessible and allows many parties to review and identify particularly false, abusive, or underhanded microtargeted campaigns would impose at least some pressure on campaigns not to issue messages that they cannot defend to the bulk of their likely voters.
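A watchdog with access to such a database could flag hypertargeted buys for human review in a few lines. The sketch below assumes records shaped like the AdArchiveRecord above and a hypothetical estimated-audience-size figure published with each record.

```python
def flag_microtargeted(records, max_audience: int = 5000):
    """Surface ads aimed at very narrow audiences for human review.
    Assumes each record's targeting criteria carry a hypothetical
    'estimated_audience_size' entry reported by the platform."""
    return [
        r for r in records
        if r.audience_targeted.get("estimated_audience_size", float("inf"))
        <= max_audience
    ]
```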
In 2017 Google released a plan to implement some of these affordances voluntarily. It announced that it would publish a transparency report about who is buying election-related ads on Google platforms and how much money is being spent; create a publicly accessible database of election ads purchased on AdWords and YouTube, with information about who bought each ad; and implement in-ad disclosures, identifying the names of advertisers running election-related campaigns on Google Search, YouTube, and the Google Display Network via Google’s “Why This Ad” icon. These are all desirable features, and they may offer some insight into how such elements operate when adopted voluntarily. But the electoral system and its integrity are fundamentally a matter of public concern and should be regulated uniformly, across all companies and platforms, and subject to appropriate administrative and judicial procedures. Falling back on private action may be the only first step available, given a dysfunctional legislative process, but it cannot be the primary, permanent solution for a foundational piece of the democratic process.
What About Botnets, Sockpuppets, and Paid Social Promoters?
Addressing paid ads will not obviously address “astroturf” social influence—bots, sockpuppets, and paid influencers. Particularly in Chapter 8, in our discussion of Russian efforts, we saw that coordinated campaigns aim to simulate public engagement and attention, and to draw real citizens to follow the astroturfing networks in their agenda setting, their framing of issues, and the levels of credibility they assign to various narratives. This practice is used by marketing firms as well as foreign governments.
If regulation stopped at “paid advertising” as traditionally defined, the solution would be significant but only partial, even with regard to paid advertising. Historically, when a broadcast station or editor was a bottleneck that had to be paid before anything could be published on the platform, defining “paid” as “paid to the publisher” made sense. Here, however, a major pathway to communicating on a platform whose human users receive the service for free is to hire outside marketing firms that specialize in using that free access to provide a paid service to the person seeking political influence. Search engine optimizers who try to manipulate Google results to come out on top, or behavioral marketing firms that use coordinated accounts, automated or not, to simulate social engagement, are firms that offer paid services to engage in political communication. The difficulty posed by such campaigns is that they will not appear on the platforms as paid advertising, because those who run them simulate authentic accounts on the networks. The marketers—whether a Russian information operations center or a behavioral marketing firm—engage with the network through multiple accounts, as though they were authentic users, and control and operate those accounts from outside the platform.
The Honest Ads Act defines a “qualified Internet or digital communication” as “any communication which is placed or promoted for a fee on an online platform.” This definition is certainly broad enough to encompass the services of third-party providers whose product is to use the free affordances of the online network to produce the effect of a political communication, and to do so for a fee. As a practical matter, such a definition would reduce the effectiveness of viral political marketing that uses botnets or sockpuppets to simulate authentic grassroots engagement, because each bot, sockpuppet, or paid influencer would have to carry a disclaimer stating that it is paid and by whom. Although the whole purpose of such coordinated campaigns is to create the false impression that the views expressed arise authentically in the target Facebook or Twitter community, the burden on expression is no greater than the burden on any political advertiser who would have preferred to communicate without being clearly labeled as political advertising. The party seeking to communicate is still permitted to communicate, to exactly the same people (unless the false accounts violate the platform’s terms of service, though it is not a legitimate complaint for marketers to argue that a campaign disclosure rule makes it harder for them to violate the terms of service of the platforms they use). The disclaimer requirement would merely remove the misleading representation that the communication comes from a person not paid to express such views.
While the general language defining a qualified internet communication is broad enough to include paid bot and sockpuppet campaigns, and the disclaimer provisions seem to apply, the present text of the bill appears to exclude such campaigns from the provision that requires online platforms to maintain a public database of advertisements. The definition of “qualified political advertisement,” to which the database requirement applies, includes “any advertisement (including search engine marketing, display advertisements, video advertisements, native advertisements, and sponsorships).” It would be preferable to include “coordinated social network campaigns” explicitly in that list of examples. It is possible, and certainly appropriate, for courts to read “native advertisements” to include a sockpuppet or bot pushing a headline or meme that supports a candidate or campaign; but there is a risk that courts will not. Furthermore, the provision requires platforms to keep a record only of “any request to purchase on such online platform a qualified political advertisement,” and advertisers are required only to provide the information the online platform needs to comply with its own obligations. It would be preferable to clarify that advertisers owe an independent duty to disclose to the platform all the information needed to include paid coordinated campaigns in the database, even where the request for the advertisement and the payment are not made to the platform itself.
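The clarified scope we are arguing for can be stated almost mechanically: the archival duty should turn on whether anyone paid to place or promote the communication, not on whether the platform itself was paid. A minimal sketch, with hypothetical field names of our own:

```python
def must_be_archived(communication: dict) -> bool:
    """Sketch of the clarified scope argued for above (field names are
    hypothetical): a communication belongs in the public database if it
    was placed or promoted for a fee, whether the fee went to the
    platform (an ordinary ad buy) or to a third party running a
    coordinated sockpuppet or botnet campaign."""
    paid_to_platform = communication.get("platform_fee_paid", False)
    paid_third_party = communication.get("third_party_fee_paid", False)
    return bool(paid_to_platform or paid_third_party)
```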
As with the more general disclaimer requirements applied to explicit advertising, clarifying that the disclosure and disclaimer requirements apply to coordinated campaigns will not address every instance of media manipulation. A covert foreign information operation will not comply with new laws intended to exclude it any more than it complies with present laws designed for the same purpose. But just as the disclosure requirements and database for advertisements would limit the effectiveness of efforts by would-be propagandists (campaigns, activists, or foreign governments) to leverage the best data and marketing techniques that Google and Facebook have to offer, so too would an interpretation of the bill that extends it to commercial marketing firms providing synthetic social-behavioral marketing through paid sockpuppets, botnets, or human influencers. This will not address all propaganda, but it would certainly bring some of the most effective manipulation tactics into the sunlight.
A Public Health Approach to the Media Ecosystem
The public database called for in the Honest Ads Act presents a model for a broader public health approach to our media ecosystem. At the moment, Twitter offers expensive but broadly open access to its data, which explains why so much of the best research on fake news, bots, and the like is conducted on Twitter. Facebook, by contrast, offers outside researchers very limited access to its data. Google occupies a position somewhere in the middle, with reasonable access to YouTube usage patterns, for example, but less visibility into other aspects of search and advertising. Each of these companies has legitimate concerns about protecting user privacy. Each has legitimate proprietary interests. And as each weighs its own commercial interests, those interests will often align imperfectly, if at all, with the public interest. To understand how our information environment is changing, we need a mechanism for offering bona fide independent investigators access to data that would allow us, as a society, to understand how changes in the way we communicate—whether driven by technological change, regulatory intervention, or business decisions—affect the levels of truth and falsehood in our media ecosystem and the degrees of segmentation and polarization.
Our communications privacy as individuals and citizens is an important concern, but no more so than our privacy interest in health data. Yet we have developed systems that allow bona fide health researchers—under appropriate legal constraints, contractual limitations, and technical protections—to access the health data of millions of residents and conduct detailed analyses of patterns of disease, treatment, and outcomes. We can no more trust Facebook to be the sole source of information about the effects of its platform on our media ecosystem than we could trust a pharmaceutical company to be the sole source of research on the health outcomes of its drugs, or an oil company to be the sole source of measurements of particulate emissions or levels of CO2 in the atmosphere. We need a publicly regulated system that governs not only the companies but also the researchers who access the data, so that they do not play the role of brokers to companies like Cambridge Analytica.
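By analogy to health-data access regimes, the gate would have to operate on both sides of the transaction. A minimal sketch, assuming hypothetical accreditation and query machinery that would have to be built:

```python
MIN_AGGREGATE_SIZE = 100  # hypothetical privacy floor for any released result

def run_accredited_query(researcher_id: str, query: str,
                         accredited: set, execute) -> dict:
    """Gate a data request twice: the researcher must be accredited under
    the regulatory regime, and the result must be an aggregate large
    enough not to expose individuals. `accredited` and `execute` stand
    in for institutions and infrastructure that do not yet exist."""
    if researcher_id not in accredited:
        raise PermissionError("researcher is not accredited")
    result = execute(query)  # runs inside the platform's own environment
    if result.get("population_size", 0) < MIN_AGGREGATE_SIZE:
        raise ValueError("aggregate too small to release safely")
    return result
```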
Professional Journalism in a Propaganda-Rich Ecosystem
We have dedicated most of this volume to mapping the media ecosystem, documenting its asymmetric structure, and understanding the role this asymmetry plays in the dissemination of disinformation. We have nonetheless emphasized the critical role of mainstream media in this system. In Chapter 6, we showed how our work relates to that of other scholars who examined election coverage, underscoring the extent to which mainstream media favored horse-race coverage, negative coverage, and scandal over substance, particularly when it came to Hillary Clinton. We showed how the New York Times allowed itself to be used in a disinformation campaign, laundering politically motivated opposition research by Peter Schweizer and bestowing on it the imprimatur and legitimacy of its name, with coverage that raised questions in blaring headlines and buried caveats deep in the small print of the story. We similarly showed how coverage by other major outlets, like the Washington Post and the Associated Press in August of 2016, led with scandal-implying headlines and buried the admission that evidence of corruption was thin at best. But we also showed, in Chapters 3, 5, 6, and 8, how the reality-check dynamic at play across “the rest” of the media ecosystem was able to check disinformation and error on both sides, as mainstream and newer online media continuously checked each other’s worst impulses and corrected error and overreach.
Most Americans do not occupy the right-wing media ecosystem. Likely more than half of conservative and independent viewers, readers, and listeners are exposed to CNN as well as to network and local television and mainstream news sites. This crossover audience, much more than the audience already fully committed to the perspective of Sean Hannity or Rush Limbaugh, is the most important audience mainstream media can influence. When mainstream professional media insist on coverage that performs their own neutrality by giving equal weight to opposing views, even when one is false and the other is not, they fail that audience. In a famous 2004 study, Maxwell Boykoff and Jules Boykoff showed that climate change coverage in the major prestige papers—the New York Times, the Washington Post, the Wall Street Journal, and the Los Angeles Times—was “balanced,” providing a platform not only for arguments that climate change was anthropogenic but also for claims that it was not caused by human activity, even though the scientific consensus was by then well established.[4] As more recent work by Derek Koehler showed, readers form erroneous assessments of the weight of the evidence even when they are explicitly told that one view reflects a near-consensus of experts while the other is held by a small minority.[5] The balanced-reporting norm in that context significantly muddied the waters on the politics of climate change, which was exactly the propagandists’ purpose. But they could achieve their goal at the population level only with the implicit support of mainstream journalists who reported the two sides as more-or-less equal voices in the controversy. Our observations about the prevalence and diffusion of disinformation in the right-wing media ecosystem suggest that mainstream professional journalism needs to treat many statements in current American politics as it eventually came to treat climate science and climate denial.
Practically, this means that professional journalism needs to recalibrate its commitment to objective reporting, moving further toward transparent, accountable verifiability and away from demonstrative neutrality. As Thomas Patterson wrote in explaining the negativity of election coverage, consistently negative reporting allowed media outlets to demonstrate their independence and neutrality to their audiences. Given the highly asymmetric patterns of lying and disinformation coming from the two campaigns, outlets avoided the appearance of bias by emphasizing negativity and criticism in their reporting on both sides. In practical terms, that created a broad public impression of equivalent unsuitability for office and left a paucity of substantive coverage of positions.
Instead of engaging in this kind of public performance of neutrality, what we might call demonstrative neutrality, objectivity needs to be performed by emphasizing the transparency and accountability of journalists’ sources and practices, what we might call accountable verifiability. We already saw this to a degree in the ways mainstream media organizations dealt with some visible errors in Chapter 6. Some of this is simply the emphasis and extension of existing practices—providing public access to underlying documentary materials, for example. Some would require developing institutional mechanisms for independent verification of sources, on the model of scientific access to materials and underlying code for reproducibility. It remains to be seen whether this should be done in a network of media outlets, as in scientific peer review; by designating a set of independent nonprofit fact-checking organizations that would operate as quality assurance for journalistic enterprises and be given more access to underlying materials than under current practice; or, at least initially, by creating a more prominent role for internal “inspectors general” inside the most visible media. More controversially, and harder to implement, these practices would have to be applied as rigorously to headlines and framing, where so much of the communicative value of an article is conveyed, as to discrete factual claims. None of this will persuade people who are already inside the propaganda feedback loop, nor need it be designed to satisfy them. It is intended to allow journalistic enterprises to avoid errors, to gain enough confidence internally, and to communicate enough objective truth-seeking externally, that they can abandon demonstrative neutrality and its attendant false impressions.
This broader reorientation should not postpone several narrower and more immediate adjustments in journalistic practice. Recognizing the asymmetry we document here requires editors to treat tips and “exclusives,” as well as emails and other leaked or hacked documents, with greater care than they have in the past few years. The “fool me once, shame on you” adage suggests that after, for example, the New York Times’s experience with Peter Schweizer and the Uranium One story, mainstream professional journalists need to understand that they are the targets of a persistent propaganda campaign seeking to lure them into amplifying and accrediting propaganda. This happens, of course, as normal politics from both sides of the partisan system, but our work here shows that one side is armed with a vastly more powerful engine for generating and propagating propaganda.
It is certainly possible to resist these attacks without favoring one side over the other. In November 2017, for example, the right-wing disinformation outfit Project Veritas tried to trip up the Washington Post by offering it a fake informant who claimed that Roy Moore had impregnated her when she was a teenager. The sting was intended to undermine the credibility of the Post’s reporting on Moore’s alleged pursuit and harassment of teenagers when he was in his thirties.[6] Rather than jumping at the opportunity to develop the Moore story, the Post’s reporters followed the professional model—they checked out the source, assessed her credibility, and ultimately detected and outed the attempted manipulation. Mainstream media editors and journalists must understand that they are under sustained attack, sometimes as premeditated and elaborate as this sting, usually more humdrum.
Russian and right-wing political actors have been particularly effective at using document dumps, particularly hacked emails, to lure journalists into over-reporting. The dynamic is clear. By offering real documents, activists and propagandists give journalists the illusion that they have both a trove of newsworthy documents—what could be more tempting than a tranche of secrets?—and complete control over the source materials and the ability to craft a narrative around them. After all, the journalist makes up her own mind about what matters and what does not. After a reporter spends hours or days poring over materials while competitors release juicy tidbits from the same documents, the pressure to produce something spectacular becomes powerful. It is not front-page news to report: “Folks, we looked at these emails for days and they’re pretty humdrum; not much to report.” And yet we now know that these document dumps are intended precisely to elicit attention-grabbing headlines followed by professional caveats buried in paragraph 12, on the model of “while there is no direct evidence of wrongdoing in these emails, they nonetheless raise legitimate questions. . . .” The DNC and Podesta email hacks, as well as the Judicial Watch FOIA-based email dumps, were all designed to elicit precisely this kind of predictable response. And the editors, reporters, and propagandists all know that very few readers will get to paragraph 12. Given this pattern, it is irresponsible for professional journalists and editors to continue the practice. We must develop new standards of evidence of wrongdoing before prominently publishing implied allegations of corruption. Likewise, we must develop new modes of investigative reporting that focus more on the propagandists who create the document dumps.
Similarly, researchers at the Data & Society Research Institute who have focused specifically on the nether regions of the internet, like Joan Donovan, have questioned whether journalists and editors should curtail coverage of fake news and extremist memes.[7] Producers of this content thrive on such attention, and publication in major outlets only increases the diffusion of the falsehoods, debunked or not. Whatever is gained by exposure and debunking may, on these arguments, be more than outweighed by the encouragement the speakers receive from the attention and exposure their falsehoods gain.[8] Our findings about the insularity of the right-wing media ecosystem suggest that for a given rumor or conspiracy theory to circulate outside right-wing media, it has to be picked up by outlets outside that insular system. Moreover, even inside that ecosystem, a story must be picked up by influencers or major outlets to escape obscurity. Decisions not to report such stories unless and until they hit a sufficiently influential node could in fact dampen distribution to people who would otherwise be exposed to them only through mainstream media. Whether this dampening effect would outweigh the cost of letting the rumors circulate unchallenged should be addressed as an open research question.
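One crude way to pose that research question formally is as a toy diffusion model, with assumptions entirely our own: compare the total audience a rumor reaches when mainstream outlets republish it freely against the reach when they withhold coverage until the story has already been carried by a sufficiently influential node.

```python
def total_reach(pickups, influence_threshold=None):
    """Toy model, not an empirical claim. `pickups` is a time-ordered list
    of (audience_size, is_mainstream) pairs for outlets carrying a rumor.
    If `influence_threshold` is set, mainstream outlets withhold coverage
    until some non-mainstream outlet with at least that audience has
    already carried the story."""
    reached = 0
    influential_hit = influence_threshold is None
    for audience, is_mainstream in pickups:
        if is_mainstream and not influential_hit:
            continue  # the proposed restraint rule: do not amplify yet
        reached += audience
        if (not is_mainstream and influence_threshold is not None
                and audience >= influence_threshold):
            influential_hit = True
    return reached

# Example: the restraint rule cuts mainstream amplification of a fringe rumor.
cascade = [(2_000, False), (15_000, False), (900_000, True)]
assert total_reach(cascade) == 917_000
assert total_reach(cascade, influence_threshold=50_000) == 17_000
```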
Despite our various cautions, our findings are good news for professional journalistic organizations. The past decade has seen repeated claims that the sky is falling: that there is no longer a market for general-interest professional journalism, that audience attention is diffused across the internet, that audiences only want entertaining, bias-confirming news, and so forth. We find instead that professional mainstream media continue to play an enormously important role for most Americans. Indeed, the information abundance of this era makes professional mainstream media particularly valuable to those living at least partly outside the right-wing bubble. Going forward, professional journalists must offer precisely what makes them special: credible reporting, in organizations committed to journalistic norms, but with a heavier emphasis on verifiable, accountable truth than on balance and neutrality. As long as the media ecosystem is highly asymmetric, structurally and in its flow of propaganda, balance and neutrality amplify disinformation rather than combating it.
What About Defamation Law? Intentional or Reckless Falsehoods
As we brought this book to a close, the family of Seth Rich was suing Fox News for its false and disturbing story, and Alex Jones had been forced to retract his Pizzagate story, likely under threat of a lawsuit. Nothing under present or proposed election law would touch this kind of intentional lying. As Peter Thiel showed when he funded “Hulk Hogan’s” lawsuit that bankrupted Gawker, a motivated actor with enough money and patience can find a case with which to shut down a publication, even under the very permissive American defamation law framework. Should we have a system that would allow Jeb Bush to sue Alex Jones for portraying him as having “close Nazi ties”? Under normal circumstances such a path should raise concerns for anyone properly committed to robust political speech. Certainly, defamation law has been used in many countries as a way of silencing the government’s critics, and the strict limits under the New York Times v. Sullivan line of cases make this path appropriately difficult. But the level of bile and sheer disinformation that characterized the 2016 election suggests that raising the cost of reckless or intentional defamatory falsehood as a business model may be a reasonable path to moderating the most extreme instances of falsehood. The persistence of the tabloid model in the United Kingdom, despite that country’s far more claimant-friendly defamation law, suggests that even this approach would be of only moderate use. Whether it is worth the candle depends on one’s empirical answer to the question of how much defamation comes from fly-by-night fake news outlets, which would be effectively judgment proof, and how much from a core set of commercial sites that have made it their business model to sell false information and peddle conspiracy theories.
The Question of Platform Regulation
The primary focus of solutions-oriented conversations since 2016—in the United States, in Europe, and throughout the world—has been on changing how information is accessible, framed, shared, or remunerated on platforms, primarily on Facebook and Google. The big unanswered question is whether the time has come for new government regulation or whether the best of the imperfect options is to be patient and avoid invasive legal remedies while platforms muddle toward the right balance of content moderation tools and self-regulatory policies to satisfy their many constituencies, including investors, advertisers, and users.
There is a range of measures under consideration, some based on regulation, others on self-regulation or voluntary action. They depend sometimes on algorithmic identification of false stories and sometimes on human detection and assessment; they usually target falsehood or illegality, but sometimes extreme views; the resulting designations sometimes lead to tagging and marking of the content and sometimes to removal or demotion; and some measures aim instead at blocking advertising and payments, to undermine the business model of the purveyors of bullshit.
The debate over misinformation and disinformation on online platforms has intersected with a growing concern over the degree of concentration that internet platforms enjoy in their respective markets. The core examples are Google and Facebook. Google dominates search and (through YouTube) video, while Facebook dominates social media, through the Facebook platform itself and through its ownership of Instagram and WhatsApp. But there is real tension between the goal of reducing concentration and increasing competition, on the one hand, and the goal of regulating a reasonably coherent public sphere, on the other. Gab, for example, is a social media platform that developed as an alternative for far-right and alt-right users who were banned or constrained by Twitter, Facebook, and even Reddit; a Google search for “Gab” displayed, at least in early 2018, a “people also searched for” box that included “The Daily Stormer.” A crackdown by the major platforms, in the presence of competition, diverted these users’ communications to a semi-segregated platform but did not remove them from the internet. Whether robust competition helps combat misinformation therefore depends on whether the critical goal is to eliminate the disinformation or merely to segregate it and reduce its diffusion pathways. And segregation only works in a relatively concentrated market, where banishment from a handful of major platforms contains content within smaller communities. Current regulatory and activist effort is focused on the major platforms precisely because they have so much power that changing their diffusion patterns has a large impact on overall diffusion patterns online.
From the perspective of disinformation and misinformation, the trouble with concentrated platforms is that if and when they fail, or become bad actors or beholden to bad actors, the negative effect is enormous. The Cambridge Analytica story, with all the caveats about how much of it was true and how much was hype, offers a perfect example. Even in the most Facebook-supportive version of the story, the company’s overwhelming influence and presence made it a single point of failure that enabled Cambridge Analytica, by abusing Facebook’s terms of service, to inappropriately collect and use the private data of 87 million Facebook users. The story is no better if, instead of fingering Cambridge Analytica as the bad actor, it is Facebook itself, using permission it obtained by forcing people to click “I agree” on inscrutable terms of service, that sells a political campaign the means to microtarget black voters in Florida in order to suppress their votes. The acts of both Facebook and the campaign in Florida were legal. But the danger presented by a single company having such massive influence over large portions of the population is reason enough to focus either on ensuring greater competition that will diffuse that power or on tightly regulating how that company can use its market power. In this, Facebook presents a problem parallel to the one created by media consolidation after the 1996 Telecommunications Act in the United States, when companies like Clear Channel and Sinclair Broadcasting were able to expand their audience reach dramatically while combining it with aggressive distribution of right-wing propaganda on radio and television. In all these cases, solutions that tend to reinforce centralized control over the network of content outlets, rather than reduce concentration and with it the degree to which any single outlet provides a single point of failure, seem to exacerbate rather than solve the problem.
Our experience in the few areas where there has been normative consensus in favor of harnessing the power of platforms to regulate content shows that we are not in uncharted territory. Child pornography and copyright, with all their fundamental differences, are both areas where platforms have been regulated, and have even adopted voluntary mechanisms beyond what law requires, to contain the spread of material about which we have made moral or legal decisions. The problem is that political speech is very different from child pornography, and the copyright wars have taught us well that platform control over speech can shade into censorship, both intentionally and unintentionally.
Institutionalizing Fact Checking
An approach that has received a good bit of public attention, and some platform integration, focuses on efforts to institutionalize fact checking and generate ground-truth labels and markings. First among these are the existing fact-checking sites and the efforts by platforms to use their collective judgment to remove, demote, or label stories. These organizations (PolitiFact, Snopes, FactCheck.org, and the Washington Post’s Fact Checker) are all efforts to institutionalize and professionalize the process of checking how true or false the facts in a given story are. Of these organizations only one, PolitiFact, systematically reports both true and false judgments for a given media source or personality, so as to give some broad sense of the overall veracity of an outlet or speaker. All of these organizations limit themselves to “fact” checking, so they are not useful for identifying sites that are extremist or hyperpartisan, as opposed to simply false. And they all suffer from a failure that most newer projects will also have to contend with: they are treated by the media outlets and users of the right-wing media ecosystem as systematically biased and, as our work and that of other researchers finds, are generally not visited, shared, linked to, or believed by users on the right.
Despite being ignored by users and outlets on the right, fact-checking sites serve an important role for the majority of people outside the right. For centrists and people of mixed views, who are not entirely in thrall to the right-wing network, they offer an anchor through which to assess what they read and see. For people on the left, they offer an anchor in reality, a way of persuading themselves that they are not simply the mirror image of the right-wing propaganda network. Nonetheless, the empirical evidence on the effects of fact checking even for this population is mixed. Some studies suggest that correction and fact checking merely reinforce recall by repeating the story, so that over time all that is left is a sense that “I’ve heard this before.” Others suggest that correction fails, when it does, only for highly salient questions that conflict with a person’s prior beliefs.18 Furthermore, fact checking takes time and, at least for short-term belief formation, is likely too slow to influence immediate perceptions. Still, our case studies show that some conspiracy theories, like Uranium One or Seth Rich, remain prevalent for days or weeks. In these cases, it seems plausible that an independent source, trusted at least by people outside the echo chamber from which the false stories emerge, could play a useful role.
Regardless of their impact on media consumers, fact-checking organizations play a valuable role for academics seeking to study the diffusion of falsehoods. Determining “ground truth” is an extremely expensive and difficult process, fraught with judgment calls. Most scientific research groups are not set up to produce such ground-truth judgments about politically salient news, so treating some minimal number of decisions by these independent sites as “ground truth” allows analyses to proceed. A problem with all of the fact-checking sites is that they can check only a fraction of the overall universe of statements, true and false, and they systematically spend their time only on claims that are at least arguably false. Reading their statistics naively therefore overstates the amount of falsehood: the pool of stories they assess includes only stories that are both (a) suspect and (b) sufficiently visible to draw the attention of one or usually more fact checkers, so the share of falsehood among checked stories far exceeds the share among all stories published. While these organizations play an important role, their statistics overestimate the prevalence of falsehoods in the media ecosystem. Several new projects, such as CrossCheck or the Credibility Coalition, are trying to redress some of these problems, particularly by developing a richer ontology of offensive and misleading forms. Another important reform would be for these organizations to invest in creating a baseline ratio of truth to falsehood for media outlets and politically prominent institutions by dedicating some of their resources to assessing a properly randomized sample of stories and statements from these outlets and speakers.
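The selection effect is easy to see in simulation. The sketch below uses invented numbers purely for illustration (a 4 percent true rate of falsehood, with false stories far more likely to look suspect); it is not an estimate of any real outlet, and the variable names are our own:

```python
import random

random.seed(0)

# Hypothetical numbers, purely for illustration: an ecosystem publishes
# 10,000 stories, 4 percent of which are false.
stories = []
for _ in range(10_000):
    is_false = random.random() < 0.04
    # False stories are far more likely to look suspect and draw
    # fact-checkers' attention than true ones.
    is_suspect = random.random() < (0.9 if is_false else 0.05)
    stories.append((is_false, is_suspect))

# Fact-checkers assess only the suspect, visible stories.
checked = [(f, s) for f, s in stories if s]
naive_rate = sum(f for f, _ in checked) / len(checked)

# A randomized baseline audit samples stories without regard to suspicion.
audit = random.sample(stories, 500)
audit_rate = sum(f for f, _ in audit) / len(audit)

true_rate = sum(f for f, _ in stories) / len(stories)
print(f"true prevalence of falsehood:    {true_rate:.1%}")
print(f"rate among fact-checked stories: {naive_rate:.1%}")
print(f"rate in randomized audit:        {audit_rate:.1%}")
```

Because fact checkers see only the suspect stories, the falsehood rate in their sample runs roughly ten times the true prevalence under these assumptions, while the randomized audit of the kind proposed above recovers the baseline.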
It is demonstrably true that institutional fact checking has not prevented the continued worsening of the epistemic crisis we describe in this book. But more and better fact checking can at least help us better understand the nature and the scope of the problem.
Media Literacy Education
One class of proposed interventions is media literacy education. This is a vitally important and active area of research, experimentation, and applied work. Survey after survey has demonstrated that many people are shockingly poor at evaluating the accuracy of news reporting. A natural response is to improve education and explicitly train consumers to be more discerning about what they read and believe. Some programs are being rolled out specifically to address the “fake news” problem by teaching strategies for ferreting out disinformation and propaganda in online news sources. These are laudable projects and may help to chip away at the problem. But media literacy education is not a panacea and will not by itself disarm the incredibly resilient psychological and social-identity-based factors that so often lead us astray. As foundations and government entities invest in developing these efforts, we suggest that they at least acknowledge how little evidence there is that media literacy training will relieve the kinds of pressures we identified in this volume. There appears to be little evidence that improvement in the ability to answer test or classroom media literacy questions actually translates into critical viewing and listening when students consume media in the real world.19 A more recent, trenchant critique of media literacy comes from danah boyd, whose years of research on youth and media have led her to hypothesize that media literacy efforts have trained media consumers to be distrustful of all media and, perversely, less discerning about what is credible and what is not.20 Given the considerable faith and resources being invested in media literacy training, the apparent dearth of evidence, and the possibility of negative effects, we suggest that such programs be well instrumented to assess their effects, positive and negative.
The Contentious Role of Government
The United States has generally taken a light-touch approach to internet regulation, particularly with respect to political speech. Many other countries, especially more authoritarian ones such as China, Iran, and Saudi Arabia, have forcefully policed online speech. The most aggressive effort in a liberal democracy to respond to disinformation and hate speech on social media by regulating the platforms is Germany’s Network Enforcement Act (NetzDG), which became effective on January 1, 2018. The act applies to platforms with more than two million registered users in Germany, thereby preserving the possibility for smaller, more insulated platforms to exist without following these rules, and for new entrants to roll out and grow before they must incur the costs of compliance. The law requires these larger platforms to provide an easily usable procedure by which users can complain that certain content is “unlawful” under a defined set of provisions in the German criminal code. These provisions include not only hate speech against groups and Holocaust denial but also criminal prohibitions on insulting and defamatory publications.
On its face, this would seem to cover much of the immigration coverage we described in Chapter 4 and practically all of the personal attacks we described in Chapter 3 and Part Two. And while the German criminal code provides reduced penalties when the targets of defamation are political figures, it does not excuse the defamation. The NetzDG requires companies it covers to delete content that is “manifestly unlawful” within 24 hours or, if its unlawfulness is not “manifest,” to take one of two actions: the company may decide on the content’s lawfulness itself within seven days, or it may refer the content to an industry-funded tribunal overseen by a public authority, which then has seven days to decide. Violations can be sanctioned at various levels, up to 50 million euros. Sharp criticism that the draft of the act was overly restrictive came from within Germany and abroad. In response to the critiques, several of the definitions were tightened, an alternative dispute resolution mechanism was added, and provisions were added to allow users to challenge blocks of allegedly factually false content. These changes did not satisfy opponents, and blistering criticism continued from a broad range of observers, including Human Rights Watch.
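The statute’s timeline, as we have just summarized it, reduces to a simple decision rule. The sketch below encodes that rule; the type names and structure are ours, not the law’s:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Finding(Enum):
    MANIFESTLY_UNLAWFUL = auto()
    POSSIBLY_UNLAWFUL = auto()
    LAWFUL = auto()

@dataclass
class Complaint:
    content_id: str
    finding: Finding
    referred_to_tribunal: bool = False  # the industry-funded body overseen by a public authority

def removal_deadline_hours(c: Complaint) -> Optional[int]:
    """Deadline implied by the NetzDG timeline summarized above,
    or None if no removal is required."""
    if c.finding is Finding.MANIFESTLY_UNLAWFUL:
        return 24        # delete within 24 hours
    if c.finding is Finding.POSSIBLY_UNLAWFUL:
        return 7 * 24    # the platform (or the tribunal) has seven days
    return None          # lawful content stays up

# Systemic non-compliance can draw fines of up to 50 million euros,
# the incentive that critics say pushes platforms toward over-removal.
print(removal_deadline_hours(Complaint("c1", Finding.MANIFESTLY_UNLAWFUL)))  # 24
```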
Critics argued that, faced with very short decision windows, companies would err on the side of over-censoring rather than risk being found in violation of a law carrying very high fines. Examples cropped up as soon as the law began to be enforced. Comedian Sophie Passmann poked fun at the far right’s claim that immigrants were destroying German culture with a tweet saying that, as long as the practice of airing “Dinner for One” on television on New Year’s Eve remained a part of German cultural tradition, immigrants were totally welcome to come and destroy it. Her tweet was removed because some users misinterpreted it as espousing the rhetoric she was mocking. Less silly, but more threatening to democratic contestation, at least from an American perspective, were the rapid removals of tweets by leaders of the far-right Alternative for Germany (AfD) party. After the Cologne police tweeted out a New Year’s message in Arabic, among several other foreign languages, Beatrix von Storch, deputy leader of the AfD, tweeted that the police were appeasing “barbaric, gang-raping Muslim hordes of men.” Twitter removed von Storch’s tweet. The company then also removed a tweet from AfD co-leader Alice Weidel asserting that “[o]ur authorities submit to imported, marauding, groping, beating, knife-stabbing migrant mobs.”9 In Germany, incitement to hatred is one of the criminal offenses to which the NetzDG applies, and it is difficult to argue that the words on their face fail that test—and yet the result is state-induced private censorship of the political viewpoints of the leaders of a party that received about 13 percent of the vote. The problem is that there is often a wide gap, filled by case law in common law countries and by commentary and experience in civil law countries, between what a law says on its face, what will be prosecuted, and what may lead to a conviction. Think how unmanageably broad a prohibition on insult might seem to anyone without working knowledge of German law and its application by the courts. Absent public process or appeal, companies can be expected to err on the side of caution, censoring arguably criminal speech rather than risk the fine for inaction. Another major concern is that the NetzDG legitimizes efforts in Singapore, the Philippines, Russia, Venezuela, and Kenya to adopt copycat models that incorporate more speech-repressive criminal provisions.
In grappling with the trade-off between free speech and extreme, counterdemocratic speech, Germany’s experience with the rise of Nazism has led it to adopt a more aggressive model than the U.S. First Amendment permits. Britain’s and France’s forays into platform regulation focused more on countering terrorist recruitment and propaganda, but hate speech is generally regulated more tightly in Western Europe than it is in the United States. And a 2017 European Commission report underscored that the model by which platforms regulate various forms of illegal speech is hardly new. From materials depicting the sexual abuse of children to materials that violate someone’s intellectual property rights, democracies generally find some materials that they are willing to prohibit and then impose on platforms some obligation to take those materials down.
The basic concern of critics of the German NetzDG law—that fear of liability would lead firms to exercise private censorship well beyond what the legislature could constitutionally impose directly—was at the foundation of American legal protection for online platforms. In the United States, section 230 of the Communications Decency Act of 1996 (CDA) and the Digital Millennium Copyright Act of 1998 (DMCA) are the foundational laws governing platform liability for content posted by users. CDA section 230 gave platform providers broad immunity from liability for pretty much anything their users published and has been widely considered the foundation of the freewheeling speech culture on the net. Courts were happy to let platforms avoid liability even for quite substantial interpersonal abuse on their services,10 to say nothing of hate speech, which is mostly protected under the First Amendment. In the absence of legal constraint in the United States, pressure from users has occasionally moved some of the major platforms to regulate offensive speech, resulting in “community guidelines” approaches. These became the de facto content control rules of the platforms in most areas, enforced through end-user agreements as interpreted by the platform companies, and punctuated by moments of public outrage that nudge these practices one way or another.
Copyright is different. The U.S. Supreme Court has long held that when copyright is the reason for prohibiting publication of words or images, there is usually no violation of the First Amendment. We will not rehearse here the decades of academic commentary explaining why that interpretation of the relationship between copyright and the First Amendment is incoherent. It is the law of the land, and for our purposes it explains how, faced with relative constitutional license, Congress developed in the DMCA a much more restrictive “notice-and-takedown” regime for speech challenged as infringing copyright. In doing so, it created a structure that, while not as restrictive as the NetzDG, has similar structural attributes. Platform providers can escape copyright liability for the acts of their users only if they maintain a system for copyright challenges: a copyright owner may demand removal of a specific piece of content, and the platform must take that content down unless the person who posted it submits a counternotification. This approach would encounter strong constitutional headwinds in any context but copyright. In the United States it is easier to remove a YouTube video showing students performing a show tune at a school play than a YouTube video showing neo-Nazis singing the “Horst Wessel Song.”
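Mechanically, notice-and-takedown reduces to a small state machine, sketched below under the simplifications just described (the identifiers are ours, and the statutory waiting periods are elided):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    UP = auto()
    TAKEN_DOWN = auto()
    RESTORED = auto()

@dataclass
class Posting:
    url: str
    status: Status = Status.UP

def takedown_notice(p: Posting) -> None:
    # On receiving a compliant notice, the platform removes the content
    # promptly to keep its safe harbor; it does not judge the merits.
    p.status = Status.TAKEN_DOWN

def counter_notification(p: Posting) -> None:
    # If the poster counternotifies, the material goes back up (after a
    # statutory waiting period, elided here) unless the claimant sues.
    if p.status is Status.TAKEN_DOWN:
        p.status = Status.RESTORED

p = Posting("https://example.com/video")
takedown_notice(p)
counter_notification(p)
print(p.status)  # Status.RESTORED
```

The asymmetry the text describes lives in the first function: removal is the default, immediate response, while restoration requires affirmative action by the speaker.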
The approach taken by CDA section 230 has been remarkably successful in supporting the development of online publishing platforms. But the same architecture is ill-suited to fine-grained moderation of human interaction. Whether by government mandate or voluntary company action, applying content moderation at massive scale is a particularly thorny challenge. There is a range of options for dealing with problematic speech: it can be removed at the source, blocked from circulation by intermediaries, removed or demoted by a search engine, or flagged for audiences as problematic. The hard part is that someone must first set standards and guidelines for what constitutes problematic speech, and then design a process to sift through billions of online posts, determine which are problematic, and review those decisions so that errors can be corrected. In the United States the default answer to the first challenge is to leave it to companies to decide how to deal with transgressing actors and content. Germany has chosen a different path, one that will almost certainly result in the removal of considerably more content. Companies are meanwhile expanding their infrastructure for monitoring and sorting content at massive scale, combining algorithms and human review. It is difficult to be optimistic about how well this can work at such scale. The promise of machine learning and artificial intelligence to accurately separate the good from the bad has not been realized, and the interim solution is hiring tens of thousands of workers to review content with company instructions in hand. These processes are aided by mechanisms that allow users to flag problematic content. Predictably, such mechanisms have their limits: they are easily gamed to attack rivals and political opponents. Our experience, in the United States and elsewhere, with both over- and underinclusiveness in existing systems for managing copyright or nudity leaves us very skeptical that these processes will work well, much less seamlessly.
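To make concrete both the triage logic and why flagging is easy to game, consider a toy rule of the kind platforms combine with human review. Everything here, the thresholds, weights, and feature names, is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    model_score: float          # classifier's estimate of a policy violation, 0..1
    flags: int                  # user reports, which brigades can inflate
    flagger_reputation: float   # 0..1, discounts coordinated abuse of flagging

def triage(post: Post, auto_threshold: float = 0.97,
           review_threshold: float = 0.6) -> str:
    """Toy triage combining an ML score with discounted user flags.
    Thresholds and weights are invented for illustration."""
    # Cap and discount raw flag counts so brigades cannot force removals.
    flag_signal = min(1.0, post.flags / 50) * post.flagger_reputation
    combined = 0.8 * post.model_score + 0.2 * flag_signal
    if combined >= auto_threshold:
        return "remove-pending-appeal"   # high confidence: act, allow appeal
    if combined >= review_threshold:
        return "human-review-queue"      # uncertain: route to reviewers
    return "leave-up"

# A heavily brigaded but low-scoring post still lands in human review
# rather than being removed outright.
print(triage(Post("p1", model_score=0.72, flags=40, flagger_reputation=0.3)))
```

Every parameter in such a rule is a policy choice in disguise, which is why the hard question remains who sets the standards, not how the pipeline is built.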
Self-Regulation and Its Discontents
For lack of better alternatives, the hardest questions of content moderation in the United States are being left in the hands of companies: whether the Daily Stormer should be treated like any other web publisher, whether conspiracy theorists should have equal standing on YouTube and in the Facebook newsfeed, and whether anti-vaxxers should be free to distribute information that will result in sickness and death.
Over the course of 2017 and early 2018, both Google and Facebook announced various measures intended to combat “fake news,” misleading information, and illegal political advertising. These measures combine various elements of systems already in place, and they will almost certainly suffer from similar difficulties and imperfections while delivering somewhat stronger enforcement. It is important not to get caught up in the supposed newness of the problem and to recognize that we are dealing with familiar classes of bad information: spam, phishing, sexual exploitation of children, hate speech, and so on.
The commercial bullshit or clickbait sites are the most familiar challenge. They are simply a new breed of spammers or search engine optimizers. It is feasible to identify them through some combination of machine learning based on traffic patterns, fact checking, and human judgment, likely outsourced to independent fact-checking organizations. Excluding clickbait factories is normatively the least problematic, since they are not genuine participants in the polity. That is why most of the announcements and efforts have been directed at this class of actors. Similarly, dealing with Russian or other foreign propaganda is normatively unproblematic, though technically more challenging because of the sophistication of the attacks.
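A sketch of the kind of traffic-pattern classifier contemplated above appears below. The features and data are wholly invented for illustration; a real system would fold in fact-checker and human judgments rather than rely on a model alone:

```python
# Toy clickbait-factory classifier trained on invented traffic features
# (share-to-read ratio, domain age, repeat-visitor share). The feature
# names, distributions, and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Columns: [share_to_read_ratio, domain_age_years, repeat_visitor_share]
legit = rng.normal([0.05, 12.0, 0.40], [0.02, 5.0, 0.10], size=(200, 3))
clickbait = rng.normal([0.30, 1.0, 0.05], [0.10, 0.8, 0.03], size=(200, 3))

X = np.vstack([legit, clickbait])
y = np.array([0] * 200 + [1] * 200)  # 1 = clickbait factory

clf = LogisticRegression().fit(X, y)

# A young domain with high sharing but almost no repeat readers.
new_site = [[0.25, 0.5, 0.08]]
print("P(clickbait) =", clf.predict_proba(new_site)[0, 1])
```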
The much more vexing problem is intentional political disinformation, including hyperpartisan hate speech that is harmful but not false. Perhaps in Germany it is imaginable to remove posts by the leaders of a political party (the AfD) supported by over 10 percent of the electorate. In the United States, private platforms are permitted under current interpretations of the First Amendment to censor political speech for whatever reason they choose. But widespread political censorship by the major private platforms would certainly generate howls of protest from large swaths of the political spectrum. Moreover, the most effective propaganda generally builds on a core set of true facts and constructs a materially misleading narrative around them. Our own efforts, and those of many colleagues studying the landscape of disinformation, propaganda, and bullshit, to create well-defined research instruments that allow trained coders to reliably identify this kind of manipulative propaganda leave us skeptical that a dependable machine-learning algorithm will emerge to solve these questions in the near future.
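One way to see the difficulty is to measure inter-coder agreement. The toy calculation below computes Cohen’s kappa for two hypothetical coders labeling stories as propaganda or news; the labels are invented, but the low kappa illustrates how two trained, good-faith coders can diverge on exactly the judgments a classifier would need as training data:

```python
# Inter-coder agreement sketch: Cohen's kappa computed by hand on
# invented labels from two hypothetical coders.
from collections import Counter

coder_a = ["prop", "news", "prop", "news", "news", "prop", "news", "news"]
coder_b = ["prop", "news", "news", "news", "prop", "prop", "news", "prop"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Chance agreement from each coder's marginal label frequencies.
pa, pb = Counter(coder_a), Counter(coder_b)
expected = sum((pa[k] / n) * (pb[k] / n) for k in pa.keys() | pb.keys())

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f} expected={expected:.2f} kappa={kappa:.2f}")
# Here observed agreement of 0.62 shrinks to kappa = 0.25 once chance
# agreement is removed: far too unreliable to train a classifier on.
```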
There have been some successful efforts to pressure advertisers to pull ads from particular programs. Alex Jones of Infowars offers the clearest example of a publisher who has repeatedly published defamatory falsehoods, like Pizzagate, that have been debunked by many fact checkers. One company, AdRoll, suspended its advertising relationship with Infowars, though Google has not turned off advertising for Infowars videos; some brands, however, have instructed Google not to run ads for their products alongside those videos. In Jones’s case this may not be too great a loss, since his business model is largely based on directly selling products he markets on his shows (over 20 percent of Infowars outgoing traffic is to the site’s store, where Jones sells his own branded supplements for “male vitality,” brain enhancement, and the like). These campaigns may reduce the economic incentives for some disinformation sites, and they are cathartic for the activists pushing the companies to sever ties. But they do not appear to offer a systematic, scalable, long-term response to the broader phenomenon of disinformation in the media ecosystem. Many of the most widely visited, shared, or linked sites in the right wing of the American media ecosystem engage in disinformation regularly or episodically, and asking the platforms to solve the problem by blocking a broad swath of the right-wing media ecosystem would be palpably antidemocratic.
In 2013 Pew reported that about one-quarter of American adults watch only Fox News among the cable news channels.11 A later Pew study found that the number of Americans who preferred getting news from radio, rather than television, was about one-quarter the number who preferred television news.12 Yet a third Pew study found that, after Fox News, the most trusted news sources for consistently conservative respondents were the radio shows of Sean Hannity, Rush Limbaugh, and Glenn Beck.13 Together these findings suggest that somewhere between 25 and 30 percent of Americans willingly and intentionally pay attention to media outlets that consistently tell them what they want to hear, and that what they want to hear is often untrue. For the rest of the population to ask a small oligopoly of platforms to prevent those 25 to 30 percent from getting the content they want is, to say the least, problematic. Platforms can certainly help with the commercial pollution, and to some extent with foreign propaganda. But asking platforms to solve the fundamental political and institutional breakdown represented by the asymmetric polarization of the American polity is neither feasible nor normatively attractive.
Political Advertising: Disclosure and Accountability
A much more tractable problem, for both law and self-regulation, is online political advertising. The problem presents three distinct aspects. First, online ads have to date been exempt from the disclosure requirements that normally apply to television, radio, and print advertising. This is a holdover from an earlier, “hands off the internet” laissez-faire attitude that can no longer be justified given the size of the companies involved and the magnitude of the role they play in political advertising. Second, online advertising may be substantially more effective and narrowly targeted in ways that subvert judgment and are more amenable to experimentally validated behavioral manipulation. Cambridge Analytica’s claims were likely overstated, and the 2016 cycle likely did not see these developing techniques of psychographically informed behavioral marketing deployed with any measurable success. But there is little doubt that the confluence of techniques for analysis of very large datasets, A/B testing in product marketing, and the rapidly developing behavioral sciences will continue to improve techniques of emotional and psychological manipulation. The literature we reviewed in Chapter 9 makes clear that the claims of the effectiveness of narrowly targeted advertising are not yet scientifically proven. But the continued efforts of industry suggest that platforms will continue to increase their ability to individualize and will seek to increase their effectiveness at manipulating the preferences of their targets. The third problem is that, just as we saw with Russian sockpuppets and bots, behavioral marketing techniques often do not take the form of explicit advertising. Rather, they rely on paid influencers who deceptively manipulate genuine users. Any regime that focuses its definitions purely on explicit paid advertising, and does not address to some extent the problem of masked influence, will push propaganda and political marketing from the regulated and explicit to the unregulated underground.
The Honest Ads Act
The Honest Ads Act introduced by Senators Amy Klobuchar, Mark Warner, and John McCain was the first significant legislative effort to address the new challenges of network propaganda. The bill sought to do three things. First, it separated paid internet communications from unpaid communications, incorporating paid communications into the normal model adopted for communication generally, and leaving volunteer or unpaid communications alone. Second, it required disclaimers on online advertising, so that people exposed to political advertising can see that it is political advertising, not part of the organic flow of communications, and who is paying for it. And third, and perhaps most important, it required the creation of a fine-grained public database of online political issue advertising, reaching well beyond electioneering.
Paid Internet as Political Communication. First, the bill included “paid Internet, or paid digital communications” as part of the normal framework of contributions and expenditures, addressing what was an anachronistic exclusion given the dramatic increase in the significance of internet and social media as core modes of political communication. And the bill also expanded electioneering—express advocacy for or against a candidate by anyone just before an election—to include placement or promotion on an online platform for a fee. The use of “paid” and “for a fee” is clearly intended to exclude genuine grassroots campaigns. By doing so, the bill recognized the importance of preserving the democratizing aspect of the internet—its capacity to empower decentralized citizens to self-organize rather than depending on the established parties and wealthy donors. This latter provision is also the only one that might be interpreted to apply not only to payments made to the platforms themselves, as with advertising, but also to payments made to behavioral social-media marketing firms that specialize in simulating social attention to a topic or concern by deploying paid human confederates or automated and semi-automated accounts (botnets and sockpuppets).
Disclaimers on Facebook and Google Ads. Second, the bill required online advertising to include the kinds of disclaimers television viewers have come to expect: “paid for by” or “I am so and so and I approve this message.” These provisions of the bill emphasize the anomaly that inconclusive Federal Election Commission (FEC) advisory opinions have enabled Google and Facebook to market to political advertisers not only reach and focus but also the ability to remain masked. In 2010 Google persuaded a divided FEC that its ads were too short to include a full disclaimer, and the commissioners split between those who wanted simply to exclude Google’s ads from the disclaimer requirements altogether and those who wanted to condition the exclusion on the ad carrying a link to the advertiser’s site, where the disclaimer would appear prominently.14 In 2011 Facebook tried to piggyback on Google’s effort by arguing not only that its own advertising was too brief to allow the disclaimer to show on its face, but also that, because much of the advertising directed users not to a campaign site but to news stories supportive of a campaign, the FEC should adopt the more complete exclusion supported by some of its members in the 2010 opinion.15 In other words, because Facebook’s ads were designed to be short to fit users’ usage patterns, and because ads often sent users to media sites rather than to a campaign site where a disclaimer could be displayed, imposing any disclaimer requirement on Facebook advertising, even one satisfied merely by disclosure on the target site, was “impractical,” a recognized exception to the disclaimer requirement in the act.
The Honest Ads Act would explicitly reject the possibility that advertising on social media and search would be covered by this “impractical” exception. The idea that the biggest and most sophisticated technology companies in the world can build driverless cars and optimize messaging and interfaces by running thousands of experiments a day, but cannot figure out how to include an economical indication that a communication is a political ad, or to construct a pop-up or other mechanism that lets users who want to know who is behind the ad find out, is laughable. The bill simply states a clear minimal requirement: users have to know the name of the sponsor and have to be given the means to get all the legally required information about the sponsor without being exposed to any other information.
The necessity of this kind of provision is clear. We assess the credibility of any statement in the context of what we think the agenda of the speaker is. That is why we require political advertising to disclose its sponsor to begin with. If the Clinton campaign had targeted evangelical voters with communications emphasizing her opponent’s comments on the Access Hollywood video, those voters would have treated the communications with a grain of salt even if their contents were true. The same would be true if the Trump campaign had targeted African American voters with narrowly tailored targeted ads quoting Hillary Clinton’s use of the term “superpredator” in the context of a 1996 criminal law reform debate.16 There is nothing wrong with trying to persuade your opponents’ base that their candidate is unworthy of their support. But doing so behind a mask undermines those voters’ ability to judge your statements fairly, including by discounting properly the reliability or honest intentions of the speaker.
In 2018 the Federal Election Commission invited comments on its own version of the disclosure requirements. These are tailored to the FEC’s mandate to deal with electioneering and largely concern the nature of the disclaimer requirement. One option would simply treat online video like television, online audio like radio, and online text like print. The other tries to offer more flexibility for online platforms to tailor their disclosure to the technology. Presumably, the more flexibility the platforms have to design the disclosures, the easier it will be for them to design and test forms of disclosure that comply with the letter of the law but offer their clients the ability to minimize the number of recipients who are exposed to the disclosure. Our own sense is that, if in 2010 the internet companies deserved protection from overly onerous requirements, by 2018 the risk is inverted. Starting with a very demanding and strict requirement and then loosening the constraints through case-by-case advisory opinions seems the more prudent course today.
A Public Machine-Readable Open Database of Political Issue Advertising
The major innovation of the bill is to leverage the technological capabilities of online advertising to create a timely, publicly open record of online advertising that would be available “as soon as possible” and would be open to public inspection in machine-readable form. This is perhaps the most important of the bill’s provisions, because, executed faithfully, it should allow public watchdog organizations to offer accountability for lies and manipulation in almost real time. Moreover, this is the only provision of the bill that applies to issue campaigns as well as electoral campaigns, and so it is the only one where the American public will get some visibility into the campaign dynamics on any “national legislative issue of public importance.”
The bill requires the very biggest online platforms (those with over 50 million unique monthly U.S. visitors) to place every ad purchased by anyone who spends more than $500 a year on political advertising into an open, publicly accessible database. The data would include a copy of the ad, the audience targeted, the views, and the time of first and last display, as well as the name and contact information of the purchaser. The online platforms already collect all of this data as a function of their basic service to advertisers and their ability to price their ads and bill their clients. The additional requirement of formatting this data in an open, publicly known format and placing it in a public database is incrementally trivial compared to the investments these companies have made in developing their advertising base and their capacity to deliver viewers to advertisers. The benefit, by contrast, would be substantial: such a database would allow campaigns to be each other’s watchdogs, keeping each other somewhat more honest and constrained, and, perhaps more important, would allow users anywhere on the net, from professional journalists and nonprofits to concerned citizens with a knack for data, to see what the campaigns and others are doing and to report promptly on these practices, offering us, as a society, at least a measure of transparency about how our elections are conducted. This public database could allow the many and diverse organizations with significant expertise in machine learning and pattern recognition to deploy their considerable capabilities to identify manipulative campaigns by foreign governments and to help Americans understand who, more generally, is trying to manipulate public opinion and how.
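To make the machine-readable requirement concrete, each disclosure could be serialized as a simple record. The sketch below is our own hypothetical rendering of the data elements the bill enumerates; the bill itself prescribes no field names or format, so everything here is illustrative.

```python
# A hypothetical rendering of one record in the public ad database.
# The bill enumerates the disclosure items (a copy of the ad, the
# audience targeted, views, display window, purchaser identity) but
# does not prescribe field names or a serialization format.
from dataclasses import dataclass

@dataclass
class PoliticalAdRecord:
    ad_copy_url: str        # archived copy of the ad itself
    audience_targeted: str  # description of the audience targeted
    views: int              # impressions actually delivered
    first_displayed: str    # time of first display (ISO 8601)
    last_displayed: str     # time of last display (ISO 8601)
    purchaser_name: str     # name of the purchaser
    purchaser_contact: str  # contact information of the purchaser
```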
The database is particularly critical because Facebook and Google will continuously improve their ability to deliver advertisements finely tuned to very narrowly targeted populations. In the television or newspaper era, if a campaign wanted to appeal to neo-Nazis, it could do so only in the public eye, suffering whatever consequences that association entailed with other populations. That constraint on how narrow, incendiary, or outright false a campaign can be is disappearing for candidates and their supporters in an era when Facebook can already identify and target advertising to populations in the few-thousands range, down to the level of American followers of a German far-right ultranationalist party.17 Hypertargeted marketing of this sort frees a campaign from being associated with particularly controversial messages while still being able to use them on the very narrow populations to which they appeal. A database that is publicly accessible and allows many parties to review and identify particularly false, abusive, or underhanded microtargeted campaigns will impose at least some pressure on campaigns not to issue messages that they cannot defend to the bulk of their likely voters.
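An open database of such records would also let watchdogs mechanize the first pass of that review. A minimal sketch, assuming records shaped like the hypothetical PoliticalAdRecord above and an invented audience-size cutoff (the bill sets no such threshold):

```python
# Surface narrowly delivered political ads for human review.
# The 5,000-impression cutoff is invented for illustration,
# echoing the "few thousands" targeting granularity noted above.
def flag_microtargeted(records, max_views=5000):
    """Return ads whose delivery was narrow enough to merit review."""
    return [r for r in records if r.views <= max_views]

ads = [
    PoliticalAdRecord("https://example.org/ad1.png", "followers of a fringe party",
                      2400, "2018-03-01T00:00Z", "2018-03-08T00:00Z",
                      "Committee A", "a@example.org"),
    PoliticalAdRecord("https://example.org/ad2.png", "all adults in Ohio",
                      1800000, "2018-03-01T00:00Z", "2018-03-31T00:00Z",
                      "Committee B", "b@example.org"),
]
print(flag_microtargeted(ads))  # only Committee A's narrowly delivered ad
```

Anything a filter of this kind surfaces would still require human judgment; the point is simply that open, machine-readable data makes the narrowest campaigns findable at all.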
In 2017 Google released a plan to voluntarily implement some of these affordances. It announced that it would publish a transparency report about who is buying election-related ads on Google platforms and how much money is being spent; create a publicly accessible database of election ads purchased on AdWords and YouTube, with information about who bought each ad; and implement in-ad disclosures, identifying the names of advertisers running election-related campaigns on Google Search, YouTube, and the Google Display Network via Google’s “Why This Ad” icon. These are all desirable features, and they may offer some insight into how these elements operate when adopted voluntarily. But the electoral system and its integrity are fundamentally matters of public concern and should be regulated uniformly, across all companies and platforms, and subject to appropriate administrative and judicial procedures. Falling back on private action may be the only first step available, given a dysfunctional legislative process, but it cannot be the permanent solution for a foundational piece of the democratic process.
What About Botnets, Sockpuppets, and Paid Social Promoters?
Addressing paid ads will not obviously address “astroturf” social influence—bots, sockpuppets, and paid influencers. Particularly in Chapter 8, in our discussion of Russian efforts, we saw that coordinated campaigns aim to simulate public engagement and attention, and to draw other, real citizens to follow the astroturfing networks in terms of agenda setting, framing of issues, and levels of credibility assigned to various narratives. This practice is used by marketing firms as well as foreign governments.
If regulation stopped at “paid advertising” as traditionally defined, the solution would be significant but partial even with regard to paid advertising. Historically, when a broadcast station or editor was a bottleneck that needed to be paid to publish anything on the platform, defining “paid” as “paid to the publisher” would have made sense. Here, however, a major pathway to communicating on a platform whose human users receive the service for free is to hire outside marketing firms that specialize in using that free access to provide a paid service to the person seeking political influence. Search engine optimizers who try to manipulate Google search results to come out on top, or behavioral marketing firms that use coordinated accounts, whether automated or not, to simulate social engagement, are firms that offer paid services to engage in political communication. The difficulty posed by such campaigns is that they will not appear on the platforms as paid advertising, because those who run them simulate authentic accounts on the networks. The marketers, whether a Russian information operations center or a behavioral marketing firm, engage with the network through multiple accounts, as though they were authentic users, and control and operate the accounts from outside the platform.
The Honest Ads Act definition of “qualified Internet or digital communication” is “any communication which is placed or promoted for a fee on an online platform.” This definition is certainly broad enough to encompass the products sold by third-party paid providers whose product is to use the free affordances of the online network to produce the effect of a political communication, and to do so for a fee. As a practical matter, such a definition would reduce the effectiveness of viral political marketing that uses botnets or sockpuppets to simulate authentic grassroots engagement, because each bot, sockpuppet, or paid influencer would have to carry a disclaimer stating that it is paid and by whom. Although the whole purpose of such coordinated campaigns is to create the false impression that the views expressed arise authentically in the target Facebook or Twitter community, the burden on expression is no greater than the burden on any political advertiser who would have preferred to communicate without being clearly labeled as political advertising. The party seeking to communicate is still permitted to communicate, to exactly the same people (unless the false accounts violate the platform’s terms of service, but it is not a legitimate complaint for marketers that a campaign disclosure rule makes it harder for them to violate the terms of service of the platforms they use). The disclaimer requirement would merely remove the misleading representation that the communication comes from a person not paid to express such views.
While the general language of the definition of a qualified internet communication is broad enough to include paid bot and sockpuppet campaigns, and the disclaimer provisions seem to apply, the present text of the bill seems to exclude such campaigns from the provision that requires online platforms to maintain a public database of advertisements. The definition of “qualified political advertisement” to which the database requirement applies includes “any advertisement (including search engine marketing, display advertisements, video advertisements, native advertisements, and sponsorships).” It would be preferable to include “coordinated social network campaigns” explicitly among the list of examples of “advertisement.” It is possible and certainly appropriate for courts to read “native advertisements” to include a sockpuppet or bot pushing a headline or meme that supports a candidate or campaign. But there is a risk that courts would not. Furthermore, the provision requires platforms only to keep a record of “any request to purchase on such online platform a qualified political advertisement,” and advertisers are only required to make available the information necessary for the online platform to comply with its obligations. It would be preferable to clarify that advertisers owe an independent duty to disclose to the platform all the information needed to include paid coordinated campaigns in the database, even if the request for the advertisement and the payment are not made to the platform.
As with the disclaimer requirements applied to explicit advertising more generally, clarifying that the disclosure and disclaimer requirements apply to coordinated campaigns will not address every instance of media manipulation. A covert foreign information operation will not comply with new laws intended to exclude it any more than it complies with present laws designed for the same purpose. But just as the disclosure requirements and database for advertisements would limit the effectiveness of efforts by would-be propagandists (campaigns, activists, or foreign governments) to leverage the best data and marketing techniques that Google and Facebook have to offer, so too would an interpretation of the bill that extends to commercial marketing firms providing synthetic social-behavioral marketing through paid sockpuppets, botnets, or human influencers. This will not address all propaganda, but it will certainly bring some of the most effective manipulation tactics into the sunlight.
A Public Health Approach to the Media Ecosystem
The public database called for in Honest Ads presents a model for a broader public health approach to our media ecosystem. At the moment, Twitter offers expensive but broadly open access to its data, which explains why so much of the best research on fake news, bots, and so forth is conducted on Twitter. Facebook, by contrast, offers outside researchers very limited access to its data. Google occupies a position somewhere in the middle, with reasonable access to YouTube usage patterns, for example, but less visibility into other aspects of search and advertising. Each of these companies has legitimate considerations concerning user privacy, and each has legitimate proprietary interests. And as each of these companies weighs its own commercial interests, those interests will often align imperfectly, if at all, with the public interest. In order to understand how our information environment is changing, we need a mechanism through which to offer bona fide independent investigators access to data that would allow us, as a society, to understand how various changes in how we communicate (whether driven by technological change, regulatory intervention, or business decisions) affect the levels of truth and falsehood in our media ecosystem and the degrees of segmentation and polarization.
Our communications privacy as individuals and citizens is an important concern, but no more so than our privacy interest in health data. And yet we have developed systems that allow bona fide health researchers, under appropriate legal constraints, contractual limitations, and technical protections, to access the health data of millions of residents to conduct detailed analyses of patterns of disease, treatment, and outcomes. We can no more trust Facebook to be the sole source of information about the effects of its platform on our media ecosystem than we could trust a pharmaceutical company to be the sole source of research on the health outcomes of its drugs, or an oil company to be the sole source of measurements of particulate emissions or levels of CO2 in the atmosphere. We need a publicly regulated system that will regulate not only the companies but also the researchers who access the data, so that they do not play the role of brokers to companies like Cambridge Analytica.
What About Defamation Law? Intentional or Reckless Falsehoods
As we brought this book to a close, the family of Seth Rich was suing Fox News for its false and disturbing story, and Alex Jones had been forced to retract his Pizzagate story, likely under threat of a lawsuit. Nothing under present or proposed election law would touch this kind of intentional lying. As Peter Thiel showed when he funded “Hulk Hogan’s” lawsuit that bankrupted Gawker, a motivated actor with enough money and patience can find a case with which to shut down a publication, even under the very speech-protective American defamation framework. Should we have a system that would allow Jeb Bush to sue Alex Jones for portraying him as having “close Nazi ties”? Under normal circumstances such a path should raise concerns for anyone properly concerned with robust political speech. Certainly, defamation has been used in many countries as a way of silencing the government’s critics, and the strict limits under the New York Times v. Sullivan line of cases make this path appropriately difficult. The level of bile and sheer disinformation that characterized the 2016 election is such that perhaps raising the cost of reckless or intentional defamatory falsehood as a business model is, at least, a reasonable path to moderating the most extreme instances of falsehood. But the persistence of the tabloid model in the United Kingdom, despite that country’s far more plaintiff-friendly defamation law, suggests that even this approach would be of only moderate use. Whether such an approach is worth the candle depends on one’s empirical answer to the question of how much of the defamation comes from fly-by-night fake news outlets, which would be effectively judgment proof, and how much comes from a core number of commercial sites that have made it their business model to sell false information and peddle conspiracy theory.
Institutionalizing Fact Checking
An approach that has received a good bit of public attention and some platform integration focuses on efforts to institutionalize fact checking and generate ground-truth labels and markings. First among these are the existing fact-checking sites and the efforts by platforms to use their collective judgment to remove, demote, or label stories. These organizations (PolitiFact, Snopes, FactCheck.org, and the Washington Post’s Fact Checker) are all efforts to institutionalize and professionalize the process of checking how true or false the facts in a given story are. Of these organizations only one, PolitiFact, systematically reports both true and false judgments for a given media source or personality, so as to give some broad overall sense of the veracity of an outlet or speaker. All of these organizations limit themselves to “fact” checking, so they are not useful for identifying sites that are extremist or hyperpartisan, as opposed to simply false. And they all suffer from one failure that most newer projects will also have to contend with: they are treated by the media outlets and users of the right-wing media ecosystem as systematically biased, and, as our work and the work of other researchers finds, they are generally not visited, shared, linked to, or believed by users on the right.
Despite being ignored by users and outlets on the right, fact-checking sites serve an important role for the majority of people outside the right. For centrists and people of mixed views, who are not entirely in the thrall of the right-wing network, they offer an anchor through which to assess what they read and see. For people on the left, they offer an anchor in reality, a way of persuading themselves that they are not the simple mirror image of the right-wing propaganda network. Nonetheless, the empirical evidence on the effects of fact checking even for this population is mixed. Some studies suggest that correction and fact checking merely reinforce recall by repeating the story, so that over time all that is left is a sense that “I’ve heard this before.” Others suggest that correction fails, if it does, only for highly salient questions that conflict with a person’s prior beliefs.18 Furthermore, fact checking takes time and, at least for short-term belief formation, is likely too slow to influence immediate perceptions. Nonetheless, our case studies show that some conspiracy theories, like Uranium One or Seth Rich, remain prevalent for days or weeks. In these cases, it seems plausible that an independent source, trusted at least by people outside the echo chamber from which the false stories emerge, could play a useful role.
Regardless of their impact on media consumers, fact-checking organizations play a valuable role for academics seeking to study the diffusion of falsehoods. Determining “ground truth” is an extremely expensive and difficult process, fraught with judgment calls. Most scientific research groups are not set up to produce such ground-truth judgments about politically salient news, and so referring to some minimal number of decisions by these independent sites becomes a “ground truth” on which analyses can then proceed. A problem with all of the fact-checking sites is that they can check only a fraction of the overall universe of statements, true and false, that could be made, and they systematically spend their time only on claims that are at least arguably false. Looking at their statistics therefore systematically overstates the amount of falsehood, because the denominator, the total pool of stories checked, includes only stories that are both (a) suspect and (b) sufficiently visible to draw the attention of one or usually more fact checkers (the toy calculation after this paragraph makes the arithmetic concrete). So while these organizations play an important role, their statistics overstate the prevalence of falsehoods in the media ecosystem. Several new projects, such as CrossCheck or the Credibility Coalition, are trying to redress some of these problems, particularly by developing a richer ontology of offensive and misleading forms. Another important reform for these organizations would be to invest in creating a baseline ratio of truth to fiction for media outlets and politically prominent institutions by dedicating some of their resources to assessing a properly randomized sample of stories and statements from these outlets and speakers.
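The denominator problem is easy to see with invented numbers. A toy calculation, in which every figure is hypothetical and chosen only to show the direction of the bias:

```python
# Toy illustration of selection bias in fact-checker statistics.
# All numbers are invented purely for illustration.
total_published = 1000   # everything an outlet published in a period
actually_false = 100     # a 10 percent underlying falsehood rate

# Fact checkers review only suspect claims: say, all 100 falsehoods
# plus 50 contested stories that turn out to be true.
checked = 150
checked_false = 100

print(f"underlying falsehood rate: {actually_false / total_published:.0%}")  # 10%
print(f"rate among checked items:  {checked_false / checked:.0%}")           # 67%
```

Assessing a properly randomized sample of stories, as proposed above, would instead estimate something close to the underlying 10 percent rate.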
It is demonstrably true that institutional fact checking has not prevented the continued worsening of the epistemic crisis we describe in this book. But more and better fact checking can at least help us better understand the nature and the scope of the problem.
Institutionalizing Fact Checking
An approach that has received a good bit of public attention and some platform integration focuses on efforts to institutionalize fact checking and generate ground-truth labels and marking. First among these are existing fact-checking sites and the efforts by platforms to use their collective judgment to remove, demote, or label stories. These organizations, PolitiFact, Snopes, Factcheck.org, the Washington Post’s Fact Checker, are all efforts to institutionalize and professionalize the process of checking how true or false the facts in a given story are. Of these organizations only one, PolitiFact, systematically reports both true and false judgments by a media source or personality, so as to give some broad overall sense of the veracity of an outlet or speaker. All these organizations limit themselves to “fact” checking, so they are not useful in determining sites that are extremist or hyperpartisan, as opposed to simply false. And they all suffer from the one failure that most newer projects will also have to contend with. They are treated by the media outlets and users of the right-wing media ecosystem as systematically biased, and as our work and the work of other researchers finds, are generally not visited, shared, linked to, or believed by users on the right.
Despite being ignored by users and outlets on the right, fact-checking sites serve an important role for the majority of people outside the right. For centrists and people of mixed views, who are not entirely in the thrall of the right-wing network, they offer an anchor through which to assess what they read and see. For people on the left, they offer an anchor in reality—a way of persuading themselves that they are not in the simple mirror image of the right-wing propaganda network. Nonetheless, the empirical evidence on the effects of fact checking even for this population are mixed. There are some studies that suggest that correction and fact checking merely reinforces recall by repeating the story and that over time all that is left is a sense that “I’ve heard this before.” Others suggest that correction fails, if it does, only for highly salient questions that conflict with a person’s prior beliefs.18Close Furthermore, fact checking takes time and, at least for short-term belief formation, is likely too slow to influence immediate perceptions. Nonetheless, our case studies show that some conspiracy theories, like Uranium One or Seth Rich, remain prevalent for days or weeks. In these cases, it seems plausible that an independent source, trusted at least by people outside the echo chamber from which the false stories emerge, could play a useful role.
Regardless of their impact on media consumers, fact-checking organizations play a valuable role for academics seeking to study diffusion of falsehoods. Determining “ground truth” is an extremely expensive and difficult process, fraught with judgments. Most scientific research groups are not set up to produce such ground-truth judgments about politically salient news, and so referring to some minimal number of decisions by these independent sites becomes a “ground truth” on which analyses can then proceed. A problem with all of the fact-checking sites is that they can check only a fraction of the overall universe of statements, true and false, that can be made, and they systematically only spend their time on claims that are at least arguably false. Looking at their statistics systematically overstates the amount of falsehood, because the numerator of total stories includes only stories that are both (a) suspect and (b) sufficiently visible to draw the attention of one or usually more fact checkers. So while these organizations play an important role, they overestimate the prevalence of falsehoods in the media ecosystem. Several new projects, such as CrossCheck or the Credibility Coalition, are trying to redress some of these problems—particularly by developing a richer ontology of offensive and misleading forms. Another important reform for these organizations would be to invest in creating a baseline ratio of truth to fiction for media outlets and politically prominent institutions by dedicating some of their resources to assessing a properly randomized sample of stories and statements from these outlets and speakers.
It is demonstrably true that institutional fact checking has not prevented the continued worsening of the epistemic crisis we describe in this book. But more and better fact checking can at least help us understand the nature and scope of the problem more clearly.
Media Literacy Education
One class of proposed interventions has been media literacy education. This is a vitally important and active area of research, experimentation, and applied work. Survey after survey has demonstrated that many people are shockingly poor at evaluating the accuracy of news reporting. A natural response is to improve education and explicitly train consumers to be more discerning about what they read and believe. Some programs are being rolled out specifically to address the “fake news” problem by teaching strategies for ferreting out disinformation and propaganda in online news sources. These are laudable projects and may help to chip away at the problem. But media literacy education is not a panacea and will not by itself disarm the incredibly resilient psychological and social-identity-based factors that so often lead us astray. As foundations and government entities invest in developing these efforts, we suggest that they at least acknowledge how little evidence there is that media literacy training will relieve the kinds of pressures we identified in this volume. There appears to be little evidence that improvement in the ability to answer test or classroom media literacy questions actually translates into critical viewing and listening when students consume media in the real world.19 A more recent, trenchant critique of media literacy comes from danah boyd, whose years of research on youth and media have led her to hypothesize that media literacy efforts have trained media consumers to be distrustful of all media and, in a perverse way, less discerning about what is credible and what is not.20 Given the considerable faith and resources being invested in media literacy training, the apparent dearth of evidence, and the possibility of negative implications, we suggest that such programs be well instrumented to assess their positive and negative effects.
Thinking About Solutions
We began this chapter with the acknowledgment that our diagnosis of the present information disorder makes identifying solutions difficult. Solutions based on misdiagnosis, particularly on imagining that Facebook, or bots, or the Russians are the core threat, will likely miss their mark. Some level of effort from platforms, backed by legal liability as necessary, could help clean platforms of some of the garbage they now carry; but regulation informed by misdiagnosis, aimed at the wrong targets, will almost certainly lead to over-censorship. There are important discrete interventions that can help alleviate the present sense of disorder. In particular, we emphasize efforts to make political advertising more transparent and susceptible to public scrutiny, because we see large-scale preference manipulation as the core of the platform business, and we see that core business presenting a profound future threat to democracy.
But our central argument has been that the present source of information disorder in American political communications is the profound asymmetry between the propaganda feedback loop that typifies the right-wing media ecosystem and the reality-check dynamic that typifies the rest of the media system. The most important attainable change in the face of that asymmetry would be to the practice of professional journalism. Our findings make clear that mainstream professional journalists continue to influence the majority of the population, including crossover audiences exposed both to right-wing propaganda and to journalism in mainstream media. We argued in Chapter 6 that the present journalistic practice of objectivity as neutrality has perverse effects in the media ecosystem we document here. By maintaining the “one side says x, the other side says y” model of objectivity in the presence of highly asymmetric propaganda efforts, mainstream media become sources of legitimation and amplification for the propagandists. Here, we suggest that a shift in emphasis in how journalists practice objectivity, from demonstrative neutrality to accountable verifiability, could counteract some of the reinforcement and legitimation that the present practice creates against the background of highly asymmetric propaganda practices. This shift will not change the perceptions of the 25 to 30 percent of the population that attends purely to the right-wing media ecosystem, but it could make a significant difference to crossover audiences, and it would prevent erosion of the present patterns of reliance on objective media among partisan audiences on the left.
A more foundational change is, at present, aspirational: that Republican leaders recognize the dangers the propaganda feedback loop poses to American democracy and find a way to lead their party and voters out of it. Only those who have credibility and power within the partisan media sphere stand a chance of breaking the destructive cycle. While the revolution in talk radio and Fox News has provided Republicans with a highly mobilized core of supporters, it has done so at the expense of pulling the party far to the right, disconnecting its base from reality, and preventing party leaders from delivering, and acting on, bad news when bad news turns out to be the case. This change is extremely unlikely to happen, however, as long as the propaganda feedback loop delivers electoral gains. And such an effort, even if undertaken, would face the daunting challenge of communicating with a population exposed to decades of propaganda that has instilled in them a profound distrust of messages that do not conform to their partisan beliefs.