Media Bias Study Review
Updated March 14, 2021
The underlying data offer some useful findings, but flaws in study design and methodology leave the central question unaddressed. The stated conclusions are not supported by the experimental evidence and are employed to push a political narrative.
The article “There is no liberal media bias in which news stories political journalists choose to cover” by political scientists Hans Hassell, John Holbein, and Matthew Miles was published in the April 2020 issue of Science Advances.(1) This article has become a prominent part of the public discussion. In January 2021, it was returned on the first page of various Google searches related to media bias, often in the top three.
The researchers applied a validated methodology to data from journalists’ Twitter feeds to identify biases of 6,801 of the 13,500 journalists in their sampling frame. They conducted a large-scale email correspondence experiment and conjoint survey of journalists and editors.
The correspondence experiment included an email to journalists which “appeared to be from a campaign staffer” of a fictitious candidate for state legislature named Bryan Larson who “was about to announce his candidacy within the next week.” Candidate party and political positions varied according to defined text cited in supplemental materials.(2)
A “conjoint experiment” was embedded in a separate email survey. Information regarding two fictitious candidates ostensibly about to announce their candidacy was randomized, including names, party affiliations, and characteristics. Respondents were asked “to indicate which of the two candidate announcements they would send a reporter to cover in person” after indicating to journalists “that they were in a situation where the timing of the announcements, their location, and the staffing limitations of the paper are such that the newspaper is unable to have a reporter at both announcements.” These instruments were intended to evaluate “the perceived newsworthiness of a candidate announcement.”
The authors found from Twitter data that “journalists are dominantly liberal and often fall far to the left of Americans…66% are even more liberal than former President Obama, 62.3% are to the left of the median Senate Democrat (in the 114th Congress), and a full 14.5% are more liberal than Alexandria Ocasio-Cortez (one of the most liberal members of the House).” They noted this finding is consistent with self-reported political preferences of journalists as well as public perceptions documented across multiple national surveys.
Non-bounce response rates of 13.1% for the survey and 22% for the fictitious campaign email were reported. The authors acknowledged their response rate is “on the lower end of response rate in correspondence studies.” There was no statistically significant difference in positive journalist responses to interview requests from the fictitious candidate for state legislature, regardless of party affiliation and stated political views. The hypothetical asking journalists to choose between in-person coverage of two candidates with different affiliations and characteristics also found no significant difference.
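To make the “no significant difference” comparison concrete, the following is a minimal sketch of a two-proportion z-test of the kind typically used to analyze correspondence experiments. The counts below are hypothetical illustrations chosen for this sketch, not the study’s actual data:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical counts: positive replies to the "Democratic" vs the
# "Republican" version of the fictitious-candidate email.
z, p = two_proportion_z(success_a=150, n_a=700, success_b=140, n_b=700)
print(f"z = {z:.3f}, p = {p:.3f}")  # p > 0.05 -> fail to reject equal rates
```

A p-value above the conventional 0.05 threshold means the null hypothesis of equal response rates is not rejected; as discussed later in this review, that is weaker than affirmative evidence that no difference exists.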
In the Science Advances article, the researchers briefly acknowledge the limitations of their work. The low email response rate warrants caution. They noted that “correspondence experiments do not capture all forms of potential bias,” and elsewhere, that their study “design may not generalize to all potential news stories.”
The authors noted that their research is limited to evaluation of selection or gatekeeping bias, not presentation bias. They summarized their findings in a Washington Post article noting that “journalists show a great deal of impartiality in the types of candidates that they choose to write about when a potential story is presented to them.”(3) This formulation acknowledges that the authors’ findings are limited specifically to coverage requests, at least in a generic form without accompanying complimentary or derogatory revelations.
Analysis and Conclusions
The research breaks down at this stage. The authors’ previously-acknowledged cautions and limitations are disregarded completely. The hypothetical candidate interview request and the conjoint survey candidate coverage query are extrapolated without evidence to be proxies for political news stories of all kinds.
The abstract claims: “we show definitively that the media exhibits no bias against conservatives (or liberals for that matter) in what news that they choose to cover.” In the article text, the authors wrote: “Our well-powered correspondence experiment allows us to confidently rule out even very slight biases against conservatives. This implies that journalists do not exhibit ideological gatekeeping bias: that liberal media bias does not manifest itself in the vital early stage of news generation despite strong reasons to think it might.”
None of these claims are logical conclusions from the research findings. Even the article’s title is not supported by the research data. The title of the Washington Post essay (“Journalists may be liberal, but this doesn’t affect which candidates they choose to cover”) is a far more accurate representation of the study findings than the title of the Science Advances paper (“There is no liberal media bias in which news stories political journalists choose to cover”). Some assertions in the Post essay also attempt to extend the authors’ findings to political news stories more broadly.
Yet the authors construe their extrapolations as fact, avoiding more substantive discussion of limitations. In the Washington Post article, they concluded that “powerful norms of balanced coverage in the journalism profession” prevent gatekeeping bias, with no attempt to consider other potential explanations. The article concludes with a call for further research to explain why gatekeeping bias is allegedly almost entirely absent from the US media. Claims that their research has shown “definitively that the media exhibits no bias…in what news that they choose to cover” are premature.
A more appropriate call would be for further research to evaluate whether the research findings can be reproduced in scenarios which more accurately represent public concerns regarding gatekeeping bias, or how reliably these findings correspond to observed actual coverage of political stories across outlets with different leanings.
Problems and Discrepancies
The authors’ exuberant triumphalism in claiming to have definitively shown no bias and ruling out “even very slight biases” in political news coverage decisions generally involves inductive leaps far beyond the narrow experimental parameters. Their self-characterization of their survey instrument as “ideal” might be true for ease of implementation and analysis, but not for relevance. The authors quickly abandon their brief lip service to caution to push conclusions incongruous with the scope of their research. The lack of candid discussion of the work’s substantial limitations represents a failure of critical insight.
Surveys have found that Americans’ trust in media has hit all-time lows,(4) and journalists are among the least trusted of professions. Over two-thirds of US adults report seeing bias in their own favored news sources.(5) Most Americans report seeing either “a great deal” or “a fair amount” of bias in political news coverage.(6) These findings are consistent across large-scale polls administered by reputable nonpartisan polling agencies, including Pew, Gallup, Axios, Edelman, and others.
Just as the authors have documented that most Americans’ perceptions of political partisanship in journalists are well-founded, and even understated, widespread public concerns about media selection bias appear unlikely to arise from mass delusion. Americans overwhelmingly regard skepticism towards news media as healthy.(7) Widespread public concerns are unlikely to be gratuitous, and warrant far more rigorous and thoughtful examination than the article reviewed here provides.
Media watchdog groups on both the political left and right have widely documented asymmetric coverage of various political stories by outlets with differing political leanings.
One nonpartisan assessment of gatekeeping bias based on actual media stories demonstrated “a strong Gatekeeper bias on the two sides of the political spectrum,” finding that “there were NO top political news stories that appeared on both sides of the spectrum” on the date examined.(8) According to a 2020 Pew Research Center Survey, among both Republicans and Democrats, more than eight in ten Americans report that they “often get different facts depending on which news sources they turn to.”(5)
These and similar data are simply ignored by the authors of the article here reviewed. Claims that the selection of political stories generally is unbiased by journalists’ political leanings are skeptically regarded by most Americans, and with good reason.
Lack of Relevance of Experimental Construct
The authors acknowledged that the study “design may not generalize to all potential news stories” and that correspondence experiments “do not capture all forms of potential bias.” These cautions, inexplicably ignored in the authors’ subsequent analysis and media claims, are surely understated. As the authors have acknowledged, new candidate interview requests or announcements represent a minute fraction of political news. There is no basis for extrapolating these very limited scenarios to political news generally.
To answer a research question, a proper experimental instrument must be sensitive and reliable in detecting the intended finding, here media gatekeeping bias. It should also be able to account for previously-observed findings. Yet there seems to be little reasonable basis, and certainly no proof, for the authors' assumption that their posed scenarios offer a sensitive and reliable tool for the detection of journalistic gatekeeping bias across political news stories generally. To the contrary, the authors’ narrow instrument has no capacity to detect even glaring manifestations of the types of gatekeeping bias documented by Zvelo and watchdog groups.
The authors pose an artificial construct which is essentially a straw man. It is remote from the specific concerns documented in sociological surveys of the American public. It does not even engage public allegations of those who claim to have witnessed or experienced gatekeeping bias, including allegations cited in the article. They fail to engage the substantive issues and then construe their unsurprising lack of positive findings as disproof of allegations of other, unrelated manifestations of gatekeeping bias.
One research scenario presented journalists with an email request to interview a new, fictitious, unknown candidate. The other involved a Boolean survey choice between attending the candidacy announcement event of one of two fictitious candidates. In a brief review of publicized allegations of media gatekeeping bias, I have not been able to identify widespread complaints by new candidates allegedly being declined for media interviews because of their political leanings.
Interviewing various candidates is a routine part of a political journalist’s job. The response to such benign hypotheticals does not inform how journalists may act when more is at stake for their favored agenda. Rigorously defining the problem is a key starting point of research. It is not clear why the authors did not start by surveying allegations of gatekeeping bias and then designing a research instrument to evaluate similar manifestations.
Lack of Ethical Dilemmas
Proper examination of the study question must go to the roots of bias, which psychologists have documented may manifest subtly. A journalist may consciously work to avoid “gatekeeping bias” by offering similar initial coverage opportunities to candidates by generic political affiliation, whereas the ethics may break down under subtler circumstances. A vein of psychology research has argued that when substantial bias is present, external manifestations are likely to come through unwittingly, notwithstanding what may at times be disciplined and well-intended efforts to suppress them. The article’s research scenarios present no real test of a journalist’s ethics. Conditions are very different when a journalist faces the prospect of covering stories that may materially affect the prospects of his or her favored candidate, especially in a tight race when the “balance of power” may be on the line.
Multidisciplinary research on human behavior has found that humans are most ethical when they know they are being watched and are likely to be held accountable. Many ethical breaches represent “crimes of opportunity.” Bias and unethical conduct often manifest under subtler conditions at the margins. Individuals who may demonstrate exemplary conduct under the bright lights may be tempted to deviate from their ethical obligations when the benefits appear high and the risk of being held accountable is low. This is clearly not the case for the scenarios examined.
The research evaluated journalistic coverage at the time of candidate announcements. At this stage before intra-party primaries, new candidates’ competitors are from their own party. Political journalists are presumably closely familiar with major-party front-runners in their area. The announcement of a “dark horse” candidate from either major party may evoke curiosity but may not be taken seriously. Most political races in the United States are not competitive, being held in districts or states in which candidates of one party are almost always elected and in which candidates of the other party have little prospect. In all of these cases, it is unclear what incentive even highly partisan journalists would have to “put their finger on the scale” with gatekeeping bias. Strategically-minded partisans would seem likely to focus biased interventions in areas where they would potentially make more of a difference.
More fruitful evaluation might examine questions such as the following. Are there differences in “gatekeeping bias” for different types of political news stories? If substantiated allegations regarding misconduct of one candidate come to light that could tip public opinion in a narrow election, is the journalist equally likely to cover allegations against a favored candidate compared to an unfavored candidate? If covered, does the journalist or editor provide equal time and emphasis to allegations against a favored candidate compared to an unfavored candidate? How does coverage of unsubstantiated allegations compare? How does coverage compare for potential stories which journalists pursue at their own initiative, in contrast to those pushed by an ostensibly influential third party (in the article, ostensible campaign staff) who may demand accountability? Is there a difference in gatekeeping among various types of stories regarding candidates with whom the journalist is already familiar and feels intense dislike, compared to candidates the journalist knows and favors? Do manifestations of “gatekeeping bias” increase as a campaign progresses and elections loom?
These and other salient questions are ignored entirely by the authors. Blurred lines between news and opinion hosts extend these queries further. Do networks tend to “stack the deck” with commentators, ostensible experts, and even town hall participants who reflect the network’s favored perspectives, while under-representing perspectives of large segments of Americans not favored by the network? Do opinion editors demonstrate gatekeeping bias in being more likely to invite guests or print letters and editorials that promote viewpoints favored by the editor or the organization, compared to those promoting unfavored viewpoints?
The Nature of Coverage
Media gatekeeping consists of far more than a decision whether to interview a candidate or send a reporter to a candidacy announcement. It also includes decisions such as whether to run a story promptly or belatedly, whether to feature it on the front page or hidden away (or at the top of the prime-time hour versus a brief mention at another time), and how much space or time to devote to it. The American public perceives “gatekeeping bias” in the decision of a network to repeat one story in 24/7 headlines, whereas another network may not cover the story at all, may cover it only belatedly, or may offer brief mention of allegations only to discredit them. Such discrepancies would constitute bona fide manifestations of potential bias. The article offers no examination of such differences, assuming them not to exist by fiat.
The study represents “gatekeeping bias” as potentially existing only at the level of individual journalists and editors. This assumption ignores the impact of institutional and administrative priorities, agendas, and directives which shape the news-generation process.
The Pew Research Center reported in 2020 that of the 79% of Americans who report that media bias favors one side, 83% "blame unfair news coverage on media outlets,” whereas only 16% blame the journalists.(9) While the prominent role of institutional directives in concerns of media bias is obvious to a supermajority of Americans across the political spectrum, this basic issue appears to have entirely eluded the three political science Ph.D. authors of the Science Advances study.
During the 2020 election cycle, numerous news media spokespersons or administrators noted (or acknowledged subsequently) institutional directives not to cover certain topics deemed unfavorable to their preferred candidate. Other administrative and editorial directives, some disclosed and many undisclosed, have been acknowledged even by editors and reporters themselves to play a substantial role in deciding which news a media organization chooses to cover. Well-documented resignations and firings at prominent media organizations, such as Bari Weiss’ departure from the New York Times to name one of many, have prominently cited concerns of gatekeeping bias.
Data derived from observation of conduct consistently reflect realities better than hypothetical surveys. Survey results are subject to response bias, sometimes called survey bias or social desirability bias. Across a range of disciplines, it has been well established that survey responses regarding behaviors from church attendance to alcohol consumption to charitable giving, to name a few, often diverge markedly from observed behaviors. This is especially the case with matters of the affective domain, including ethics and bias.
No attempt was made to evaluate how well the journalists’ stated survey responses correlate with actual conduct. A perfect correlation was simply assumed. Journalists responding to the conjoint email survey knew that it was hypothetical. These concerns are secondary in comparison to questions of relevance of the survey instrument itself.
The Fallacy of Proving a Negative
The analysis invokes the logical fallacy of proving a negative. Demonstrating that bias does not exist requires a higher bar than demonstrating that it does, as the failure to observe bias in one scenario does not prove that bias does not or cannot occur in different scenarios. It would be absurd to claim, for instance, that racial bias in the American workplace does not exist because a study of job-seekers of different ethnic backgrounds found that they were offered job interviews at similar rates.
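The distinction matters statistically as well as logically: a non-significant difference is not affirmative evidence of equivalence. Formally ruling out “even very slight biases” would require something like a two one-sided tests (TOST) equivalence procedure against a pre-specified margin. A minimal sketch using hypothetical counts (not the study’s data):

```python
import math

def tost_two_proportions(s_a, n_a, s_b, n_b, margin):
    """Two One-Sided Tests (TOST) for equivalence of two response rates.
    Returns the TOST p-value: a small value supports the claim that the
    true difference lies within +/- margin."""
    p_a, p_b = s_a / n_a, s_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    # Test H0: diff <= -margin (reject if diff is clearly above -margin).
    p_lower = 0.5 * math.erfc(((diff + margin) / se) / math.sqrt(2))
    # Test H0: diff >= +margin (reject if diff is clearly below +margin).
    p_upper = 0.5 * math.erfc((-(diff - margin) / se) / math.sqrt(2))
    # Equivalence is claimed only if BOTH one-sided tests reject.
    return max(p_lower, p_upper)

# Hypothetical: 150/700 vs 140/700 positive replies. Can a bias as small
# as 2 percentage points be ruled out with this sample size?
p = tost_two_proportions(150, 700, 140, 700, margin=0.02)
print(f"TOST p = {p:.3f}")  # well above 0.05: a 2-point bias is NOT ruled out
```

With these illustrative numbers, a non-significant ordinary test coexists with a failed equivalence test: the data cannot distinguish “no bias” from a small bias. Only a much wider margin would pass, which underscores that what counts as a “very slight” bias must be specified in advance rather than asserted after the fact.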
These concerns are amplified as the chosen scenarios are not closely representative of those in which bias has been widely alleged or perceived. The exceptional claim that no meaningful media gatekeeping bias exists on the basis of narrow research scenarios largely unrelated to documented concerns is a non sequitur.
The authors’ assertions of definitive proof and of conclusively ruling out even small gatekeeping biases, not only for the new candidate interview and announcement requests examined but in political news generation generally, are sufficiently displaced from the scope of their experimental research as to be discrediting. With so many documented cases in which few or even no major political stories are shared across networks of different political leaning (8), the authors’ claims are problematic at best, and gaslighting at worst.
Audience and Direction
Science Advances is a non-specialty (“multidisciplinary”) open-access online journal which reportedly charges authors a $4,500 fee for publishing open-access articles under the gold model.(10) Science Advances breaks with long-standing scholarly protocol by listing the results first and appending the methodology to the end of the article, after the authors’ discussion. This is a hallmark of work directed not toward other scholars, but to a public audience, who may be inclined to focus on the article’s title and conclusions with little scrutiny of methodology and analytical logic.
In this reviewer’s field, publications in non-specialty journals are often problematic due to the lower rigor of the peer review process and a lack of specialty knowledge by journal editors. In one well-publicized case some years ago, an article pitched to national news was widely panned by specialists in the field due to methodological and interpretive difficulties.
The authors’ media blitz included a Washington Post article dated April 10, 2020. As it takes time to develop contacts and prepare such essays, these publicity initiatives must have been in process well before the scholarly article was published. While some press coverage of substantive research breakthroughs can be warranted, the pitching of research “findings” principally to the press, the public, and political groups rather than scholarly peers is widely regarded as a “red flag.”
In this case, the authors’ conclusions do not correspond to the scope of their research. Experimental findings demonstrated the absence of gatekeeping bias only for the narrowly-examined scenarios of new candidate interview requests and candidacy announcement events, not in political news stories generally.
It seems unlikely that any impartial scholar evaluating the authors’ data would arrive at the same conclusions. Even allowing that Science Advances is a non-specialty journal which has rarely published political science research, it is unclear how such gross misstatements of the research findings passed through the peer review process. The lack of connection between the titular claims and the experimental findings can be identified by any careful reader trained in basic logic.
Caveats regarding the lack of generalizability of the experimental design to all forms of gatekeeping bias are acknowledged by the authors themselves on several occasions in the body text, and then inexplicably ignored in their analysis and conclusions. The authors’ apparent inability to identify obvious difficulties with the relevance of their model, and a failure to engage other data on media gatekeeping bias, raise further concerns. One wonders why the authors were so eager to push expansive conclusions not supported by their research findings.
Activists or Scholars?
Late in the process of composing this review, the reviewer came across an essay by the authors with blunt political commentary summarizing their research on a progressive political website.(11) The essay, dated April 7, 2020, was posted less than a week after the Science Advances publication date of April 1, 2020. The authors’ research summary on a progressive website includes overtly-charged political comments which tip their hands further, leaving no doubt regarding their preferred interpretations. Public claims of media bias are construed as baseless allegations of political enemies, instead of broad-based concerns of an overwhelming majority of Americans as documented in multiple sociological surveys.
This partisan mindset may reflect why the authors apparently felt impelled to overstate their research findings to discredit claims of media bias. Such formulations are based in opinion rather than evidence, and in the case of the progressive website, are explicitly mobilized to push a political narrative. Such conduct makes it difficult to regard the authors as dispassionate scholars following the data.
Although it is likely that the essay’s URL on dataforprogress.org was not chosen by the authors, the URL “no-liberal-bias-in-media” overstates the research’s scope yet further. There is no indication that the authors objected to this framing, and their own writings indicate that the message they wish to convey publicly is far broader than the scope of their experimental research.
Summary of Concerns
The article offers useful and effort-intensive research regarding specific scenarios when read in context. Yet the scenarios examined are remote from stated public concerns and of unclear relevance to the far broader category of political news stories. Other potential sources of media bias are neither examined nor engaged.
The authors disregard their own cautions, presenting stated conclusions untethered from evidentiary findings with grand extrapolation. Their failure to identify or engage basic problems with their analysis, and repeated claims of rigor and generalizability on the basis of data which are clearly inadequate, leave the reader with more questions than answers.
As non sequitur claims not supported by the evidence, the article’s title and similar claims in its abstract and body are false by basic fact-checking standards. While one appreciates the authors’ considerable effort and exuberance for their own work, their sweeping conclusions are unsupported by the study's narrow methodology. Readers should focus on the actual experimental methodology and results, while placing little stock in the authors’ expansive interpretation.
1. Hassell, Hans J.G., John B. Holbein, and Matthew R. Miles. “There is no liberal media bias in which news stories political journalists choose to cover.” Science Advances 6/14 (1 April 2020).
2. Hassell, Hans J.G., John B. Holbein, and Matthew R. Miles. Supplemental materials for “There is no liberal media bias in which news stories political journalists choose to cover.” Science Advances 6/14 (1 April 2020). https://advances.sciencemag.org/content/suppl/2020/03/30/6.14.eaay9344.DC1/aay9344_SM.pdf
3. Hassell, Hans, John Holbein and Matthew Miles. "Journalists may be liberal, but this doesn’t affect which candidates they choose to cover." Washington Post, 10 April 2020.
4. Salmon, Felix. “Media trust hits new low.” Axios, 21 January 2021.
5. Shearer, Elisa. “Two-thirds of U.S. adults say they’ve seen their own news sources report facts meant to favor one side.” Pew Research Center, 2 November 2020.
6. Brenan, Megan, and Helen Stubbs. “News Media Viewed as Biased but Crucial to Democracy.” Gallup, 4 August 2020.
7. Gottfried, Jeffrey, Mason Walker and Amy Mitchell. "Americans See Skepticism of News Media as Healthy, Say Public Trust in the Institution Can Improve." Pew Research Center, 31 August 2020.
8. “Gatekeeper Bias and the Impact on News Content.” Zvelo.com, 10 October 2018.
9. Walker, Mason, and Jeffrey Gottfried. "Americans blame unfair news coverage on media outlets, not the journalists who work for them." Pew Research Center, 28 October 2020.
10. Brainard, Jeffrey. “Science journals to offer select authors open-access publishing for free.”
11. Hassell, Hans, John Holbein, and Matthew Miles. “There’s No Liberal Bias in What the Media Chooses to Cover.” Data for Progress, 7 April 2020.