Claim-Check Literature Review

#1
Nyhan, Brendan and Jason Reifler. 2014. "The Effect of Fact-checking on Elites: A field experiment on U.S. state legislators". American Journal of Political Science.

Abstract

Research suggests fact-checking may be ineffective at reducing public misperceptions about controversial issues.
H1: fact checking might instead help improve political discourse by increasing reputational costs or risks of spreading misinformation for political elites.

H1: fact checking ⬆ -> reputation costs ⬆ -> political discourse quality ⬆

The effect of fact-checking on politicians

[...] major news organizations [...] frequently refrain from questioning the accuracy of contested claims made by public figures even when the statements are verifiable
[fact checking] represents a potentially radical change in how journalism is practiced with significant consequences for political accountability and democratic discourse [... it] has the potential to create career risks for politicians by generating negative coverage [...] that could damage their reputation and credibility.
Experimental design

- IV: assignment of the treatment condition, i.e. assignment of a letter emphasizing the risks of having misleading or inaccurate statements exposed by fact-checkers.
- Placebo-IV: assignment of the Hawthorne (placebo) condition, i.e. assignment of a letter alerting legislators that a study was being conducted of the accuracy of statements made by politicians (excluding language about fact-checking or consequences of inaccurate statements).
- Control group: no treatment
- DVs:
1) negative rating from PolitiFact
2) media coverage questioning statement accuracy of legislators
3) combination of 1) & 2)
- Control variables: state, political party, legislative chamber, whether or not a legislator had previously received a PolitiFact rating, previous vote share, and fundraising.

Due to the complexities of language and politics, no perfectly objective measure of statement accuracy has yet been created.
(! - that's us)

One potential concern is whether PolitiFact truth ratings are consistent and accurate.
(! - that's what we want to achieve)

It would be worth asking for the Supporting Materials of this article; there are probably many useful things there, among others the details of the procedures followed to ensure intercoder reliability in their experiment using Krippendorff's alpha. nyhan@dartmouth.edu / jreifler@exeter.ac.uk

Results

State legislators who were sent letters about the threat posed by fact-checkers (IV) were less likely to have their claims questioned as misleading or inaccurate (DV) during the fall campaign.

- Treatment effect for DV1: not significant at the p < .05 level (one-tailed), but in the expected direction.
- Treatment effect for DV2: significant at the p < .05 level.
- Treatment effect for DV3: highly significant at the p < .02 level.

Literature worth checking

- Amazeen, Michelle. 2012. "Blind spots: Examining political advertising misinformation and how the U.S. news media hold political actors accountable." Ph.D. dissertation, Temple University.
- Nyhan, Brendan. 2012. "Another fact checking fiasco: Journalistic failure in coverage of Harry Reid and his mysterious source." Columbia Journalism Review. United States Project.
- Nyhan, Brendan. 2013. "That's not a factcheck! How punditry undermines the mission of journalistic watchdogs." Columbia Journalism Review. United States Project.
- Those are the highlights, but there are more valuable topics in the References.
 

Jack Harich

#2
My, my, what a fascinating read. Very productive paper summary in terms of educating us on this research and its context, plus gaps that we can possibly fill. Filling them becomes easier with academics, because THEY have pointed out the gaps that need filling.

This will make for a wonderful discussion topic in our meeting. I had questions about some terms: the up arrows, IV, and DV.

Due to the complexities of language and politics, no perfectly objective measure of statement accuracy has yet been created.
I'm surprised they inserted the word "perfectly" here. That's a pretty high quality bar! :)
 
#3
Nyhan, Brendan. 2012. "Another Factchecking Fiasco: Journalistic failure in coverage of Harry Reid and his mysterious source". Columbia Journalism Review. United States Project

Politicians exploiting "he said", "she said" reporting

For many political reporters, journalism is largely a matter of writing down what powerful people say and do and analyzing why they say or do those things. Within that framework, the accuracy of the claims that powerful people make is rarely the focus of coverage.
The "he said", "she said" reporting style is a false objectivity. Using Structured Argument Analysis allows the reporter to move towards real objectivity by uncovering the scientific truth

the political costs of irresponsible accusations are low when “there are reporters willing to write up” [...] speculative claim and “ensure its dissemination to a wider audience” in a neutrally framed article
Using semantics to undermine factchecking

PolitiFact’s rating system doesn’t work well for irresponsible and unsubstantiated claims that can’t be definitively falsified [...]. The site is right to hold public figures [...] accountable for making [irresponsible and unsubstantiated] claims, but the standard to which they are held does not easily map onto a scale of truth and falsehood.
This is not a problem for Structured Argument Analysis, since it takes into account different types of fallacies.

A better approach would consider whether claims can be supported and whether they are consistent with the best available evidence without assigning labels to them
This is what was done at the Spinsanity project (now dead). They were doing what we're trying to do, but without the tool! This is super interesting, and the perfect preamble to present the Structured Argument Analysis tool to the author.
 
#4
Nyhan, Brendan. 2013. "That's not a factcheck! How punditry undermines the mission of journalistic watchdogs". Columbia Journalism Review. United States Project.

- Fact-checkers often make the mistake of making arguments that are semantic, not factual, and based in their own ideology. That is actually more opinion than fact-checking, but there's no clear line between those two things. Right now fact-checking is still a subjective enterprise; SAA is the tool that would ensure objectivity in the matter.

it is therefore essential that factcheckers [...] only invoke the authority of facts when assessing claims that can be resolved on evidentiary grounds, rather than straying into subjective judgments about the political process or semantic debates over terminology.
This greatly limits what fact-checking is able to do, because even correct facts can be used deceptively.

The idea that Obama could resolve the impasse if only he tried harder is a common fallacy of centrist commentary in Washington, but it’s especially perverse when offered as a factcheck. The problem, once again, is a lack of discipline in selecting targets for factchecks and a tendency to make subjective pronouncements about language and process.
This is the example mentioned above.
In the fact-checking world not everything can be analyzed, and when the wrong target is chosen, the fact check itself can end up being yet another fallacy!

[by] acting as if journalistic methods can resolve the argument, the factcheckers weaken the morally freighted language that’s designed to give their work power.
This is the fact-checkers falling into the exact same trap as their targets, because of a lack of good tools. Journalistic methods alone won't cut it.
 
#5
Amazeen, Michelle. 2015. "Checking the fact-checkers in 2008: predicting political ad scrutiny and assessing consistency". Journal of Political Marketing.

Abstract

a high level of agreement between the fact-checkers indicates their success at selecting political claims that can be consistently evaluated [...] what may be more critical in drawing evaluations from fact-checkers is the verifiability of a claim.
Introduction

RQ1: what are the types of political ads that are most likely to draw scrutiny [from fact-checkers]? (H1, H2a, & H2b)
RQ2: what is the consistency of fact-checkers in evaluating ads (from the 2008 presidential election)? (H3)

Results1: it was the attack ads from candidates (rather than promotional ads or ads from independent interest groups) that were most likely to draw scrutiny from the fact-checkers (in 2008).
Results2: the study found a high level of agreement among the fact-checkers, indicating their success at selecting political claims that can be consistently evaluated.

The empirical section of the article describes the first known content analysis to date establishing the level of agreement among the three leading political fact-checking organizations.
In order to show that our results assess claims correctly, we may need to do a similar type of analysis comparing our results w/ those of fact-checked articles.
For this we will have to be extremely careful and have a strong a priori theory discussing what the expected results are. That way, if our results point in entirely different directions, we will be able to explain whether that is a sign that we are wrong, or rather a sign that we are conducting a stricter analysis than the fact-checkers did, & ∴ have different results. To do this we may consider working together w/ a fact-checker.


Arbitrating Political Facts

The enterprise of fact-checking was born out of concern that traditional journalism was no longer able or willing to hold political actors accountable for the veracity of their claims.
Fact-checking has the right intention, but the wrong approach. The efforts from the fact-checking community won't be enough to raise truth literacy.

The viability of fact-checking depends upon it being generally accepted as unbiased. The challenge of this enterprise [...] is [...] rendering judgment as to whether a claim is factually true. [...] Fact-checkers must carefully negotiate which claims to check and how selected claims should be evaluated.

Using SAA, "carefully negotiating which claims to check" would no longer be a problem, ∵ there would be a tool to do it in an unbiased and transparent way every time.

Considerations of newsworthiness, fairness, practicality and scientific validity guide all judgments the fact-checkers make in finding facts to check.
Again, all of this would be avoided when using SAA.

H1: attack ads -> scrutiny from fact-checkers ⬆ ↔ promotional ads -> scrutiny from fact-checkers ⬇
H2a: out-party candidate -> (evidence based) attack ads ⬆ -> scrutiny from fact-checkers
H2b: ads from independent expenditure groups -> scrutiny from fact-checkers ↔ ads from candidates -> scrutiny from fact checkers

H3: importance of source credibility ⬆ -> interest of non-partisan fact-checkers in providing independent news ⬆ -> level of agreement on the accuracy of factual claims among elite fact-checkers

Data and Method

Claim:
any statement made in the ad regardless of whether its facticity could be established
Here, claims were coded using a scheme modified from Geer's (2006) study. It would be worth requesting copies of the coding instructions and instrument. mamazeen@bu.edu

The coding instrument (i.e. their protocol) was tested by 2 coders, and the inter-coder reliability was measured using Krippendorff's alpha.
The same measurement was calculated to assess the inter-coder reliability among the three fact-checkers.
 
#6
Results

1) Likelihood of Scrutiny:
The mere presence of evidence had little bearing on drawing an evaluation because most ads had some minimal amount of evidence
H1: attack ads are ss (statistically significantly) more fact-checked than contrast & promotional ads

H2a: out-party (Obama) ads included ss more supporting evidence than ads from the incumbent
H2b: interest group ads were ss less factchecked than candidate ads


2) Fact-checker Consistency:
H3: Elite fact-checkers are able to consistently agree on the accuracy of factual claims w/ a modest Krippendorff's α, maybe because of the exclusive use of binary variables.

3) Predicting Evaluations: to establish which ad attributes increase the odds that an ad will draw scrutiny from fact-checkers, a binomial logistic regression model was used, with the following variables: (nominal) ad sponsor, ad tone, presence or absence of supporting evidence, (scaled) number of claims, number of times an ad aired, and number of days the ad was released prior to Election Day. (A sketch of this kind of model is given after these results.)

- Using this model,
contrary to H2a, it was McCain ads rather than Obama ads that had greater odds of being evaluated
i.e. the incumbent-party candidate had greater chances of being evaluated.

- Also:
neither the number of claims in an ad nor the presence of any sort of supporting evidence were found as contributing factors in predicting whether an ad drew an evaluation from fact-checkers.
This is actually important. Maybe what matters more is whether the information presented seems likely to be wrong, which could more often be the case when the content is about controversial topics. Why would fact-checkers want to spend their time evaluating claims that nobody is questioning? To build the Politician's Truth Ratings though, a representative sample of all claims made by the politician would have to be assessed.
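
For concreteness, here is a minimal sketch of the kind of binomial logistic regression described in 3) above. It is only an illustration under assumed data, not the author's code: the data file and the column names (evaluated, sponsor, tone, has_evidence, n_claims, times_aired, days_before_election) are hypothetical placeholders.

```python
# Minimal sketch only; NOT the author's code. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

ads = pd.read_csv("ads_2008.csv")  # one row per ad (hypothetical file)

# 'evaluated' = 1 if the ad drew a fact-check evaluation, 0 otherwise.
model = smf.logit(
    "evaluated ~ C(sponsor) + C(tone) + has_evidence"
    " + n_claims + times_aired + days_before_election",
    data=ads,
).fit()

print(model.summary())       # coefficients are log-odds
print(np.exp(model.params))  # odds ratios per predictor
```

An odds ratio above 1 for a predictor would mean that attribute raises the odds of an ad being evaluated, which is the kind of result the paper reports (e.g. greater odds for McCain ads).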

Discussion

If not one, not two, but all three independent fact-checkers conclude that a particular political claim is misleading, Jane Q. Citizen will likely feel more confident that she is not getting a straight story from the politician and therefore needs more information.
Citizens will feel even more confident if they are able to see the analytical process right in front of their eyes.

fact-checking journalism may not only create more trust in media, but a better informed electorate as well.
All of this is true, but it would be necessary to encourage more widespread consumption of this type of information.

When a framework of fact-checking becomes established, it should become more difficult for a politician to fabricate claims. [...] With improved political behavior, more qualified candidates who previously may not have considered running may enter the political process. This would be a win for democracy. But this is only possible with consistent fact-checking.
Even more so if a framework of PTRs becomes established.

Limitations and Future Research


[...] nearly half of the newspaper citations used in political ads from 2008 were based upon opinions rather than reporting. [...] Other mis-uses of evidence have included cherry-picking portions of news articles that favor a position while ignoring inconvenient aspects from the news source, mis-appropriating the source of information contained in a news article, or inventing headlines that did not appear with an original news story.
Those are only some of the fallacies that SAA ideally will take into account.

fact-checkers do not evaluate the accuracy of every statement in an ad, one cannot calculate "who lies more" or the overall inaccuracy rate of political advertising.
PTR can.


More research is needed to determine the type of fact-checking format that most effectively informs citizens.
That'd be us.


Literature worth checking
- Geer, John G. 2006. "In Defense of Negativity: Attack Ads in Presidential Campaigns". The University of Chicago Press. (!)
- Graves, Lucas. 2013. "Deciding What's True: Fact-Checking Journalism and the New Ecology of News". Ph. D. dissertation, Columbia University. (!!)
- Hayes, Andrew F. and Klaus Krippendorff. 2007. "Answering the Call for a Standard Reliability Measure for Coding Data". Communication Methods and Measures 1: 77-89. (!!!)
- Lombard, Matthew, Jennifer Snyder-Duch, and Cheryl Bracken. 2002. "Content Analysis in Mass Communication: Assessment and Reporting of Intercoder Reliability". Human Communication Research 28(4): 587-604. (!!)
- Nyhan, Brendan and Jason Reifler. 2010. "When Corrections Fail: The Persistence of Political Misperceptions". Political Behavior 32: 303 - 330. (actually 2007) (!)
- Roberts, Chris. 2013. "A Functional Analysis Comparison of Web-only Advertisements and Traditional Television Advertisements from the 2004 and 2008 Presidential Campaigns". Journalism & Mass Communication Quarterly 90(1): 23-38. (!)
- Thorson, Emily. 2013. "The Consequences of Misinformation and Fact-Checking for Citizens, Politicians, and the Media." Paper presented at the annual meeting of the Midwest Political Science Association, Chicago, Illinois, April 11-13. (!)
 
#7
Hayes, Andrew F. and Klaus Krippendorff. 2007. "Answering the Call for a Standard Reliability Measure for Coding Data". Communication Methods and Measures 1: 77-89.

Krippendorff's alpha is general in that it can be used regardless of the number of observers, levels of measurement, sample sizes, and presence or absence of missing data.
This would allow us to use the same measurement for all cycles.

[In content analysis, generating]
data may take the form of judgements of kind (in which category does this unit belong?), magnitude (how prominent is an attribute within a unit?), or frequency (how often something occurs).
- kind: is it a claim? a rule? a reusable claim? a fact? an intermediate conclusion? other?
- magnitude: weight of facts and intermediate conclusions.
- magnitude: CL


Among the kinds of reliability [...] reproducibility [...] amounts to evaluating whether a coding instrument, serving as common instructions to different observers of the same set of phenomena, yields the same data within a tolerable margin of error.
- coding instrument: protocol
- observers: claim checkers
- set of phenomena: claims
- outcome data: CL


The more observers agree on the data they generate, and the larger the sample of units they describe, the more comfortable we can be that their data are exchangeable with data provided by other sets of observers.
Criteria for a good measure of reliability
- The units of analysis whose properties are to be recorded or described must be independent of each other.
- The data-generating process (informed by instructions that are common to all observers who identify, categorize, or describe the units) must be repeated by different observers working independently of each other.
- The set of units used in the reliability data should be a random sample (or at least approximate one) from the universe of data whose reliability is in question.
- The observers should be common enough to be found elsewhere.

In its two-observer ordinal data version, α is identical to Spearman's rank correlation coefficient ρ (rho; without ties in ranks). In its two-observer interval data version, α equals Pearson et al.'s (1901) intraclass-correlation coefficient.
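
Since we will have to run this same kind of reliability test on our own protocol, here is a minimal sketch of the nominal-data version of α, e.g. for the "kind" judgments mapped above. This is our own illustration, not code from the paper; the two coders and the claim-type labels are hypothetical.

```python
# Minimal sketch of Krippendorff's alpha for nominal data; our own illustration.
import numpy as np

def krippendorff_alpha_nominal(ratings):
    """ratings: one list per coder, one entry per unit; None = missing rating."""
    n_units = len(ratings[0])

    # Keep only units rated by at least two coders (pairable values).
    units = []
    for u in range(n_units):
        vals = [row[u] for row in ratings if row[u] is not None]
        if len(vals) >= 2:
            units.append(vals)

    # Coincidence matrix o[c, k]: how often categories c and k are paired
    # within the same unit, weighted by 1/(m - 1) for a unit with m ratings.
    cats = sorted({v for vals in units for v in vals})
    idx = {c: i for i, c in enumerate(cats)}
    o = np.zeros((len(cats), len(cats)))
    for vals in units:
        m = len(vals)
        for i, c in enumerate(vals):
            for j, k in enumerate(vals):
                if i != j:
                    o[idx[c], idx[k]] += 1.0 / (m - 1)

    n_c = o.sum(axis=1)                            # pairable values per category
    n = n_c.sum()                                  # total pairable values
    d_obs = o.sum() - np.trace(o)                  # observed disagreement
    d_exp = (n ** 2 - (n_c ** 2).sum()) / (n - 1)  # expected disagreement
    return 1.0 - d_obs / d_exp

# Two hypothetical claim checkers classifying six statements by "kind".
coder_a = ["claim", "fact", "claim", "rule", "fact", "claim"]
coder_b = ["claim", "fact", "rule",  "rule", "fact", None]
print(krippendorff_alpha_nominal([coder_a, coder_b]))  # ≈ 0.727
```

For ordinal or interval judgments (weights, CLs) the difference function δ changes, but the coincidence-matrix logic stays the same, which is what makes α usable across all our cycles.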
 
#8
Graves, Lucas. 2016. "Anatomy of a Fact Check: Objective Practice and the Contested Epistemology of Fact Checking". Communication, Culture & Critique.

The most serious and sustained critique of this brand of journalism [FC] is that it discounts the value-laden nature of political discourse by trying to offer decisive factual conclusions about subjective questions of opinion or ideology. This [...] epistemological critique [...] draws an implicit line between journalism and other fact-centered disciplines
Epistemological critique of FC

[There is]
a basic parallel among the world of fact checkers, investigative journalists, and scientists: Each deals with controversies in which not just facts but rules for determining them are in question, and thus affords a view of the way material, social, and discursive context structure factual inquiry.
CC accounts for this

The epistemological critique of fact checking
If we want to understand how things come to be taken as true [...] we must examine the meaning-making contexts that help to define our shared reality.
The "meaning-making contexts" are the sum/network of arguments behind a claim.

reasonable people can reach different conclusions about a claim
(Adair & Drobnic Holan, 2013)


PolitiFact's editor explains in a New York Times op-ed about the 2016 presidential race:
Our ratings are [...] not intended to be statistically representative but to show trends over time [...] I sometimes feel the need to remind people that [the Truth-O-Meter] is not an actual scientific instrument.
(Drobnic Holan, 2015)


unlike an opinion about the best flavor of ice cream, factual arguments can be more or less reasonable, supported by evidence, in line with expert consensus, and so on.
Any useful view of the epistemology of fact checking should understand it as a practical truth-seeking endeavor not fundamentally different from other kinds of factual inquiry.
This is why a scientific tool is much needed.


Five elements of fact checking
  1. Choosing claims to check
  2. Contacting the speaker
  3. Tracing false claims
  4. Working with experts
  5. Showing your work
Diffusion and contestation
Distilling complex fact checks into a single data point [...] exposes the fact checkers to criticism.
Discussion
Contrary to the epistemological critique of fact checking, the challenges faced by these journalists are not different in kind from those faced by other fact seekers.
Points of comparison:
  1. [...] factual inquiry and discourse take place in, and are structured by social and linguistic contexts (paradigms)
  2. [...] the way that logical reasoning is embedded in not just social and discursive but material contexts.
  3. [...] the "experimenter's regress," "a paradox which arises for those who want to use replication as a test of the truth of scientific knowledge claims" (Collins, 1992)
Literature worth checking
- Adair, B. & Drobnic Holan. 2013. "The principles of PolitiFact, PundiFact and the Truth-O-Meter. PolitiFact.
- Drobnic Holan, A. 2015. "All Ppoliticians lie. Some lie more than others. The New York Times.
- Gaines, B.J., Kuklinski, J.H., Quirk, P.J., Peyton, B., & Verkuilen, J. 2007 "Same facts, different interpretations: Partisan motivation and opinion on Iraq. Journal of Politics.
-
Khun, T.S. 2012. "The structure of scientific revolutions". University of Chicago Press.
-
Kuklinski, J.H., Quirk, P.J., Schwieder, D.W. & Rich, R.F. 1998. "Just the facts, ma'am": Political facts and public opinion. Annals of the American Academy of Political and Social Science.
-
Latour, B. 1987. "Science in action: How to follow scientists and engineers through society". Harvard University Press.
-
Nyhan, B. & Reifler, J. 2010. When corrections fail: The persistence of political misperceptions. Political Behavior.
- Tuchman, G. 1978. "Making news: A study in the construction of reality". NY Free Press.
 