Viral misinformation was a fixture of civic discourse long before social media. The 2016 U.S. election raised the profile of this challenge, but in 2020 something new emerged: the rise of what the World Health Organization has termed an “infodemic.” The diversity and scale of the problem have expanded dramatically, and our efforts to understand and combat it have failed to keep pace. In the United States, this failure has been particularly acute in historically marginalized and underserved communities.
Our team at the Algorithmic Transparency Institute, a project of the National Conference on Citizenship, developed a pilot program to train ethnic media reporters to identify, surface, and analyze instances of problematic content impacting issues of key public significance in the United States in 2020: the U.S. census, the Covid-19 pandemic, and the U.S. presidential election.
We use the term “problematic content” to describe the range of messages we attempt to understand and address. This includes mis- and disinformation, hate speech, conspiracies, and messages that lack context or rely on flawed logic. We investigate both false and misleading content and content that may not be factually incorrect but can have ill effects nonetheless.
With this program we focused specifically on problematic content spreading in underserved, immigrant, and diaspora communities and in languages other than English, because the problematic messages spreading in these spaces have historically been harder to capture, and less researched and reported on, than English-language and mainstream messages. This type of content also frequently spreads on closed and encrypted communications platforms that are nearly impossible for anyone but the members of their respective communities to explore.
To accomplish this goal, we trained a team of ethnic media reporters to identify, monitor, verify, and report problematic content found in their communities and languages. This team monitored open and closed social media and developed lists of accounts, hashtags, and search terms so they could find and submit new content on a weekly basis to Junkipedia.org, our proprietary platform for collecting and analyzing problematic content.
Over the course of 2020, we worked with nine fellows representing Arabic, Black, Chinese, Filipino, Indian, Korean, and Latino communities. Collectively, this team monitored social media from March through December 2020 and reported 1,968 distinct issues to our system.
The work produced by this team of ethnic media reporting fellows contributed to broader efforts to address many forms of misinformation by news organizations, research groups, and critically, civil society organizations. In collaboration with Census Counts, our team identified census-related misinformation that informed get-out-the-count efforts by civil rights organizations. This collaboration also enabled the Census Bureau itself to escalate problematic narratives on its Rumor Control page and with social media platforms.
Our team also identified misinformation related to the pandemic and the 2020 U.S. election that enhanced mainstream media reporting, bolstered research programs, and informed outreach and training efforts to educate communities about how to recognize and respond to misinformation.
This pilot program yielded key insights about the nature of problematic content spreading in ethnic communities as well as the value and needs of the ethnic media reporting ecosystem. Our fellows learned a great deal about how to monitor and respond to problematic content, and they taught us key lessons about how critical it is to support professional development and engagement with this underserved sector of the media industry.
This report details the approach, timeline, challenges, impacts, and takeaways from the 2020 ATI Ethnic Media fellowship program. It concludes with a vision for expanding this type of training and engagement effort as an integrated component of a broader blueprint for civic listening to build long-lasting resiliency to misinformation.