CIVIC INFRASTRUCTURE TO BRING TRANSPARENCY TO OPAQUE ALGORITHMS

PROJECTS

Junkipedia

Junkipedia is digital public infrastructure for civic listening. Understanding how problematic content such as misinformation, hate speech, or junk news impacts society requires shared tools to identify and archive that content. Junkipedia is a technology platform that enables manual and automated collection of data from across the spectrum of digital communication platforms, including open social media, fringe networks, and closed messaging apps. The system is powered by the active engagement of a large and diverse network of civic-minded stakeholders from the civil rights, journalism, and academic communities. Funding for Junkipedia will provide increased technical capacity to support more platforms and users, and will enable an expansion of efforts to recruit and train a wider range of contributors to the system.

CIVIC LISTENING CORPS

The Civic Listening Corps (CLC) is a training and engagement program designed to empower community-based organizations to tackle the challenges of misinformation directly by providing tools and strategies to activate their membership in understanding, identifying, and analyzing problematic content circulating within open and closed social media. The CLC program aims to achieve two complementary goals: build resilience to misinformation within each community and aggregate insights across communities. The program establishes a training curriculum, administered by expert misinformation researchers, that leads participants through the entire misinformation ecosystem. Members of the corps learn how to recognize the different forms of misinformation, critically analyze the techniques used to amplify messages, and find and identify new sources of problematic content. This training is paired with a regular cadence of assignments and virtual meetings so that participants have concrete deliverables and an opportunity to share learnings with other members of the corps.


Contrast Agent

Some social media content creators will pay money to shady businesses (often referred to as “like farms”) to increase their follower counts and the likes, comments, views, and shares on their posts. At first, this may seem innocuous. However, purchasing this inauthentic activity also dupes algorithms and unsuspecting users into believing the content and its creator are more popular and influential than they really are. Algorithms then promote those posts and creators, clogging social media feeds with artificially popular content that may spread disinformation, defraud unsuspecting users, or radicalize them.

About Us

ATI seeks to improve the transparency of algorithms that shape society by providing tools and data that anyone can use to hold the powerful accountable.

Opaque algorithms created by tech platforms underpin virtually every experience we have online. At its most basic level, an algorithm is a set of instructions. How Facebook decides what appears in our News Feeds, what YouTube thinks we’ll want to watch next, and what products Amazon thinks we’re most likely to buy are all determined by algorithms that use vast troves of our personal data to decide what to show us online. This impacts every facet of our lives: who we interact with, what we read, what we buy, where we live, where we work, and even what we believe. No one outside of those businesses knows exactly how these algorithms work. The Algorithmic Transparency Institute wants to change that.

Our goal is to research, understand, and shed light on these algorithms in order to combat the spread of misinformation, fraud, discrimination, and radicalization.

The Algorithmic Transparency Institute, a project of the National Conference on Citizenship, is dedicated to helping all internet users understand the forces that shape what we see online and how that impacts our lives. We develop tools, methodologies, data sets, and investigations that support research and journalism about digital platforms. We are committed to the highest ethical standards of research, data collection, and user privacy. We are equipping everyone to contribute to this effort through tools to help shed light on these powerful technologies.

Our Team


Cameron Hickey

Cameron Hickey is the Project Director for Algorithmic Transparency at the National Conference on Citizenship. He leads an effort to develop methodologies and tools for collecting and analyzing data to increase transparency about how large digital platforms impact society.

Hickey was formerly a research fellow at the Shorenstein Center for Media, Politics and Public Policy at Harvard’s Kennedy School. As a fellow, he investigated the spread of mis- and dis-information on social media through the development of tools to identify and analyze problematic content. Hickey helped lead the Shorenstein Center’s Information Disorder Lab which monitored disinformation during the 2018 U.S. midterm elections.


Previously, Hickey covered science and technology for the PBS NewsHour and NOVA with correspondent Miles O’Brien. Hickey has won a News and Documentary Emmy Award and a Newhouse Mirror Award for his journalism, was a Knight Foundation Prototype Grantee for his junk news monitoring tool NewsTracker, and won a 2019 Brown Institute Magic Grant to investigate inauthentic activity on social media. His work has appeared on the PBS NewsHour, NOVA, Bill Moyers, American Experience, WNET, and in The New York Times.


Kaitlyn Dowling

Kaitlyn Dowling is the Project Manager for Algorithmic Transparency at the National Conference on Citizenship. Kaitlyn’s background in digital communications includes serving as the Senior Editor in the Information Disorder Lab, housed in the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School. In this role, she managed a team of researchers investigating and reporting on political mis- and disinformation spreading on the social web leading up to the 2018 U.S. midterm elections. Kaitlyn has also helped a variety of organizations, including Harvard Law School, execute their communications strategies, and served as Content Director of Women Online, a boutique digital PR and marketing firm. Her clients have included Obama for America 2012, Priorities USA, HSBC, and the Bill and Melinda Gates Foundation, among others. Kaitlyn graduated summa cum laude with a B.A. in political science from the University of New Hampshire.

Karishma Shah

Karishma Shah is the Census Disinformation Project Coordinator at the Algorithmic Transparency Institute. She is also a Fellow at the Royal Society for the Encouragement of Arts, Manufactures and Commerce.

While pursuing an MPhil in International Relations and a Certification in Intermediate Arabic at the University of Oxford, her research focused on how tech companies were addressing problematic content such as election interference and terrorist content on their platforms, and how governments and international institutions were attempting to regulate the companies on these issues. Prior to Oxford, she received her B.A. from Harvard College, where she completed a joint concentration (the equivalent of a double major) in Government and the Comparative Study of Religion, with a minor in Economics and a Foreign Language Citation in Spanish.

She enjoys interacting with individuals from around the world and is passionate about data and technology, anti-corruption initiatives, and human rights.
