BUILDING TOOLS AND COLLECTING DATA TO BRING TRANSPARENCY TO A WORLD RUN BY ALGORITHMS
Online advertising — including political ads — is lightly regulated. Advertisers target their ads at a specific audience who will see them online, but then the ads disappear. This makes online advertising difficult to track and study, and therefore nearly impossible to hold advertisers accountable. The ATI Browser Extension is a volunteer effort that empowers users like you to hold political campaigns, PACs, dark money groups, and businesses accountable. After you install the extension in your browser, it will make a copy of the public posts and ads you come across on social media (you control exactly what and how much you share), along with information about why an ad was targeted at you. This data will help researchers and policymakers better understand how advertisers target their audiences and what strategies they may be using to persuade you.
Junkipedia is a platform for researchers and civil society organizations to collect and analyze disinformation. Organizations can create a centralized database where their staffers, members, and volunteers can submit disinformation they find spreading online or in the physical world. This allows organizations to track, analyze, and respond to the threats most relevant to their work. As of early 2020, a beta version of Junkipedia is focused on the U.S. census.
The spread of disinformation online is much bigger than so-called “fake news.” For-profit clickbait and hyperpartisan propaganda crafted by domestic and foreign ideologues sow discord and undermine the credibility of mainstream news. Broadly, we call this problematic content “junk news.” NewsTracker is a system that automates the detection of new suspect junk news sources using network analysis and continually tracks identified junk news sources across multiple social networks.
Some social media content creators will pay money to shady businesses (often referred to as “like farms”) to increase their follower counts and the likes, comments, views, and shares on their posts. At first, this may seem innocuous. However, purchasing this inauthentic activity also dupes algorithms and unsuspecting users into believing the content and its creator are more popular and influential than they really are. Algorithms then promote those posts and content creators, clogging social media feeds with artificially popular posts that may spread disinformation, defraud unsuspecting users, or radicalize them.
ATI seeks to improve the transparency of algorithms that shape society by providing tools and data that can be leveraged by all to hold the powerful accountable.
Opaque algorithms created by tech platforms underpin virtually every experience we have online. At its most basic level, an algorithm is a set of instructions. What Facebook shows in our News Feeds, what YouTube thinks we’ll want to watch next, and which products Amazon thinks we’re most likely to buy are all determined by algorithms that use vast troves of our personal data to decide what to show us online. This impacts every facet of our lives: who we interact with, what we read, what we buy, where we live, where we work, and even what we believe. No one outside of those businesses knows exactly how these algorithms work. The Algorithmic Transparency Institute wants to change that.
Our goal is to research, understand, and shed light on these algorithms in order to combat the spread of misinformation, fraud, discrimination, and radicalization.
The Algorithmic Transparency Institute, a project of the National Conference on Citizenship, is dedicated to helping all internet users understand the forces that shape what we see online and how that impacts our lives. We develop tools, methodologies, data sets, and investigations that support research and journalism about digital platforms. We are committed to the highest ethical standards of research, data collection, and user privacy. And we are equipping everyone to contribute to this effort through tools that shed light on these powerful technologies.
Cameron Hickey is the Project Director for Algorithmic Transparency at the National Conference on Citizenship. He leads an effort to develop methodologies and tools for collecting and analyzing data to increase transparency about how large digital platforms impact society.
Hickey was formerly a research fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard’s Kennedy School. As a fellow, he investigated the spread of mis- and disinformation on social media through the development of tools to identify and analyze problematic content. Hickey helped lead the Shorenstein Center’s Information Disorder Lab, which monitored disinformation during the 2018 U.S. midterm elections.
Previously, Hickey covered science and technology for the PBS NewsHour and NOVA with correspondent Miles O’Brien. He has won a News and Documentary Emmy Award and a Newhouse Mirror Award for his journalism, was a Knight Foundation Prototype Grantee for his junk news monitoring tool NewsTracker, and won a 2019 Brown Institute Magic Grant to investigate inauthentic activity on social media. His work has appeared on the PBS NewsHour, NOVA, Bill Moyers, American Experience, and WNET, and in The New York Times.
Laura Edelson is the Technical Director for Algorithmic Transparency at the National Conference on Citizenship and a PhD candidate in Computer Science at NYU’s Tandon School of Engineering. Laura studies online political communication and develops methods to identify inauthentic content and activity. Her research has powered reporting on political advertising on social media in The New York Times, The Wall Street Journal, and The Atlantic. Before entering academia, Laura was a software engineer at Palantir and FactSet, where her work focused on applied machine learning and big data.
Kaitlyn Dowling is the Project Manager for Algorithmic Transparency at the National Conference on Citizenship. Kaitlyn’s background in digital communications includes serving as the Senior Editor in the Information Disorder Lab, housed in the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School. In this role, she managed a team of researchers investigating and reporting on political mis- and disinformation spreading on the social web in the lead-up to the 2018 U.S. midterm elections. Kaitlyn has also helped a variety of organizations execute their communications strategies, including Harvard Law School, and served as Content Director of Women Online, a boutique digital PR and marketing firm. Her clients have included Obama for America 2012, Priorities USA, HSBC, and the Bill and Melinda Gates Foundation, among others. Kaitlyn graduated summa cum laude with a B.A. in political science from the University of New Hampshire.
Karishma Shah is the Census Disinformation Project Coordinator at the Algorithmic Transparency Institute. She is also a Fellow at the Royal Society for the Encouragement of Arts, Manufactures and Commerce.
While pursuing an MPhil in International Relations and a Certification in Intermediate Arabic at the University of Oxford, her research focused on how tech companies were addressing problematic content, such as election interference and terrorist material, on their platforms, and how governments and international institutions were attempting to regulate the companies on these issues. Prior to Oxford, she received her B.A. from Harvard College, where she pursued a joint concentration (Harvard’s equivalent of a double major) in Government and the Comparative Study of Religion, with a minor in Economics and a Foreign Language Citation in Spanish.
She enjoys interacting with individuals from around the world and is passionate about data and technology, anti-corruption initiatives, and human rights.