Dr. David Corney

Keynote: How AI Can Help Fact Checkers Fight Bad Information

Bad information ruins lives.

It harms our communities, spreads hate, hurts our democracy and leads to bad decisions. Like all fact checkers, Full Fact aims to change debate for the better. We start by monitoring discussions in the public sphere and verifying whether the claims being made are in fact supported by the evidence. We then ask people to correct the record when they get things wrong, campaign for better information in public life and develop new technology to counter misleading claims. We have been developing technology to help increase the speed, scale and impact of fact checking. We are not trying to replace fact checkers with technology, but rather to empower fact checkers with the best tools. After talking with many fact checkers, we’ve identified three key areas where technology can help:

(1) Know the most important thing to be fact checking each day

(2) Know when someone repeats something they already know to be false

(3) Check things in as close to real-time as possible

In this talk, I will describe the tools we are building and the technology behind them, from simple keyword matching through information retrieval algorithms and large language models. I’ll also describe some of the journey we’ve been on through the development process.

Dr. David Corney

PhD in Machine Learning

 

New Frontiers in Tools for Fact-Checkers


 

Dr. Scott Hale

Dr Scott A. Hale is an Associate Professor and Senior Research Fellow at the OII and a Fellow of the Alan Turing Institute. He develops and applies techniques from computer science to research questions in the social sciences. His research seeks to enable more equitable access to quality information and investigates the spread of information between speakers of different languages online, the roles of bilingual Internet users, collective action and mobilization, hate speech, and misinformation.

Scott graduated with degrees in Computer Science, Mathematics, and Spanish from Eckerd College, FL, USA. During his time at Eckerd, he published computer science research in the area of image processing while working on a larger research project, Darwin, to uniquely identify dolphins from digital photographs. After graduating, he worked in Okinawa, Japan, at the Okinawa Prefectural Education Centre with public school teachers to develop English immersion curricula and with IT professionals to deliver continuing education training through the Internet to staff members and students on outlying islands. He came to the OII as a master’s candidate in October 2009 and completed his DPhil (PhD) at the department in 2015. His DPhil research concentrated on how the design of social media platforms affects the amount of information shared across language divides.

 

Dr. David Corney

Dr David Corney joined Full Fact in 2019 as a data scientist specialising in natural language processing. He helps bring AI into Full Fact’s tools to better support fact checkers and other colleagues. This includes training large language models; training regular machine learning models; gathering and annotating data; and working closely with academics. David completed his PhD in machine learning 20 years ago and has spent the intervening time working in academia and for tech startups. He has developed numerous tools to analyse news articles, social media and other sources of text, as well as projects in visual neuroscience and botanical imaging.

 

Shalini Joshi

Shalini Joshi is a Program Director for Asia Pacific at Meedan. As a regional lead, Shalini is involved in expanding Meedan’s work and its global network in the Asia Pacific region. Shalini provides support to fact-checkers, newsrooms and academics involved in addressing and researching misinformation.
Shalini is also the co-founder of Khabar Lahariya, India’s only independent, digital news network available to viewers in remote rural areas and small towns.

 

Bias/Fairness in Dis-/Misinformation Studies


(Labeler Demographics, Race/Ethnicity, Issues in This Area and What We Can Do Better) 

 

Dr. Rachel Moran

 

Rachel Moran is a Postdoctoral Fellow at the Center for an Informed Public at the University of Washington’s Information School. She received her doctoral degree from the Annenberg School for Communication and Journalism at the University of Southern California. Her research explores the role of trust in digital information environments and is particularly concerned with how trust is implicated in the spread of mis- and disinformation. Her research has been published in Information, Communication & Society; Digital Journalism; Journalism Practice; Media, Culture & Society; and Telecommunications Policy. She has a BA and an MA in Social and Political Science from Cambridge University and an MA in Political Communications from Goldsmiths College, University of London. She is also a Fellow at the George Washington University’s Institute for Data, Democracy & Politics.

 

Dr. Sukrit Venkatagiri

 

Sukrit Venkatagiri is a Postdoctoral Researcher at the University of Washington’s Center for an Informed Public and the Department of Human Centered Design and Engineering. Starting Fall 2023, he will be an Assistant Professor in the Department of Computer Science at Swarthmore College.

He designs and evaluates sociotechnical systems to combat mis- and disinformation. To build these systems, he works with professional investigators — such as journalists, researchers, and human rights activists — as well as content moderators.

In his research, he takes a mixed-methods approach that draws upon his training in computer science and human–computer interaction. He first conducts qualitative inquiry to understand professional work practice and collective action “in the wild.” Next, he builds social computing and crowdsourcing tools to augment this work. He then empirically evaluates these tools through experiments, log analysis, user studies, and longitudinal deployments.

His work has been published in ACM CSCW, ACM CHI, AAAI HCOMP, and the Journal of Librarianship & Information Science. In the past, he has interned at Meta (Facebook) and Microsoft Research.

 

Angie Holan

 

Angie Drobnic Holan is the editor-in-chief of the Poynter Institute’s Pulitzer Prize-winning fact-checking website PolitiFact. She is currently on leave as a Nieman fellow at Harvard University, studying journalism’s ability to influence the preservation of democracy. She is an expert on fact-checking election campaigns and the federal government, as well as debunking online misinformation. She serves on the advisory board of the International Fact-Checking Network. She holds dual master’s degrees in journalism and library science and is a graduate of the Plan II program at the University of Texas at Austin.

Event Hosts


 

Dr. Dhiraj Murthy

 

Dhiraj Murthy is a Professor at the University of Texas at Austin. His research explores the intersections of social media, misinformation/disinformation, and race/ethnicity. Dr. Murthy has edited 3 journal special issues and authored over 70 articles, book chapters, and papers. Murthy wrote the first scholarly book about Twitter (second edition published by Polity Press, 2018). Dr. Murthy founded and directs the Computational Media Lab at UT Austin. He has chaired and co-chaired international social media conferences and serves on the advisory board of MediaWell, an anti-misinformation initiative by the Social Science Research Council (SSRC). His publications can be found at https://www.dhirajmurthy.com/about/

 

Dr. Matthew Lease

 

Matthew received degrees in Computer Science from Brown University (PhD, MSc) and the University of Washington (BSc). His research on information retrieval and crowdsourcing was recognized by three Early Career awards: from the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Institute of Museum and Library Services (IMLS). More recent honors include Best Student Paper at the 2019 European Conference on Information Retrieval (ECIR). Lease’s industry experience includes stints at Intel Research, computer game company HyperBole Studios, image compression startup LizardTech, crowdsourcing startup CrowdFlower, and Amazon.

 

Dr. Greg Durrett 

 

Greg Durrett is an Assistant Professor in Computer Science. His current research covers a range of topics in statistical natural language processing, including coreference resolution, entity linking, document summarization, and question answering. Solving these problems lets computers access the information in unstructured text and transform this information in structured ways. Greg received his Ph.D. from UC Berkeley in 2016, where he was a part of the Berkeley NLP Group. He completed his B.S. at MIT in Computer Science and Mathematics in 2010.

 


Dr. Maria De-Arteaga

 

Maria is an Assistant Professor in the Information, Risk and Operations Management Department at the McCombs School of Business at the University of Texas at Austin. She is also a core faculty member in the interdepartmental Machine Learning Laboratory and a Good Systems researcher. She holds a joint PhD in Machine Learning and Public Policy from Carnegie Mellon University’s Machine Learning Department and Heinz College.
Her research is focused on algorithmic fairness and human-AI complementarity. As part of her work, she characterizes how societal biases encoded in historical data may be reproduced and amplified by ML models, and develops algorithms to mitigate these risks. Moreover, effective human-AI collaboration is often complicated by other factors, such as the fact that experts often care about constructs that are not well captured in the available labels. In her research, she aims to understand the limits and risks of using ML in these contexts, and to develop human-centered ML that can improve expert decision-making. 
She is currently co-Chair of Diversity & Inclusion for FAccT 2021-2022, and local arrangements co-Chair for WITS 2021. In 2017 she co-founded ML4D, and now serves on its Steering Committee.

 


Dr. Jo Lukito 

 

Jo Lukito’s ongoing work focuses on the multi-platform spread of misinformation, disinformation, and unverified conspiracy theories in democracies, including the amplification of such messages (sometimes unintentionally) by news media and political actors.

A first-generation undergraduate and graduate student, Jo earned her B.A. in Political Science and Communication at the State University of New York, Geneseo, where she published her first paper. She earned her M.A. in Media Studies at the Newhouse School of Public Communications at Syracuse University. A version of her thesis, “Linguistic Abstractness as a Discursive Microframe: LCM Framing in International Reporting by American News Media,” received the second top student paper award at the 2015 Association for Education in Journalism and Mass Communication (AEJMC) conference. 

Jo earned her Ph.D. in Mass Communication from the University of Wisconsin-Madison in the Summer of 2020.