Using AI to Generate Harmful Content

Lightning talks feature five-minute, rapid-fire presentations with time for questions.

Moderated by Elena Cryst, Stanford University

  • From Open-Source to Primetime: The Making of an AI News Anchor
    Maty Bohacek, Stanford University
  • Artificial Deception: How Bad Actors Leverage AI to Spread Disinformation
    Sarah Brandt, NewsGuard
  • Generative AI Misuse: A Taxonomy of Tactics and Insights from Media Reports
    Rachel Xu, Google Jigsaw
  • Generative Propaganda: Evidence of AI’s Impact from a State-Backed Disinformation Campaign
    Patrick Warren, Clemson University Media Forensics Hub
  • Generative AI and the Changing Business of Propaganda
    Madeleine Daepp, Microsoft Research
  • The Future of Trust & Safety: Gen AI and Alleged Reality
    Amie White, ALT Ethics Consultants, Hinge

Digital Threats

Lightning talks feature five-minute, rapid-fire presentations with time for questions.

Moderated by Olivia Natan, UC Berkeley

  • Beyond Borders: Lessons from Ghana’s Fight Against Child Online Exploitation and CSAM – Implications for Africa: Challenges & Successes
    Emmanuel Adinkrah, Ghana Internet Safety Foundation
  • Assessing the Gendered Dimension of the Nexus between Data-Exploiting Cyberattacks and the Proliferation of Harmful Content Online
    Pavlina Pavlova, New America
  • Mean Megaphones: Criminal Advantages in Social Media Amplification
    Lucas Almeida, Northeastern University
  • Sextortion: Prevalence and Correlates in 10 Countries
    Rebecca Umbach, Google
  • AI and Your Amygdala: Partners in Cyber-Crime
    Scott Hellman, FBI
  • Keeping Online Marketplaces Safe in the World of AI
    Sarika Oak, Udemy
  • Uncovering a DPRK Hiring Scheme Targeting Remote IT Jobs
    Benjamin Racenberg, Nisos
  • Improving the Governance of Online Platforms with Truth Warrants
    Swapneel Mehta, Boston University and MIT
  • Mapping the Network Maze: Identifying and Tracking Coordinated Spam and Scam Campaigns on Social Media
    Fabio Giglietto, University of Urbino
  • Authentic or Artificial? AI’s Impact on Verification
    Steven Chua, Google

Polarization and Elections

Lightning talks feature five-minute, rapid-fire presentations with time for questions.

Moderated by Izzy Gainsburg, Stanford Polarization and Social Change Lab

  • Exploring the Interaction of Trust in Science and Vaccine Hesitancy
    Pranav Goel, Northeastern University
  • Otherization via Disinformation: Text, Context and the Bahaʼis in Iran
    Fares Hedayati, Baha’i International Community
  • Foreign Information Manipulation and Interference: Lessons from the EU Elections
    Rachele Gilman, Global Disinformation Index
  • Understanding Online Hate Speech in Context
    Thomas Davidson, Rutgers University
  • The Musk Effect: Changes in Twitter’s Misinformation and Partisan Composition
    Burak Oztura, Northeastern University
  • Measuring the Effects of Harmful Social Media Narratives in Conflict Settings
    Bailey Ulbricht, Stanford Law School
  • Where Do Election Deniers Get their News?
    Hong Qu, Northeastern University
  • Revolutionary Rhetoric: Moderating the fine line between Patriotism and Dangerous Speech
    Cathy Buerger, Dangerous Speech Project
  • Election Misinformation: A Case Study from Shasta County California
    Paul Spencer, Disability Rights California
  • Hate Speech and Misinformation on WhatsApp: Insights from a Large Data Donation Program in India & Brazil
    Kiran Garimella, Rutgers University
  • Content Moderation to Prevent or Counter Violent Radicalism in Pakistan: Perspectives of Social Media Activists
    Muhammad Rizwan Safdar

Building Trust & Safety

Lightning talks feature five-minute, rapid-fire presentations with time for questions.

Moderated by Dave Willner, Stanford University

  • Building Trust and Safety on Facebook
    Lluis Garcia Pueyo, Meta
  • “Thereʼs an Information war going on”: Understanding Motivations of Content Abusers
    Chelsea Johnson, LinkedIn
  • Safety Operations: Preventing Illegal and Harmful Behavior at Scale
    Tom Thorley, GitHub
  • Striking the Right Balance between Access to Information and User Safety: A Case Study of SafeSearch BLUR Design, Launch and Measurement from Trust & Safety Perspective
    Elzbieta Brzoz, Google
  • Digital Responses to Crises: An Action Plan for Platforms and CSOs Confronting Online Threats
    Rachelle Faust, National Democratic Institute
  • Leveraging User Surveys to Track Online Experiences: Lessons from 5 Waves of Neely Index Data Collection
    Juliana Schroeder, University of California, Berkeley
  • XR is not Social Media. And that’s a problem
    Michael Karanicolas, UCLA Institute for Technology, Law & Policy
  • A Strategic Approach to Navigating Integrity in Immersive Technologies
    Kelly Lundy, Meta

Understanding Algorithms and Online Environments

Lightning talks feature five-minute, rapid-fire presentations with time for questions.

Moderated by Tracy Navichoque, Stanford University

  • The Benefits of Optimizing for Quality Instead of Engagement
    Ravi Iyer, University of Southern California Neely Center
  • Understanding Platform Users’ Algorithmic Knowledge
    John Wihbey, Northeastern University
  • The Cursed Equilibrium of Algorithmic Traumatization
    Cristiana Firullo, Cornell University
  • AI Imaginaries Shape Identity Infusion and Digital Futures
    Bu Zhong, Hong Kong Baptist University
  • User or Algorithm? Investigating what drives Congenial and Problematic Consumption on YouTube
    Muhammad Haroon, University of California, Davis
  • Homogenizing Harm Across Realities: A Comparative Study of Web 2.0 and XR Community Guidelines
    Kyooeun Jang, University of Southern California

Session 1: Mental Health & Wellbeing / Session 2: Data Access

Lightning talks feature five-minute, rapid-fire presentations with time for questions. This session has two parts: (1) Mental Health & Wellbeing and (2) Data Access.

Session 1: Mental Health & Wellbeing

Moderated by Shubhi Mathur, Stanford Internet Observatory

  • 988 Suicide Crisis Services: How Online Discussions of Service Experiences can Improve Service Efficacy and Dissemination
    Nora Kelsall, Columbia Mailman School of Public Health, Department of Epidemiology
  • Building Bonds: Harnessing AI for Mental Health and Connection
    Yulia Sullivan, Baylor University
  • Exploring Interpretable Crisis Moderation Using LLMs and Diagnostic Inventories
    Karen Mosoyan, BlueFever
  • Social Contagion and #Sadtok: The Risks and Benefits of Teens Self-diagnosing Mental Health Disorders from Social Media
    Ian Dull, ReD Associates
  • Exploring the use of Virtual Reality for Content Moderators to Enhance Rapid Decompression from Occupational Stress during Short Wellness Breaks
    Natalie Campbell, TikTok

Session 2: Data Access

Moderated by Zakir Durumeric, Stanford University

  • Analyzing DSA Research Access
    Cameron Hickey, National Conference on Citizenship
  • Data Sharing in K-12 EdTech Mobile Apps: Looking Under the Hood
    Lisa LeVasseur, Internet Safety Labs
  • Making Social Media Safer Requires Meaningful Transparency
    Jeff Allen, Integrity Institute
  • An Incentive-Compatible Framework for Online Surveys with Sensitive Questions
    John Ternovski, US Air Force Academy
  • Behind the Curtain: Understanding the Datasets that Platforms Have and What You Can Learn with them
    Matt Motyl, Integrity Institute

Media Literacy

Lightning talks feature five-minute, rapid-fire presentations with time for questions.

Moderated by Angela Lee, Stanford University

  • Fact-checking Information Generated by a Large Language Model can decrease Headline Discernment
    Matthew DeVerna, Indiana University
  • Thoroughly Tracking the Takes and Trajectories of News Narratives from Trustworthy and Worrisome Websites
    Hans Hanley, Stanford University
  • Navigating Online Information Spaces: Strategies to Counteract Online Misinformation and Enhance Trust
    Lonnie Shumsky, Stanford Social Media Lab
  • Reducing Misinformation Sharing at Scale using Digital Accuracy Prompt Ads
    Hause Lin, Massachusetts Institute of Technology
  • Correcting Misinformation with a Large Language Model
    Xinyi Zhou, University of Washington
  • The Effect of AI Labeling on Perceptions of Images
    Zeve Sanderson, NYU Center for Social Media & Politics
  • Community-based fact-checking reduces the spread of misleading posts on social media
    Thomas Renault, Université Paris 1 Panthéon – Sorbonne
  • Building Resilience to Misinformation in Communities of Color: Results from Two Studies of Tailored Digital Media Literacy Interventions
    Ryan Moore, Stanford University
  • How Scientific Retractions Enable Further Misinformation (and What to Do About it)
    Rod Abhari, Northwestern University
  • Labeling AI-Generated Content: Promises, Perils, and Future Directions
    Zivvy Epstein, MIT

Regulation

Lightning talks feature five-minute, rapid-fire presentations with time for questions.

Moderated by Daphne Keller, Stanford University

  • Burden of Proof: Lessons Learned for Regulators from The Oversight Boardʼs Implementation Work
    Manuel Parra Yagnam, Oversight Board
  • A Risk-Based Approach to Age Assurance
    Cami Goray, University of Michigan
  • Navigating New Frontiers: Article 21 of the Digital Services Act and the Future of Content Moderation
    Raphael Kneer, User Rights GmbH
  • Regulating ʻTrust and Safetyʼ Under the Digital Services Act
    Linda Weigl, University of Amsterdam
  • The EU Digital Services Act: Takeaways from One Year of Compliance
    Gerard de Graaf, European Commission, EU Office in San Francisco
  • Latest Developments on Children’s Rights Online
    James R. Marsh, Marsh Law
  • Brussels’ Effect Limited? Perspectives from Japan and Canada on Online Harm Legislation
    Toru Maruhashi, Meiji University
  • Localizing Policies and Data for Online Spaces Free from Violence
    Katherine Townsend, Open Data Collaborative
  • The Role of International Standards in Aligning Age Verification
    Alex Zeig, The Age Verification Providers Association
  • Whose Free Speech?
    Belen Bricchi, Duke University
  • Do General-Purpose AI Models Comply with the EU AI Act?
    Kevin Klyman, Stanford University

AI for Content Moderation

Lightning talks feature five-minute, rapid-fire presentations with time for questions.

Moderated by Samidh Chakrabarti, Stanford University

  • Using LLMs for Labeling Tasks: Progress and Potential Risks
    Dave Willner, Stanford University
  • GenAI/LLMs tech is Swiss Army Knife for Guardians of the Internet
    Shiwani Gupta, Google
  • Navigating the Landscape of Automated Content Moderation: Insights from Ofcom’s Research
    Pedro Freire, Ofcom – UK Office of Communications
  • Utility of Generative AI vs Discriminative AI for Content Moderation
    Tom Siegel, TrustLab, Inc
  • Identifying Best Practices for the Use of AI and Automation to Detect, Enforce, and Review Abusive Content and Behavior
    David Sullivan, Digital Trust & Safety Partnership
  • Harmful YouTube Video Detection: A Taxonomy of Online Harm and MLLMs (GPT) as Alternative Annotators
    Claire Wonjeong Jo, University of California Davis
  • Contested Pathways to Trusted and Safe AI through Third-Party Audits
    Chris Tenove, University of British Columbia
  • Lessons Learned: Prepping for AI Automation in Trust & Safety Operations
    Jimin Lee, Change.org

Lightning Talks: Future of Search

A special lightning talk session with a panel discussion that looks at the Trust & Safety issues unique to search products and novel ways their harms can be reduced.

Co-Moderated by Ronald Robertson, Stanford Internet Observatory and Daniel Griffin, Trieve.ai

  • LLMs and Web Search: Questioning the Impact on User Subjectivities and the Findability of Knowledge
    Nora Freya Lindemann, University of Osnabrück, Germany
  • Examining The Influence of AI-Generated Search Results on User Behavior and Trust in Search Outputs
    Aleksandra Urman, University of Zurich
  • Building Responsible Meta AI Search Systems
    Yvonne Lee, Meta
  • New Contexts, Old Heuristics: How Young People in India and the US Trust Online Content in the Age of Generative AI
    Rachel Xu, Google Jigsaw
  • Circle to Search: A Case Study in User-Centric Privacy
    Mary Ioannidis, Google
  • Good AI Legal Help, Bad AI Legal Help
    Margaret Darin Hagan, Stanford Legal Design Lab
  • Searching for a New Search Algorithm
    Will Bryk, Exa
  • The Future Of Trust In LLMs — Lessons From You.com
    Bryan McCann, You.com