Lightning talks feature five-minute, rapid-fire presentations with time for questions.
Moderated by Olivia Natan, UC Berkeley
Beyond Borders: Lessons from Ghana’s Fight Against Child Online Exploitation and CSAM – Implications for Africa: Challenges & Successes
Emmanuel Adinkrah, Ghana Internet Safety Foundation
Assessing the Gendered Dimension of the Nexus between Data-Exploiting Cyberattacks and the Proliferation of Harmful Content Online
Pavlina Pavlova, New America
Mean Megaphones: Criminal Advantages in Social Media Amplification
Lucas Almeida, Northeastern University
Sextortion: Prevalence and Correlates in 10 Countries
Rebecca Umbach, Google
AI and Your Amygdala: Partners in Cyber-Crime
Scott Hellman, FBI
Keeping Online Marketplaces Safe in the World of AI
Sarika Oak, Udemy
Uncovering a DPRK Hiring Scheme Targeting Remote IT Jobs
Benjamin Racenberg, Nisos
Improving the Governance of Online Platforms with Truth Warrants
Swapneel Mehta, Boston University and MIT
Mapping the Network Maze: Identifying and Tracking Coordinated Spam and Scam Campaigns on Social Media
Fabio Giglietto, University of Urbino
Authentic or Artificial? AI’s Impact on Verification
Steven Chua, Google
Lightning talks feature five-minute, rapid-fire presentations with time for questions.
Moderated by Dave Willner, Stanford University
Building Trust and Safety on Facebook
Lluis Garcia Pueyo, Meta
“There’s an Information War Going On”: Understanding Motivations of Content Abusers
Chelsea Johnson, LinkedIn
Safety Operations: Preventing Illegal and Harmful Behavior at Scale
Tom Thorley, GitHub
Striking the Right Balance between Access to Information and User Safety: A Case Study of SafeSearch BLUR Design, Launch, and Measurement from a Trust & Safety Perspective
Elzbieta Brzoz, Google
Digital Responses to Crises: An Action Plan for Platforms and CSOs Confronting Online Threats
Rachelle Faust, National Democratic Institute
Leveraging User Surveys to Track Online Experiences: Lessons from 5 Waves of Neely Index Data Collection
Juliana Schroeder, University of California, Berkeley
XR Is Not Social Media, and That’s a Problem
Michael Karanicolas, UCLA Institute for Technology, Law & Policy
A Strategic Approach to Navigating Integrity in Immersive Technologies
Kelly Lundy, Meta
Lightning talks feature five-minute, rapid-fire presentations with time for questions. This session has two parts: (1) Mental Health & Wellbeing and (2) Data Access.
Session 1: Mental Health & Wellbeing
Moderated by Shubhi Mathur, Stanford Internet Observatory
988 Suicide Crisis Services: How Online Discussions of Service Experiences Can Improve Service Efficacy and Dissemination
Nora Kelsall, Columbia Mailman School of Public Health, Department of Epidemiology
Building Bonds: Harnessing AI for Mental Health and Connection
Yulia Sullivan, Baylor University
Exploring Interpretable Crisis Moderation Using LLMs and Diagnostic Inventories
Karen Mosoyan, BlueFever
Social Contagion and #Sadtok: The Risks and Benefits of Teens Self-Diagnosing Mental Health Disorders from Social Media
Ian Dull, ReD Associates
Exploring the Use of Virtual Reality for Content Moderators to Enhance Rapid Decompression from Occupational Stress during Short Wellness Breaks
Natalie Campbell, TikTok
Session 2: Data Access
Moderated by Zakir Durumeric, Stanford University
Analyzing DSA Research Access
Cameron Hickey, National Conference on Citizenship
Data Sharing in K-12 EdTech Mobile Apps: Looking Under the Hood
Lisa LeVasseur, Internet Safety Labs
Making Social Media Safer Requires Meaningful Transparency
Jeff Allen, Integrity Institute
An Incentive-Compatible Framework for Online Surveys with Sensitive Questions
John Ternovski, US Air Force Academy
Behind the Curtain: Understanding the Datasets that Platforms Have and What You Can Learn with Them
Matt Motyl, Integrity Institute
Lightning talks feature five-minute, rapid-fire presentations with time for questions.
Moderated by Angela Lee, Stanford University
Fact-checking Information Generated by a Large Language Model Can Decrease Headline Discernment
Matthew DeVerna, Indiana University
Thoroughly Tracking the Takes and Trajectories of News Narratives from Trustworthy and Worrisome Websites
Hans Hanley, Stanford University
Navigating Online Information Spaces: Strategies to Counteract Online Misinformation and Enhance Trust
Lonnie Shumsky, Stanford Social Media Lab
Reducing Misinformation Sharing at Scale Using Digital Accuracy Prompt Ads
Hause Lin, Massachusetts Institute of Technology
Correcting Misinformation with a Large Language Model
Xinyi Zhou, University of Washington
The Effect of AI Labeling on Perceptions of Images
Zeve Sanderson, NYU Center for Social Media & Politics
Community-Based Fact-Checking Reduces the Spread of Misleading Posts on Social Media
Thomas Renault, Université Paris 1 Panthéon – Sorbonne
Building Resilience to Misinformation in Communities of Color: Results from Two Studies of Tailored Digital Media Literacy Interventions
Ryan Moore, Stanford University
How Scientific Retractions Enable Further Misinformation (and What to Do About It)
Rod Abhari, Northwestern University
Labeling AI-Generated Content: Promises, Perils, and Future Directions
Zivvy Epstein, MIT
Lightning talks feature five-minute, rapid-fire presentations with time for questions.
Moderated by Samidh Chakrabarti, Stanford University
Using LLMs for Labeling Tasks: Progress and Potential Risks
Dave Willner, Stanford University
GenAI/LLM Tech Is a Swiss Army Knife for Guardians of the Internet
Shiwani Gupta, Google
Navigating the Landscape of Automated Content Moderation: Insights from Ofcom’s Research
Pedro Freire, Ofcom – UK Office of Communications
Utility of Generative AI vs. Discriminative AI for Content Moderation
Tom Siegel, TrustLab, Inc.
Identifying Best Practices for the Use of AI and Automation to Detect, Enforce, and Review Abusive Content and Behavior
David Sullivan, Digital Trust & Safety Partnership
Harmful YouTube Video Detection: A Taxonomy of Online Harm and MLLMs (GPT) as Alternative Annotators
Claire Wonjeong Jo, University of California, Davis
Contested Pathways to Trusted and Safe AI through Third-Party Audits
Chris Tenove, University of British Columbia
Lessons Learned: Prepping for AI Automation in Trust & Safety Operations
Jimin Lee, Change.org
A special lightning talk session with a panel discussion that examines the Trust & Safety issues unique to search products and novel ways to reduce their harms.
Co-Moderated by Ronald Robertson, Stanford Internet Observatory and Daniel Griffin, Trieve.ai
LLMs and Web Search: Questioning the Impact on User Subjectivities and the Findability of Knowledge
Nora Freya Lindemann, University of Osnabrück, Germany
Examining the Influence of AI-Generated Search Results on User Behavior and Trust in Search Outputs
Aleksandra Urman, University of Zurich
Building Responsible Meta AI Search Systems
Yvonne Lee, Meta
New Contexts, Old Heuristics: How Young People in India and the US Trust Online Content in the Age of Generative AI
Rachel Xu, Google Jigsaw
Circle to Search: A Case Study in User-Centric Privacy
Mary Ioannidis, Google
Good AI Legal Help, Bad AI Legal Help
Margaret Darin Hagan, Stanford Legal Design Lab
Searching for a New Search Algorithm
Will Bryk, Exa
The Future of Trust in LLMs: Lessons From You.com
Bryan McCann, You.com