Note: This is a tentative agenda and is subject to change.
Reflecting on the state of “misinformation” research during a turbulent time
Kate Starbird is an Associate Professor in the Department of Human Centered Design & Engineering (HCDE) at the University of Washington (UW) and is currently the Director of the UW Center for an Informed Public. Dr. Starbird’s research sits at the intersection of human-computer interaction and the field of crisis informatics, i.e., the study […]
Mental Health and Artificial Intelligence
This panel will examine emerging insights into the interplay between artificial intelligence and mental health. In particular, we will discuss how applying AI to mental healthcare could affect people in the short and long term, how it could expand access for underserved communities and reduce costs, and what challenges must be overcome before these benefits can be realized.
Policy Proposals
Lightning Talks feature five-minute, rapid-fire presentations on research, product ideas, and survey findings, with time for questions.
Transparency Reporting with Ofcom
Ofcom, the UK’s communications regulator and future UK online safety regulator, is in the early stages of developing its future transparency reporting regime for online platforms. This workshop will bring together Trust & Safety practitioners, academics, researchers, and members of civil society for a conversation around transparency reporting and meaningful metrics. The discussions will inform Ofcom’s development of its transparency requirements for platforms, allowing key stakeholders to provide early input into its regulatory strategy. The workshop will feature a brainstorming session on meaningful metrics for different kinds of online platforms, a discussion of how transparency reporting can best benefit different audiences, and an analysis of lessons learned from platforms’ first transparency reports provided under the Digital Services Act.
Trust and Safety in Search
A special lightning talk session that looks at the trust and safety issues unique to search products and novel ways to reduce their harms.
Policy Proposals in the Public Sector
Five studies on the role of the public sector in online safety.
Instagram: Exploring Tradeoffs in Ranking Algorithms
Social media platforms use ranking algorithms to distribute online content and help people discover information. In light of the growing public debate on the role of algorithmic amplification in society, this workshop is designed to break down the ‘black box’ around how ranking works on Instagram and facilitate a dialogue within the trust and safety community about its current approach to ranking and ways to improve transparency and user agency. The workshop will feature a brainstorming session about the pros and cons of different ranking methods, facilitate a candid exchange about the challenges and tradeoffs platforms face, and provide an avenue to inform industry approaches.
Evaluating Digital Literacy Interventions Across Platforms
Misinformation mitigation matters at every level, from fostering a well-informed and cohesive society, to ensuring the smooth functioning of organizations, to helping individuals lead contented lives. In this panel, we have assembled a distinguished group of experts from academia, industry, and the non-profit sector to delve into cutting-edge approaches for countering misinformation. Our discussion will span a broad range of topics: long-term initiatives to establish trustworthy organizations that stand as beacons against misinformation; proactive methods to debunk myths before they gain traction; digital literacy interventions aimed at equipping individuals with the skills to discern factual content from misleading information online; scalability techniques to amplify these strategies for wider reach and greater impact; and assessment methods to measure the success and influence of these mitigation strategies.
Trust and Safety Tooling
Lightning Talks feature five-minute, rapid-fire presentations on research, product ideas, and survey findings, with time for questions.
Content Moderation and Detection
Lightning Talks feature five-minute, rapid-fire presentations on research, product ideas, and survey findings, with time for questions.
Artificial Intelligence and Trust and Safety: New Risks, New Solutions
Five studies examining how artificial intelligence may mitigate or exacerbate harms.
Happy Hour and Poster Session
Join us at the happy hour for a poster session of student-collaborator projects.
Moderated Content Live!
A special live recording of Stanford Law School and Stanford Cyber Policy Center’s Moderated Content, a podcast about content moderation, with Evelyn Douek and Alex Stamos. The community standards of this podcast prohibit anything except the wonkiest conversations about the regulation, both public and private, of what you see, hear, and do online.
A Platform-University Collaboration: Three Independent but Coordinated Studies on Crowdsourced Misinformation Judgments
How well can crowd workers judge whether individual news articles contain harmful misinformation? How can the task be structured to improve the quality and reduce the partisanship of their judgments? And how should quality be measured when journalist judgments, the best available gold standard, are not uniform? Three university research teams independently conducted empirical studies. They met in two private workshops, in 2019 and 2020, to share study designs and initial findings. Some articles and journalist judgments on those articles were shared between studies to enhance the comparability of results. Facebook convened the two workshops, provided some of the articles, and funded one of the three studies.
Civic and Harmful Content
Five studies examining the classification, spread, or effects of civic, false, and violent content.
Generative Artificial Intelligence
Lightning Talks feature five-minute, rapid-fire presentations on research, product ideas, and survey findings, with time for questions.
Self-Harm Contagion
Social media emerged as a rapidly growing constellation of platforms in which people connect, share content, and interact with one another in ways that were not previously available. While exposure to these apps carries both identified risks and benefits, of particular concern is the potential for harmful social contagion via social media. The most common example is suicide contagion; however, other kinds of behavioral contagion have surfaced due to the unique nature of social media platforms. Contagion effects have been observed with challenges and hoaxes, with self-harm, and with the development of pro-eating-disorder online communities that promote disordered relationships with food, tout the thin ideal, and paint eating disorder behaviors as desirable. Social media platforms provide a seemingly perfect storm for individuals who already struggle with identity instability or body image concerns to form communities that may fortify and spread eating disorder behaviors and thoughts. This is a complex issue: while participating in these communities carries clear risks, individuals also find support in discussing taboo topics. More discussion and clearer guidelines are needed across these topics so that they can be incorporated into policy and clinical conversations.
The Experience of Reporting Abuse Online
Four studies on product design and victim experience of user reporting flows.
Misinformation and News
Lightning Talks feature five-minute, rapid-fire presentations on research, product ideas, and survey findings, with time for questions.
Industry <> Researcher Collaboration: Sharing Insights from the Tech Coalition Safe Online Research Fund
As efforts to increase connectivity and invigorate the digital innovation ecosystem continue, there is an opportunity to promote cohesive approaches that make digital platforms safer for all users, especially the most vulnerable: children and young people. The Tech Coalition Safe Online Research Fund focuses specifically on efforts to combat online child sexual exploitation and abuse, and the lessons learned from this initiative apply broadly to the Trust & Safety community working at the intersection of independent research and industry policy and practice. The purpose of the Research Fund is to generate impact through the development of tools and resources that both the tech industry and researchers can use in service of the shared mission of keeping children safe online. In this session, you will hear about the unique way companies and independent researchers engage through the Tech Coalition Safe Online Research Fund; learn about insights and impact from the independent research to date; and have the chance to exchange ideas with collaborators in this novel initiative.
Global Attitudes and Online Harm
Four studies on the ways online harms are handled in diverse global contexts.
Scaling Content Moderation to New Harms
Four studies on new ways of thinking about content moderation.
Applications of the Meta Content Library for Trust & Safety Research
This workshop will introduce Trust & Safety researchers to the Meta Content Library and demo new data fields and functionalities available in both the User Interface and the API. The Content Library gives researchers comprehensive access to posts, videos, photos, and reels posted to public Pages, Groups, and Events on Facebook, as well as robust metadata about these data types (e.g., view counts, reshares, and reactions). For Instagram, the library includes content from public posts, albums, videos, and photos from creator and business accounts. This hands-on introduction deploys research use cases to demonstrate how the Meta Content Library can shed light on questions related to online Trust & Safety through the application of natural language processing, regression, time-series analysis, qualitative analysis, data visualization, and more.
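As a taste of the hands-on portion, the Python sketch below illustrates the kind of programmatic time-series analysis the session describes. It is illustrative only: the client class, method, and field names are hypothetical stand-ins, not the actual Meta Content Library interface, which is covered in the workshop itself.

# Illustrative sketch only: MetaContentLibraryClient and its field names are
# hypothetical stand-ins for the real Meta Content Library interface.
from collections import Counter
from datetime import datetime

class MetaContentLibraryClient:
    # Hypothetical placeholder client; a real client would authenticate
    # and page through API results.
    def search(self, query, platform, since, until):
        return []  # would yield post dicts with content and metadata

client = MetaContentLibraryClient()
posts = client.search(query="flood relief scam", platform="facebook",
                      since="2024-01-01", until="2024-06-30")

# Simple time-series view: total view counts of matching posts per day,
# using the kind of engagement metadata the session mentions
# (views, reshares, reactions).
daily_views = Counter()
for post in posts:
    day = datetime.fromisoformat(post["created_time"]).date()
    daily_views[day] += post.get("view_count", 0)

for day in sorted(daily_views):
    print(day, daily_views[day])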
Trust and Safety Teaching Consortium
The Trust & Safety Teaching Consortium is a coalition of academic, industry, and non-profit experts in online trust and safety problems. The consortium’s goal is to create content that can be used to teach a range of audiences about trust and safety issues in a wide variety of formats. Join members of the consortium in a discussion of how to grow and enhance the consortium and make it as usable as possible.
Platform Policy
Five studies on ways platforms can and do self-regulate around complex spaces in content moderation.
TikTok Research API Workshop for Academics
In this workshop, academic researchers will learn about TikTok’s Research API and have the opportunity to walk through sample queries. Registration instructions will be shared in advance.
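For attendees who want a preview, below is a minimal Python sketch of a Research API video query modeled on TikTok’s published documentation. Treat the endpoint, fields, and query operators as assumptions to verify against the current docs; access requires an approved research account and a valid OAuth token.

# Sketch of a TikTok Research API video query (assumes an approved research
# account; endpoint and fields follow TikTok's public docs but may change).
import requests

ACCESS_TOKEN = "YOUR_RESEARCH_API_TOKEN"  # from the OAuth client-credentials flow

resp = requests.post(
    "https://open.tiktokapis.com/v2/research/video/query/",
    params={"fields": "id,create_time,hashtag_names,view_count"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        # Find US videos tagged #learnontiktok in a 30-day window.
        "query": {"and": [
            {"operation": "EQ", "field_name": "region_code",
             "field_values": ["US"]},
            {"operation": "IN", "field_name": "hashtag_name",
             "field_values": ["learnontiktok"]},
        ]},
        "start_date": "20240101",
        "end_date": "20240130",
        "max_count": 100,
    },
    timeout=30,
)
resp.raise_for_status()
for video in resp.json()["data"]["videos"]:
    print(video["id"], video.get("view_count"))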
Fireside Chat: Yoel Roth and Alex Stamos
Yoel Roth, Knight Visiting Scholar, University of Pennsylvania