Kurt Thomas
Authored Publications
Toxic comments are the top form of hate and harassment experienced online. While many studies have investigated the types of toxic comments posted online, the effects that such content has on people, and the impact of potential defenses, no study has captured the behaviors of the accounts that post toxic comments or how such attacks are operationalized. In this paper, we present a measurement study of 929K accounts that post toxic comments on Reddit over an 18-month period. Combined, these accounts posted over 14 million toxic comments that encompass insults, identity attacks, threats of violence, and sexual harassment. We explore the impact that these accounts have on Reddit, the targeting strategies that abusive accounts adopt, and the distinct patterns that distinguish classes of abusive accounts. Our analysis informs the nuanced interventions needed to curb unwanted toxic behaviors online.
"Millions of people are watching you": Understanding the digital safety needs of creators
Patrawat Samermit
Patrick Gage Kelley
Tara Matthews
Vanessia Wu
(2023)
Online content creators---who create and share their content on platforms such as Instagram, TikTok, Twitch, and YouTube---are uniquely at risk of increased digital-safety threats due to their public prominence, the diverse social norms of wide-ranging audiences, and their access to audience members as a valuable resource. We interviewed 23 creators to understand their digital-safety experiences. This includes the security, privacy, and abuse threats they have experienced across multiple platforms and how the threats have changed over time. We also examined the protective practices they have employed to stay safer, including tensions in how they adopt the practices. We found that creators have diverse threat models that take into consideration their emotional, physical, relational, and financial safety. Most adopted protections---including distancing from technology, moderating their communities, and seeking external or social support---only after experiencing a serious safety incident. Lessons from their experiences help us better prepare and protect creators and ensure a diversity of voices are present online.
Understanding Digital-Safety Experiences of Youth in the U.S.
Diana Freed
Natalie N. Bazarova
Eunice Han
Patrick Gage Kelley
Dan Cosley
The ACM CHI Conference on Human Factors in Computing Systems, ACM (2023)
The seamless integration of technology into the lives of youth has raised concerns about their digital safety. While prior work has explored youth experiences with physical, sexual, and emotional threats—such as bullying and trafficking—a comprehensive and in-depth understanding of the myriad threats that youth experience is needed. By synthesizing the perspectives of 36 youth and 65 adult participants from the U.S., we provide an overview of today’s complex digital-safety landscape. We describe attacks youth experienced, how these moved across platforms and into the physical world, and the resulting harms. We also describe protective practices the youth and the adults who support them took to prevent, mitigate, and recover from attacks, and key barriers to doing this effectively. Our findings provide a broad perspective to help improve digital safety for youth and set directions for future work.
Online hate and harassment poses a threat to the digital safety of people globally. In light of this risk, there is a need to equip as many people as possible with advice to stay safer online. We interviewed 24 experts to understand what threats and advice internet users should prioritize to prevent or mitigate harm. As part of this, we asked experts to evaluate 45 pieces of existing hate-and-harassment-specific digital-safety advice to understand why they felt advice was viable or not. We find that experts frequently had competing perspectives for which threats and advice they would prioritize. We synthesize sources of disagreement, while also highlighting the primary threats and advice where experts concurred. Our results inform immediate efforts to protect users from online hate and harassment, as well as more expansive socio-technical efforts to establish enduring safety.
SoK: A Framework for Unifying At-Risk User Research
Noel Warford
Tara Matthews
Kaitlyn Yang
Omer Akgul
Patrick Gage Kelley
Nathan Malkin
Michelle L. Mazurek
(2022)
At-risk users are people who experience risk factors that augment or amplify their chances of being digitally attacked and/or suffering disproportionate harms. In this systematization work, we present a framework for reasoning about at-risk users based on a wide-ranging meta-analysis of 95 papers. Across the varied populations that we examined (e.g., children, activists, people with disabilities), we identified 10 unifying contextual risk factors—such as marginalization and access to a sensitive resource—that augment or amplify digital-safety risks and their resulting harms. We also identified technical and non-technical practices that at-risk users adopt to attempt to protect themselves from digital-safety risks. We use this framework to discuss barriers that limit at-risk users’ ability or willingness to take protective actions. We believe that researchers and technology creators can use our framework to identify and shape research investments to benefit at-risk users, and to guide technology design to better support at-risk users.
Designing Toxic Content Classification for a Diversity of Perspectives
Deepak Kumar
Patrick Gage Kelley
Joshua Mason
Zakir Durumeric
Michael Bailey
(2021)
In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at-risk of harassment—such as people who identify as LGBTQ+ or young adults—are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.
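For readers unfamiliar with the kind of one-size-fits-all classifier the abstract critiques, below is a minimal sketch of how a single comment is scored with Jigsaw's public Perspective API. The request shape follows the API's documented `comments:analyze` method; the comment text is a placeholder, and actually sending the request would require an API key, which is omitted here.

```python
import json

# Public endpoint for Perspective's comments:analyze method (an API key
# parameter would need to be appended to actually send a request).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_toxicity_request(comment_text: str) -> dict:
    """Build the JSON body asking Perspective to score one comment
    on the single TOXICITY attribute (a probability-like score in [0, 1])."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

# Placeholder comment text, for illustration only.
payload = build_toxicity_request("You are a wonderful person.")
print(json.dumps(payload, indent=2))
```

Because every user's comment is scored against the same model, the returned TOXICITY score is identical regardless of who is reading; the paper's personalized tuning adjusts the decision threshold or model per reader group instead.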
“Why wouldn’t someone think of democracy as a target?”: Security practices & challenges of people involved with U.S. political campaigns
Patrick Gage Kelley
Tara Matthews
Lee Carosi Dunn
Proceedings of the USENIX Security Symposium (2021)
People who are involved with political campaigns face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. Improving political campaign security is a vital part of protecting democracy. To identify campaign security issues, we conducted qualitative research with 28 participants across the U.S. political spectrum to understand the digital security practices, challenges, and perceptions of people involved in campaigns. An overarching finding is that a unique combination of threats, constraints, and work culture leads people involved with political campaigns to use technologies from across platforms and domains in ways that leave them—and democracy—vulnerable to security attacks. Sensitive data was kept in a plethora of personal and work accounts, with ad hoc adoption of strong passwords, two-factor authentication, encryption, and access controls. No individual company, committee, organization, campaign, or academic institution can solve the identified problems on their own. To this end, we provide an initial understanding of this complex problem space and recommendations for how a diverse group of experts can begin working together to improve security for political campaigns.
SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
Devdatta Akhawe
Michael Bailey
Dan Boneh
Nicola Dell
Zakir Durumeric
Patrick Gage Kelley
Deepak Kumar
Damon McCoy
Sarah Meiklejohn
Thomas Ristenpart
Gianluca Stringhini
(2021)
We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks---such as toxic content and surveillance---that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment is a pervasive, growing experience for online users, particularly for at-risk communities like young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy, and we highlight five potential research directions that ultimately empower individuals, communities, and platforms to do so.
Who is targeted by email-based phishing and malware? Measuring factors that differentiate risk
Camelia Simoiu
Proceedings of the Internet Measurement Conference (2020)
As technologies to defend against phishing and malware often impose an additional financial and usability cost on users (such as security keys), a question remains as to who should adopt these heightened protections. We measure over 1.2 billion email-based phishing and malware attacks against Gmail users to understand what factors place a person at heightened risk of attack. We find that attack campaigns are typically short-lived and at first glance indiscriminately target users on a global scale. However, by modeling the distribution of targeted users, we find that a person's demographics, location, email usage patterns, and security posture all significantly influence the likelihood of attack. Our findings represent a first step towards empirically identifying the most at-risk users.