Mitigating Doxing Risks: Strategies to Prevent Online Threats from Translating to Offline Harms

Authors: Michaela Lee and Kenny Chen


Summary 

The Biden-Harris Administration should act to address and minimize the risks of malicious doxing, given the rising frequency with which online harassment translates into offline harm. This proposal recommends four parallel and mutually reinforcing strategies that can improve protections, enforcement, governance, and awareness around the issue.


The growing use of smartphones, social media, and other channels for finding and sharing information about people has made doxing increasingly widespread and dangerous in recent years. A 2020 survey by the Anti-Defamation League found that 44% of Americans reported experiencing online harassment, and 28% reported experiencing severe online harassment, a category that includes doxing as well as sexual harassment, stalking, physical threats, swatting, and sustained harassment. In addition, a series of disturbing events in 2020 suggests that some coordinated doxing efforts have reached a level of sophistication that poses a serious threat to U.S. national security. The pronounced spike in doxing cases against election officials, federal judges, and local government officials underscores the severity and urgency of this issue. Meanwhile, private citizens have faced elevated doxing risks as disruptions from the COVID-19 pandemic and tensions around contentious sociopolitical issues have provoked cycles of online harassment.


While several states have proposed anti-doxing bills over the past year, most states do not offer adequate protections for doxing victims or mechanisms to hold perpetrators accountable. The doxing regulations that do exist are inconsistent across state lines, and partially applicable federal laws, such as the Interstate Communications Statute and the Interstate Stalking Statute, neither fully address the doxing problem nor are they sufficiently enforced. New federal legislation is a crucial step toward ensuring that doxing risks and harms are appropriately addressed, and it must come with complementary governance structures and enforcement capabilities to be effective.




About the Authors

Michaela Lee works at the intersection of emerging technology, policy, and human rights. She is currently pursuing a Master’s in Public Policy degree at the Harvard Kennedy School and is a Research Assistant with the Belfer Center, focused on cyber, emerging tech, and international security. Prior to joining the Kennedy School, Michaela served as a Tech and Human Rights Manager at BSR, where she covered responsible AI, end-to-end encryption, disinformation and hate speech, and platform governance. She was an Assembly Fellow on disinformation with the Berkman Klein Center for Internet & Society at Harvard and a Coro Fellow in Public Affairs.


Kenny Chen is a Master’s student in Public Policy at the Harvard Kennedy School and a researcher at Harvard’s Technology and Public Purpose Project. He draws on a cross-sector, multidisciplinary background to explore concepts of trust and trustworthiness in human-AI systems. Previously, Kenny served as Co-Founder and Executive Director of the Partnership to Advance Responsible Technology (PART) and PGH.AI. He remains an active member of various international communities and initiatives around AI ethics, policy, and governance, including XPRIZE, AI Commons, and the UN’s AI for Good platform.