Steering Innovation for Autonomous Vehicles Towards Societally Beneficial Outcomes

Authors: Thomas Krendl Gilbert, Cathy Wu, and Michael Dennis


Summary 

Vehicle automation, coupled with the simultaneous mobility revolutions of vehicle electrification and ridesharing, is set to have major impacts on society, perhaps the biggest of any development in transportation since the introduction of the car over 100 years ago. Whether those impacts will be positive, however, remains unknown. For example, widespread deployment of autonomous vehicles (AVs) could slash U.S. energy consumption by as much as 40% through improved driving efficiency; alternatively, it could double U.S. energy consumption by making cheap transport more widely available. Similar uncertainty surrounds the potential impacts of AVs on physical safety, transportation access for disabled communities, overall traffic efficiency, and long-term greenhouse-gas emissions. Guiding the evolution of AVs towards the future we want requires evaluating them against metrics that prioritize societally beneficial outcomes. The Biden-Harris administration should create an Evaluation Innovation Engine at the Department of Transportation (DOT) to propose, refine, and standardize public-interest metrics for AVs.


The Evaluation Innovation Engine (EIE) would do for AV metrics what the Defense Advanced Research Projects Agency (DARPA) Grand Challenge did for AV development: ignite productive competition among companies to achieve state-of-the-art performance. The EIE should have two main tasks: (1) convening stakeholders to discuss potential metrics and providing opportunities for public comment on how proposed metrics should be prioritized, and (2) administering annual funding rounds of ~$72 million each for private firms and other entities to create, test, and optimize algorithms for publicly beneficial AV outcomes. The EIE should be overseen by the Secretary of Transportation and staffed by representatives from pertinent DOT offices (Office of Civil Rights, Office of Small and Disadvantaged Business Utilization, Office of Public Affairs) and administrations (National Highway Traffic Safety Administration (NHTSA), Federal Highway Administration (FHWA), Federal Motor Carrier Safety Administration (FMCSA), Federal Transit Administration (FTA)), as well as a broad coalition of civil-society advocates.


About the Authors



Thomas Krendl Gilbert is an interdisciplinary Ph.D. candidate in Machine Ethics and Epistemology at UC Berkeley. He researches the predicaments that emerge when artificial intelligence reshapes the context of organizational decision-making. His work investigates how specific algorithmic learning procedures reframe classical ethical questions and recall the foundations of democratic political philosophy, namely the significance of popular sovereignty for resolving ambiguities in norms. This work has concrete implications for the design of automated vehicle systems that are fair for distinct subpopulations, safe when enmeshed with municipal practices, and accountable to public concerns.


Cathy Wu is an Assistant Professor at MIT in the Laboratory for Information and Decision Systems. She holds a Ph.D. from UC Berkeley, as well as a B.S. and M.Eng. in Electrical Engineering and Computer Science from MIT, and completed a postdoc at Microsoft Research. Cathy's interests are broadly in machine learning and mobility. She studies the technical challenges surrounding the integration of autonomy into societal systems. Her work has been recognized with several awards, including the 2019 Institute of Electrical and Electronics Engineers (IEEE) Intelligent Transportation Systems Society (ITSS) Best Ph.D. Dissertation Award, the 2016 IEEE ITSC Best Paper Award, and fellowships from the National Science Foundation (NSF), the UC Berkeley Chancellor's Fellowship program, the National Defense Science and Engineering Graduate (NDSEG) Fellowship Program, and the Dwight David Eisenhower Transportation Fellowship Program. Her work has appeared in the press, including Wired and Science magazines.


Michael Dennis is a Ph.D. candidate at the Center for Human-Compatible AI at UC Berkeley. He studies how AI ought to make decisions, both in isolation and when interacting with other AI systems or human agents. His work has implications for AI safety, robustness in reinforcement learning, and multi-agent systems. He believes that strong normative theories of decision-making are critical for understanding how AI ought to be designed, as well as for making interactions between AI systems and society at large more robustly beneficial.