Creating Transparency and Fairness in Automated Decision Systems for Administrative Agencies
Summary
Artificial intelligence is increasingly being used to make decisions about human welfare. Automated decision systems (ADS) administer U.S. social benefits programs—such as unemployment and disability benefits—across local, state, and federal governments. While ADS have the potential to enable large gains in efficiency, they also run a high risk of reinforcing the class- and race-based inequities of the status quo. Additionally, the use of these systems is often not transparent, leaving individuals with no meaningful recourse after a decision has been made. Individuals may not even know that ADS played a role in the decision-making process.
The Federal Government should take immediate action to promote the transparency and accountability of automated decision systems. Agencies must build internal technical capacity as well as data cultures centered on transparency, accountability, and fairness. The White House should require that agencies using ADS undertake a notice-and-comment process to disclose information about these systems to the public. Finally, in the long term, Congress must pass comprehensive legislation implementing a single national standard that regulates the use of ADS across sectors and use cases.