Publication Year: 2023

First, Do No Harm: Algorithms, AI, and Digital Product Liability

Citation:

Pfeiffer, M. H. (September 2023). First, Do No Harm: Algorithms, AI, and Digital Product Liability. Bloustein Local, a unit of the Center for Urban Policy Research, Bloustein School of Planning and Public Policy, Rutgers University.

The ethical imperative for technology should be “first, do no harm.” Yet digital innovations like AI and social media increasingly enable societal harms, from bias to misinformation. As these technologies become ubiquitous, we need solutions that address their unintended consequences.

This report proposes a model for incentivizing developers to prevent foreseeable algorithmic harms by expanding negligence and product liability law. To protect themselves and their investors, developers of digital products would have to mitigate potential algorithmic risks before deployment. Standards and penalties would be set in proportion to the harm, and insurers would require harm mitigation during development as a condition of coverage.

This shifts tech ethics from “move fast and break things” to “first, do no harm.” The details would need careful refinement among stakeholders to enact reasonable guardrails without stifling innovation, and both policy and harm-prevention frameworks would likely evolve over time.

Similar accountability schemes have helped improve workplace, environmental, and product safety. Introducing negligence liability for algorithmic harm would acknowledge the real societal costs of unethical technology.

The timing is right for reform. This proposal offers a model for steering the digital revolution toward human rights and dignity. Harm prevention must be prioritized over reckless growth, and vigorous liability policies are essential to stop technologists from breaking things.

Additional Topics
AI