Who Is Liable When AI Kills?

Wed, 29 Jun 2022 05:00:00 GMT
Scientific American - Technology

We need to change rules and institutions, while still promoting innovation, to protect people from...

Our current liability system, the system that determines responsibility and payment for injuries, is completely unprepared for AI. Liability rules were designed for a time when humans caused the majority of mistakes and injuries.

Bad liability policy will harm patients, consumers and AI developers.

The time to think about liability is now, right as AI becomes ubiquitous yet remains underregulated.

Wider adoption of AI in health care, autonomous vehicles and other industries depends on the framework that determines who, if anyone, ends up liable for an injury caused by an artificial intelligence system.

How do we assign liability when a "black box" algorithm, one in which the identity and weighting of variables change dynamically so that no one knows what goes into the prediction, recommends a treatment that ultimately causes harm, or drives a car recklessly before its human driver can react? Is that really the doctor's or driver's fault? Is it the fault of the company that created the AI? And what accountability should everyone else, from health systems and insurers to manufacturers and regulators, face if they encouraged adoption? These questions are unanswered, yet critical to establishing the responsible use of AI in consumer products.

Granted, if end users misuse an AI system or ignore its warnings, they should be liable.

But despite AI's revolutionary potential across industries, end users will avoid it if they bear sole liability for potentially fatal errors.

The key is to ensure that all stakeholders, from users and developers to everyone else along the chain from product development to use, bear enough liability to keep AI safe and effective, but not so much that they give up on AI altogether. To protect people from faulty AI while still promoting innovation, we propose three ways to revamp traditional liability frameworks.

An independent safety system can give AI stakeholders a predictable liability framework that adjusts to new technologies and methods.

Hampering AI with an outdated liability system would be tragic: self-driving cars alone will bring mobility to many people who currently lack access to transportation.
