Sen. Gu, Rep. Carson introduce bill to set liability standards for AI
Sen. Victoria Gu and Rep. Lauren H. Carson are
sponsoring legislation to ensure that victims of accidental harm caused by
artificial intelligence systems have legal recourse.
“This bill narrowly focuses on accidental harms from AI: when an AI system misbehaves and does something harmful that the user never intended, who is responsible?” said Senator Gu (D-Dist. 38, Westerly, Charlestown, South Kingstown), who is chairwoman of the newly created Senate Committee on Artificial Intelligence and Emerging Technologies.
“This legislation incentivizes AI developers to take the appropriate precautions to build safe systems by holding them responsible for the accidental harm caused by their products. Setting clear liability standards is an important first step to make sure the growth of AI proceeds in a safe, ethical manner.”
Said Representative Carson (D-Dist. 75, Newport), “Fully
self-driving cars will be on the roads this year in Texas, and more AI systems
are getting integrated into financial systems, health care and more
consequential parts of society. It is important to set a minimum standard of
liability and responsibility now, to make sure Rhode Islanders are protected
down the line.”
Under current law, if a user of an AI system intends to cause harm or could have reasonably anticipated that their actions would cause harm, that user is liable, just as if they had acted directly without the aid of the AI system.
But when the harm results not from the malice or negligence of the user but
from an inadvertent outcome of the programming of the AI system itself,
victims are unlikely to have any recourse against the user, and they will have
difficulty proving negligence and obtaining compensation from the developers
or providers of the AI system.
The legislation (2025-S 0358, 2025-H 5224) would close this gap by holding AI
developers liable for the harm their systems cause to nonusers. The
legislation would not hold developers liable when the harm was caused because
of intentional or negligent behavior of the user of the AI system, nor would it
hold developers liable if they could show that the system satisfied the same
standard of care that applies to humans performing the same task as the AI
system.
“The developers and deployers of powerful but poorly
controlled AI systems are taking risks that innocent people will be harmed, and
current laws don’t always hold them accountable,” said Professor Gabriel Weil
of the Touro University School of Law. “This balanced, innovation-friendly bill
would ensure that victims can seek compensation from those responsible for
creating and providing these AI systems.”