
Third-party liability and product liability for AI systems – International Association of Privacy Professionals


Artificial intelligence-specific legislative and regulatory trends are uncertain and evolving, making it difficult to predict future oversight requirements with confidence. Even amid this inconsistency and uncertainty, it is clear that vendors of AI-based systems will need to implement stronger controls to manage their own liability exposure, expand their oversight, and plan around the legal trends relating to third-party and product liability for AI systems.

Case law and regulation in the United States

Traditionally, consumer protection law has been favorable to software vendors, limiting their liability to end users. This has been particularly true for third-party vendors, which have managed their liability through the judicious use of warranty disclaimers, contractual limitations of liability and the limited application of negligence law to such vendors.

However, recent U.S. case law signals an erosion of these traditional liability boundaries between vendors of software and their customers.

For example, in Connecticut Fair Housing Center v. CoreLogic Rental Property Solutions, a 2019 case against a third-party vendor of tenant-screening software, the U.S. District Court held that the vendor of the screening software was subject to the same nondiscrimination provisions of the Fair Housing Act as its landlord customers. The software made tenant-screening criteria, including criminal records, available to landlords. This could result in discrimination against those with criminal histories and violate Department of Housing and Urban Development guidance regarding FHA protections.

The court rejected the vendor’s argument that it was precluded from FHA liability because its customers had exclusive control over setting the screening criteria. The court stressed the vendor had a duty not to sell a product that could cause a customer to knowingly or unknowingly violate federal housing law and regulations.

Another noteworthy FHA-related lawsuit is a Department of Justice lawsuit against Meta Platforms for alleged discriminatory advertising for housing, settled in June 2022. The complaint alleged Meta developed algorithms which enabled advertisers to target their housing ads based on protected characteristics under the FHA. As part of the settlement, Meta is required to develop a new ad delivery algorithm that addresses the “racial and other disparities.”

This expansion of vendor liability has not been limited to antidiscrimination laws.


As part of the litigation following a large-scale Marriott data breach, a U.S. District Judge found that Accenture, in its role as Marriott’s information technology service provider, had an independent duty of care to Marriott’s customers to prevent a data breach. Accenture’s intimate involvement in implementing and maintaining compromised security systems as part of a long-standing contractual relationship with Marriott was a significant factor in this ruling.

Going forward, in client engagements where the vendor controls or is closely involved in offering a product or service in a way that gives rise to a duty of care toward third parties, or where immunity from liability would violate public policy, liability may be interpreted more broadly than in historical contexts and may override risk-management clauses in the contract. Certain areas will likely invite more legal scrutiny than others. Courts may be more willing to include vendors within the scope of liability when the context relates to laws that are traditionally interpreted liberally, such as civil rights laws. The absence of language specifically providing for vendor liability will likely not be determinative, especially under a civil rights statute.

Vendors may be subject to legal risks even when exclusively providing predeployment services, such as developing prototype software to be personalized and put into operation after handoff to the business customer. Managing such risks requires additional considerations, such as ensuring the product is designed and accompanied by appropriate documentation, allowing operators to independently debug and validate its output, and being cautious about including configurable options that could enable legal noncompliance.
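As a purely illustrative sketch (not drawn from this article, and using hypothetical function and field names), a vendor might ship a lightweight audit hook alongside a prototype so the operator can independently log, debug and validate model outputs after handoff:

import json
import time
from typing import Any, Callable

def audited_predict(model_fn: Callable[[dict], Any],
                    features: dict,
                    log_path: str = "audit_log.jsonl") -> Any:
    # Run the prediction supplied by the vendor's model function.
    output = model_fn(features)
    # Append a record the operator can use to validate or debug the
    # output independently of the vendor after handoff.
    record = {
        "timestamp": time.time(),
        "inputs": features,
        "output": output,
        "model_version": getattr(model_fn, "version", "unknown"),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, default=str) + "\n")
    return output

# Usage with a stand-in model function:
if __name__ == "__main__":
    def toy_model(features: dict) -> str:
        return "approve" if features.get("score", 0) >= 600 else "review"

    print(audited_predict(toy_model, {"applicant_id": "A-123", "score": 640}))

The point of such a hook is simply that validation and audit data stay in the operator's hands rather than the vendor's; the specific schema would depend on the engagement.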

Traditional product liability

AI vendors should also understand their obligations under traditional product liability law, which, like other torts, is governed by state law. States vary significantly in their approach to product liability. For instance, some, like Florida, allow for all three traditional theories of product liability: strict liability, negligence and breach of warranty. Some states, like Indiana, do not allow actions for breach of implied warranty. And others, like New York, add a “failure to warn” category.

For this reason, there is no consistent set of seminal cases on product liability. That said, some relevant principles have garnered a fair amount of agreement among various state courts.

In the technology sector, there is an ongoing debate and legal controversy around the distinction between “hardware” and “software,” and the evolving issue of whether or when software ought to be considered a “good,” subject to product liability regimes.


However, if a client makes substantial modifications to the software sold by a vendor, the vendor is less likely to be liable under either a failure-to-warn theory or a design-defect theory than it would be if the client had not made changes. There is also a potential defense, the “contract specification defense,” available to vendors if a defect was a result of specifications from the client.

At a minimum, vendors should explicitly document clients’ exact specifications and any subsequent collaborative input into the AI system, and include all relevant contractual controls, disclaimers and specific assignment-of-liability agreements.
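As a hypothetical illustration of that record-keeping (the schema and names below are assumptions, not an industry standard), a vendor could capture each client-supplied specification or change request in a structured, timestamped form:

from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class ClientSpecification:
    # One client-supplied requirement or change request for the AI system.
    client: str
    requested_by: str
    description: str
    rationale: str = ""
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: record a client-selected screening criterion verbatim,
# noting that it departs from the vendor's default configuration.
spec = ClientSpecification(
    client="Example Property Management LLC",
    requested_by="Client compliance team",
    description="Exclude applicants with felony convictions in the last 7 years",
    rationale="Client-selected criterion; vendor default applies no such filter",
)
print(json.dumps(asdict(spec), indent=2))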

Regulatory guidance

The U.S. National Institute of Standards and Technology recently published the AI Risk Management Framework, intended to serve as a voluntary standard for governance of AI systems, with more detailed implementation guidance in the accompanying AI RMF playbook. The playbook specifies that third-party technology risks must be documented and highlights the need for internal controls over those risks, including supplying resources, reviewing third-party materials, and ensuring procurement, security and data privacy controls for all third-party technologies. The playbook also expects organizations to have policies and procedures in place with clear accountability, and explicitly recommends policies and procedures to address risks associated with third-party entities.
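To make that documentation concrete, one possible shape for a third-party technology risk record is sketched below; the vendor name and fields are illustrative assumptions inspired by the playbook's guidance, not a NIST-defined schema.

# Illustrative third-party risk register entry (hypothetical vendor and fields).
third_party_risk_entry = {
    "vendor": "ExampleVision Inc.",
    "component": "face-matching API v2.3",
    "risk_description": "Possible disparate error rates across demographic groups",
    "controls": [
        "Procurement review completed",
        "Security and data privacy assessment on file",
        "Vendor documentation and test results reviewed",
    ],
    "accountable_owner": "AI governance lead",
    "review_cadence_days": 90,
}

for key, value in third_party_risk_entry.items():
    print(f"{key}: {value}")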

Directives and resolutions in the EU

The foundations of EU AI liability law rest on the 20 Oct. 2020 European Parliament resolution with recommendations to the European Commission on a civil liability regime for AI. The resolution sets out the main tenets and guidelines for AI liability within the EU, noting there is no need for a complete revision of the EU liability structure and that the existing Product Liability Directive, along with current tort law, provides sufficient mechanisms to meet most concerns. The Product Liability Directive creates a regime of strict product liability for damages, both economic and noneconomic, arising from defective products.

Nevertheless, the resolution notes a regulation directly addressing AI liability would better accommodate the nuances of high-risk AI systems and describes what such a liability regime would entail. It describes a risk-tiering system wherein high-risk AI systems would be subject to a strict liability regime, while all other AI systems would be governed by a presumption of the AI operator’s fault-based liability. The operator, in turn, can rebut that presumption by proving they abided by their duty of care.


The resolution is clear that operators of high-risk AI systems cannot escape liability by arguing force majeure, that they acted with due diligence, or that the harm or damage was caused by an autonomous activity, device or process driven by their AI system. Instead, they must rely on the development-risk defense, as reflected in the Product Liability Directive, to fend off claims. In contrast, operators of all other AI systems face a presumption of fault that can be rebutted through traditional defenses, i.e., failure to demonstrate a sufficient legal nexus between the harm caused and the AI system in question, and/or adherence to the above duty of care.

Notably, the resolution calls for the prohibition of contractual nonliability clauses between parties, including in business-to-business and business-to-administration relationships. This would prevent parties from contracting out of liability. It also contemplates that vendors will be required to hold insurance to cover subsequent claims.

Summary

Where vendors are involved in developing and deploying software, reducing legal exposure will require more tailored mitigation measures than have often been necessary in other contexts. This expectation is based on the language of the pertinent laws, the political climate surrounding AI and the practicalities of the technology across various industries.

Traditional U.S. approaches to software or product liability may still apply, but will likely be insufficient to address the challenges around high-risk AI systems, particularly in high-impact applications such as finance, health care, housing and education. Courts are increasingly willing to hold vendors to standards similar to those applied to their enterprise customers for harms to end users, and the U.S. Federal Trade Commission has explicitly warned against behaviors, such as exaggerated marketing claims of accurate or unbiased results, that could trigger FTC enforcement as part of its overall vigilant stance toward AI regulation.


