Overview
The use of artificial intelligence (AI) touches many sectors in the United States, and the business world is no exception. While this exciting development has proven useful for many businesses, it is also a new source of potential liability.
AI enhances products such as self-driving cars and medical devices, but the current legal framework has yet to evolve with this rapidly developing technology. Companies should understand the product liability risks that exist under that framework and implement procedures that reduce their exposure.
Identifying AI in Product Liability
Product liability claims involving AI may arise when sellers, manufacturers, or other parties place a defective product into the chain of distribution. Such defects become evident when AI-driven systems contribute to product failures, provide faulty recommendations, or malfunction in ways that result in injury or financial loss. Common claims include design defects (e.g., flawed algorithms), failure to warn (e.g., inadequate disclosures about limitations), and negligence (e.g., lack of oversight or testing).
While AI adoption is still inconsistent and regulations vary by jurisdiction, businesses should be aware that an AI system can be classified as either a product or a service. The classification depends on various factors, including the laws of the business's principal place of business or of the states where it conducts business. Likewise, businesses that are based abroad or offer services abroad should be aware of the scope and application of international laws, including the EU Artificial Intelligence Act (AI Act), whose first obligations took effect on February 2, 2025. The AI Act covers providers, importers, distributors, manufacturers, and deployers of AI systems, and in some instances, even third-country providers.
Practical Tips to Mitigate Legal Exposure
Although it is impossible to anticipate every legal argument a potential claimant may make, or predict changes within the legal landscape, businesses may employ several best practices to reduce their exposure in the event of a lawsuit.
- Monitor—or work with an attorney to monitor—the ever-changing legal landscape regarding AI to assess the impact of local, state, federal, and/or international laws on your business;
- Implement compliance mechanisms, such as continuous monitoring and auditing of AI performance (see the sketch following this list);
- Include contractual language that can protect your business and help allocate liability, such as contractual warranties, indemnities, and limitations specific to the business service or product using AI;
- Conduct a risk analysis to determine how the use of AI products or services may alter consumer use and overall safety;
- Regularly update warnings or liability disclaimers to educate consumers on the risks of AI;
- Develop a response plan, including investigation protocols and communication strategies;
- Participate in industry groups or other entities that publish guidance on AI or are involved in influencing legislation.
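To illustrate the monitoring-and-auditing tip above, the sketch below shows one way a business might keep an audit trail of AI model outputs and flag drift from a validated baseline for human review. It is a minimal Python example under assumed conditions; the file name, field layout, baseline, and tolerance are hypothetical and would need to be tailored to the actual system, its documented performance targets, and any applicable regulatory requirements.

```python
import csv
import datetime
import statistics

# Hypothetical audit log file; the name and columns are illustrative only.
AUDIT_LOG = "ai_audit_log.csv"


def log_prediction(model_version: str, input_id: str, score: float) -> None:
    """Append each model decision to a timestamped audit trail."""
    with open(AUDIT_LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.utcnow().isoformat(), model_version, input_id, score]
        )


def check_drift(recent_scores: list[float], baseline_mean: float, tolerance: float = 0.1) -> bool:
    """Flag the model for human review if its average output drifts
    beyond a tolerance from a previously validated baseline."""
    if not recent_scores:
        return False
    return abs(statistics.mean(recent_scores) - baseline_mean) > tolerance


if __name__ == "__main__":
    # Record a decision, then run a periodic drift check on recent outputs.
    log_prediction("v1.2", "claim-001", 0.87)
    if check_drift([0.87, 0.91, 0.64], baseline_mean=0.75):
        print("Drift detected: escalate for audit and document the review.")
```

In practice, the escalation step would feed into the response plan described above, so that any flagged deviation is investigated and documented rather than silently ignored.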
Conclusion
AI undoubtedly has the potential to be an incredible asset to many businesses, but it carries risks that the legal world may not yet be equipped to handle. While the suggestions above may not prevent a lawsuit, following them can contribute to the ethical use of AI and ultimately reduce a business's risk.