Ride the Lightning
Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.
Artificial Intelligence Litigation: A New Class Action Boom?
May 17, 2023
Law.Com reported (sub.req.) on May 16 that “the proliferation of AI-related litigation has shocked dockets across the country, and the rise of AI class actions is inevitable as consumers and companies alike look to the courts for redress from the potentially unlawful use of their protected data in training generative AI models.”
AI currently operates without a governing regulatory framework. In that vacuum, individuals, consumers and companies are looking to the courts to resolve perceived problems.
No comprehensive federal law or regulation currently governs the use of generative AI technologies, even as ChatGPT, Bing and Bard are everywhere. Challenges with unregulated AI range from questions of ownership of AI-generated works developed from training involving copyrighted material to the more fundamental question of what is true versus what is manufactured to appear as true.
There is no clear guidance yet. What happens when artists’ protected creative works are used without their consent to train an AI model that later produces a “new” artistic work with elements of the original works?
While there are no reported decisions yet, several IP class actions are poised to explore the ramifications of an AI process creating a potentially infringing derivative work. See, e.g., Andersen, et al. v. Stability AI Ltd., et al., 3:23-cv-00201, (N.D. Cal.) (Jan. 13, 2023).
Consumers have noted the gaps in unregulated generative AI, especially concerning data collection and security. OpenAI’s recent data breach is a prime example. On March 24, OpenAI (the company behind ChatGPT) notified the public of a ChatGPT data breach, which impacted consumer personal and sensitive data, including potential exposure of payment information.
OpenAI explained that it took ChatGPT “offline . . . due to a bug in an open-source library which allowed some users to see titles from another active user’s chat history.”
The ChatGPT data breach serves as a caution to all companies that have used third-party generative AI tools in their businesses. Companies should know and control what confidential business information and personal and sensitive data is input into training models, both to deter regulatory scrutiny and to mitigate class action risk.
Typical class-action theories that will be tested in AI-related data breach litigation include claims of negligence, fraud, breach of contract, unfair practices and violation of various state privacy laws.
Concerned companies have anticipated the potential for data breaches and implemented internal policies prohibiting the use of third-party generative AI tools for business purposes. Companies that have not yet adopted such policies should find out what types of data employees are inputting into generative AI tools, and carefully evaluate the risks and benefits of using third-party tools to innovate business processes with generative AI.
Generative AI dangers likely will touch the competitive marketplace, too. While the Federal Trade Commission (FTC) develops its AI regulatory framework, companies will likely lean on Lanham Act-type claims to stop a competitor’s misuse of generative AI.
The FTC has outlined such a roadmap by noting that “companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases.”
Competitors are likely monitoring each other’s use of generative AI as a potential way to redress alleged unlawful conduct. It is also likely that consumers will pursue class actions for the publication of any false or misleading advertisements.
Companies using generative AI should understand exactly how the tool is used by their employees and for what purposes, including whether copyrighted material, confidential business information, or sensitive and personal information is entered into AI systems. Businesses should generally avoid using confidential consumer or confidential business information in these technologies, especially information governed by existing state laws (for consumers) and B2B agreements (for companies).
Businesses need experienced and credible employees who understand the ethical, legal and engineering issues behind AI. Make sure employees monitor and address impacts of AI tools throughout their lifecycle. As the FTC recently noted, given the “many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering. … [T]hese reductions might not be a good look.”
Review and update policies regarding AI. Make sure internal policies clearly outline what data can be used in AI models, and that external policies and terms facilitate informed consumer decision-making.
Be watchful of all vendors using AI tools. Pay attention, in contracts, to liability shifting, breach notification, deletion requests and indemnification provisions. Make sure duties and obligations around the sharing of AI tools are clear.
It is important that consumers know they are communicating with a machine and not a human, and children’s programs should not contain manipulative content.
It’s important to think like a consumer and competitor. Review advertisements on websites, apps and marketing materials through the eyes of prospective customers and competitors. Make disclosures clear and indicate whether any generative AI output was involved in the creation of ads, especially content used to market goods or services to consumers.
The plaintiff’s bar and regulators will be watching. We live, as they say, in “interesting” times.
Sharon D. Nelson, Esq., President, Sensei Enterprises, Inc.
3975 University Drive, Suite 225, Fairfax, VA 22030
Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology