Ride the Lightning

Cybersecurity and Future of Law Practice Blog
by Sharon D. Nelson Esq., President of Sensei Enterprises, Inc.

Can Lawyers Ethically Use AI to Write a Brief?

March 23, 2023

Holland & Knight published the following post on March 21.

“Litigation attorney Nicholas Boyd authored an article for Daily Report exploring the ethics of using artificial intelligence (AI) in legal writing. He claims that even though the results show that AI’s legal writing capabilities have improved, they are not yet on par with those of experienced attorneys. Using AI in legal writing may be a violation of ethical rules, including the duty of competence, the duty of confidentiality and assisting in the unauthorized practice of law. Mr. Boyd recommends that attorneys use AI with caution when working with court or client-facing documents and always exercise competent legal judgment in the best interests of their clients.

READ: Do Professional Ethics Rules Allow You to Have a Robot Write Your Brief?

Subscription Required.”

I wish I could report on the entire article, but I am not a subscriber. It occurred to me, however, that posing a more general ethical question to ChatGPT might elicit some interesting answers. The following exchange took place on March 22, 2023.

What are the ethical mistakes that lawyers are likely to make when they use ChatGPT?

“As a language model, ChatGPT is a powerful tool that lawyers can use to assist them in their work. However, there are several ethical mistakes that lawyers should be aware of when using ChatGPT:

Over-reliance on ChatGPT: Lawyers should not rely solely on ChatGPT for legal advice or decision-making. ChatGPT is a tool that can assist lawyers in their work, but it should not replace their legal judgment.

Failure to disclose the use of ChatGPT: Lawyers should disclose to their clients if they are using ChatGPT in their work. Failure to do so could be seen as a breach of the lawyer’s duty of candor and may undermine the client’s trust in the lawyer.

Lack of transparency: Lawyers should ensure that the output from ChatGPT is transparent and can be verified. The lack of transparency in the output could lead to unethical behavior, such as cherry-picking the output that supports their argument or concealing unfavorable output.

Bias in the training data: ChatGPT is trained on a large dataset of text, which could be biased towards certain perspectives or viewpoints. Lawyers should be aware of this bias and take steps to ensure that the output they receive is unbiased.

Confidentiality and data security: Lawyers should ensure that the data they input into ChatGPT is kept confidential and secure. They should also ensure that the output from ChatGPT is not shared with unauthorized individuals or entities.

Overall, lawyers should use ChatGPT with caution and ensure that they follow ethical guidelines when using the tool.”

Fascinating and instructive . . .

Sharon D. Nelson, Esq., President
Sensei Enterprises, Inc.
3975 University Drive, Suite 225
Fairfax, VA 22030
Email:   Phone: 703-359-0700
Digital Forensics/Cybersecurity/Information Technology
https://senseient.com
https://twitter.com/sharonnelsonesq
https://www.linkedin.com/in/sharondnelson
https://amazon.com/author/sharonnelson