A federal appeals court in New Orleans has proposed a new rule that would regulate the use of artificial intelligence (AI) programs by lawyers in court filings. The 5th U.S. Circuit Court of Appeals is the first federal appeals court to introduce such a rule, which targets the use of generative AI tools like OpenAI’s ChatGPT by lawyers who appear before the court.

What is the New Rule? 

The proposed rule would require lawyers and pro se litigants to certify that they did not use AI programs to draft their briefs or that they checked the accuracy of any text generated by AI in their filings. The rule would also forbid the use of AI programs that aim to deceive or mislead the court or the parties.

The court said that lawyers who misrepresent their compliance with the rule could face sanctions and have their filings rejected. The court is inviting public comments on the proposal until Jan. 4, 2024.

Lyle Cayce, the court’s clerk, said the court recognizes that lawyers and pro se litigants may use AI in the future and wants public feedback on a rule that addresses the use of this technology.

The court’s proposal comes as judges nationwide grapple with the rapid development of generative AI programs like ChatGPT and weigh what safeguards are needed for their use in courtrooms.

The Story So Far

The dangers of using AI without proper supervision were exposed in June, when two New York lawyers were sanctioned for submitting a legal brief that included six fake case citations generated by ChatGPT. The lawyers said they used the AI program as a research tool and did not intend to deceive the court, but admitted that they had not verified the accuracy of the citations.

The proposed rule closely aligns with rules and policies already adopted by other courts. For instance, U.S. District Judge Lee Rosenthal of the Southern District of Texas issued an order in July requiring lawyers to disclose the use of AI in their filings and to ensure that any AI-generated text is accurate and reliable. Judge Rosenthal also warned lawyers that using AI does not exempt them from their ethical and professional duties to the court and the parties.

The use of AI by lawyers has also raised ethical and practical questions about the impact of the technology on the legal profession and the administration of justice. Some legal experts have argued that AI could improve the quality and efficiency of legal services, while others have expressed concerns about the potential for bias, error, and manipulation by AI programs.

Conclusion 

The proposed rule is bound to spark debate among lawyers, judges, and scholars about the benefits and challenges of AI in the legal field. I expect it to serve as a model for other courts as they confront the emerging issues raised by legal professionals using generative AI.


2 Comments

  • Freddie, November 26, 2023 at 3:44 pm

    The court’s emphasis on certifying the non-use of AI in drafting briefs or verifying the accuracy of AI-generated content is a necessary safeguard against potential misrepresentation. I wonder how this certification process might be enforced and monitored effectively.

  • Andy Williams, November 26, 2023 at 4:03 pm

    As this proposed rule sparks discussions across legal circles, it raises broader questions about the future of AI integration in various industries. The ethical and practical concerns regarding AI’s role in the legal profession are pivotal. While AI has the potential to enhance legal services, the risks of bias, errors, and manipulation demand stringent regulations. It’s a delicate balance between leveraging technology for efficiency and upholding ethical responsibilities.

