Judges Guide Attorneys on AI Pitfalls With Standing Orders

August 2, 2023

In the rapidly evolving artificial intelligence landscape, judges may be among the first to shape AI’s direction and use in courtroom settings. In recent months, federal judges have begun implementing checks around the use of AI in court filings by issuing standing orders, and some best practices are already emerging.

Prompts submitted to AI tools and the results they generate, though valuable when used responsibly, must be handled in a manner consistent with client confidentiality, and AI-generated content must be verified for accuracy by a human being. Over-reliance on generative AI, as several judges have noted, can lead to adverse outcomes.

Additionally, depending on the jurisdiction, attorneys who use AI must consider whether notice to the court is required and plan accordingly. Finally, attorneys should be prepared for the future issuance of similar orders, rulemaking, and dedicated rules of professional conduct pertaining to AI.

Four such orders have been issued in the last several months. All members of the legal profession should take note, whether or not they appear before these judges, as courts continue to grapple with transparency and accuracy surrounding the use of AI.

Northern District of Texas

Judge Brantley Starr of the US District Court for the Northern District of Texas was the first to issue a standing order on AI. Issued on May 30, the order requires attorneys and pro se litigants appearing before his court to file a certificate declaring whether any portion of their filings will be drafted using generative AI tools.

The standing order cites both the danger of generative AI “mak[ing] stuff up” and its lack of the sense of “duty, honor, or justice” that binds practicing lawyers. Accordingly, attorneys must certify either that none of the content in their filings will be drafted using generative AI or that any generative AI content will be verified for accuracy by a human being.

“These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them,” cautioned Starr.

Eastern District of Pennsylvania

On June 6—just one week after Starr’s order—Judge Michael Baylson of the US District Court for the Eastern District of Pennsylvania issued his own order requiring attorneys and pro se litigants to disclose the use of AI in drafting filings.

However, Baylson’s standing order does not limit its scope to generative AI tools. The breadth of Baylson’s order is noteworthy, as “AI” is an umbrella term describing a variety of advanced technologies capable of simulating certain aspects of human thought. Many legal practitioners have been using machine learning, a technique that falls under this umbrella, for years.

Use cases for such machine learning techniques have included contract analytics, library research tools, and e-discovery methods such as technology-assisted review. Baylson’s standing order is considerably broader than Starr’s and could sweep in any of these technologies that fall under the AI label.

Northern District of Illinois

Two days later, on June 8, Magistrate Judge Gabriel A. Fuentes of the US District Court for the Northern District of Illinois issued a revised standing order for civil cases. This order—more akin to Starr’s than Baylson’s—includes language requiring parties that use generative AI “to conduct legal research or to draft documents for filing with the Court” to disclose the specific tool and the manner in which it is used.

Fuentes reminded practitioners of the applicability of Federal Rule of Civil Procedure 11, emphasizing that this court will “presume that the Rule 11 certification is a representation by filers, as living, breathing, thinking human beings, that they themselves have read and analyzed all cited authorities to ensure that such authorities actually exist and that the filings comply with Rule 11(b)(2).”

US Court of International Trade

On the same day as Fuentes, Judge Stephen Alexander Vaden of the US Court of International Trade issued a related order requiring the identification of any generative AI program used and of all portions of text drafted with the assistance of generative AI.

Vaden also cautioned of the “novel risks” posed by using generative AI, particularly noting that potentially confidential information could be submitted through prompts given to an AI program. In light of these risks, Vaden’s order also requires certification that “the use of such program has not resulted in the disclosure of any confidential or business proprietary information to any unauthorized party.”

Although these four judges are so far the only ones to have issued standing orders limiting the use of some form of AI in court filings, they will likely not be the last. Whether or not more such orders follow, legal practitioners should take note and employ sound practices when using generative AI in legal drafting: verify accuracy, protect client confidentiality, and confirm any disclosure obligations.