“ChatGPT, what do you think of Judge Starr’s new order?”

June 2, 2023

A recent Texas Lawbook article considers a new standing order by Hon. Brantley Starr about reliance on language-model AI programs when writing briefs. Judge Starr’s point is well-taken, and Rule 11 should probably be revised to reference this issue specifically, given how pervasive these programs are likely to become in the near future.

I asked ChatGPT what it thought about the matter (specifically: “What are the pros and cons of requiring attorneys to certify to the accuracy of court filings made with the assistance of language-model AI programs, such as yourself?”) and here is what it had to say:

______

Introducing a requirement for attorneys to certify the accuracy of court filings made with the assistance of AI language models like GPT-4 could have several pros and cons. Here are some:

Pros:

  1. Responsibility and Accountability: Requiring certification ensures that attorneys review the generated content thoroughly before submitting it, increasing the reliability and accuracy of the information presented. It also maintains responsibility with the attorney, discouraging the misuse of AI and ensuring ethical conduct.
  2. Increased Accuracy: By requiring attorneys to certify accuracy, any incorrect or misleading information generated by AI can be corrected, minimizing the risk of procedural or factual errors that could potentially affect a case’s outcome.
  3. Improved Understanding: Since an attorney must certify the accuracy of the filing, they’re incentivized to understand the nuances of the AI-generated content better, making sure they can defend it in court.

Cons:

  1. Additional Burden: This requirement could add to the workload of attorneys, who are already often busy. While AI systems can increase efficiency, the certification process may negate some of this advantage.
  2. Potential Liability Issues: There might be concerns about the increased legal liability for attorneys. If an error from an AI system slips through, the attorney could be held responsible, which could lead to professional consequences or disciplinary actions.
  3. Complexity of AI Outputs: AI language models can generate vast amounts of complex content. While they can be extremely useful, it might be challenging for attorneys to verify all details accurately, especially given the complexity of the legal language and the high stakes involved.
  4. Possible Overreliance: There is a risk that attorneys might rely too heavily on AI-generated content and miss important aspects of a case that the AI, which lacks a deep understanding or context, may not catch. While certification could theoretically mitigate this, the added responsibility may be insufficient.

Ultimately, the success of such a certification requirement will depend on the balance between leveraging the benefits of AI assistance and ensuring the maintained quality, accuracy, and ethical use of legal filings. As AI evolves, it’s crucial to continually reassess such policies to ensure they meet the needs and challenges of the legal profession.
