INSIGHTS: The case against ChatGPT: Warnings against an AI-generated testimony

May 1, 2024

Authors

Nevena Brown, Principal
Joshua Jones, Paralegal

The use of Artificial Intelligence (AI) is rising rapidly across the legal profession. While AI offers savings in time and costs, it also has the potential to adversely impact legal proceedings.

In this Insight we explore the potential impacts of AI on the trial process and on case law research. We also highlight the need for legal representatives to exercise greater caution when using any AI-related tools.

AI and the trial process

Judges are becoming increasingly wary of AI’s potential to undermine the integrity of the trial process.

AI tools such as ChatGPT can generate fluent, persuasive text on demand. In an imagined scenario, an accused person facing criminal charges who has access to such tools could use them to draft their own defence, unconstrained by the ethical obligations that bind legal practitioners to the court.

Chatbots and AI tools have already been detected in the sworn testimonies of witnesses, in legal documents, and even in character references tendered in mitigation of sentence. As a result, defence counsel must now consider whether they are leading the court into error by tendering a witness statement that may have been generated by AI.

Judge warns against AI-generated testimony

In DPP v Khan[1], Mossop J held at [43] that any testimony or character reference used in court proceedings that was likely written with the assistance of an AI chatbot must be given so little probative value that it cannot influence the trial.

His Honour opined that while AI is difficult to detect in written statements, counsel must, at the very least, make enquiries to understand how a statement was prepared if it appears to have been written by, or with the assistance of, AI chatbots[2]:

“In my view, counsel appearing on a sentence should make appropriate enquiries and be in a position to inform the court as to whether or not any reference that is being tendered has been written or rewritten with the assistance of a large language model.”

The obligation also extends to advising the Court of whether the statement was written with the assistance of an automated translation program.

AI and case law research

AI's impact is not limited to the trial process. Professor Lyria Bennett Moses of UNSW Law & Justice argues that legal practitioners should also exercise caution when using AI to conduct case law research[3]. AI systems can summarise case law at lightning speed, but they are less reliable at applying novel facts to case law, one of the key tools in a practitioner's legal arsenal. Moreover, AI systems are built on quantitative data and predict outcomes according to what is essentially mathematical logic, not the normative standards the law requires, such as considerations of reasonableness, fairness, and justice.

However, as articulated by the then Chief Justice of the Federal Court of Australia, Allsop CJ[4]:

“Artificial intelligence and its effect on Courts, the profession and the law will change the landscape in ways we cannot predict.”

Nonetheless, lawyers are advised to ensure that both they and their clients exercise great caution when using AI systems in any legal proceedings. A legal representative's primary duty is to the court or tribunal, and it includes the duties not to mislead the court and to act with honesty.

This Insight was written by Principal Nevena Brown and Paralegal Joshua Jones. For further advice, please contact Nevena.

Disclaimer: This information is current as of May 2024. This article does not constitute legal advice and does not give rise to any solicitor/client relationship between Meridian Lawyers and the reader. Professional legal advice should be sought before acting or relying upon the content of this article.