The rapid growth and popularity of Generative Artificial Intelligence has made its way into courtrooms, as both self-represented litigants and lawyers begin relying on the technology to prepare their submissions and evidence. This has raised concerns about the accuracy and quality of submissions presented to the court.
For example, in a defamation case, Dr Natasha Lakaev cited 'Hewitt v Omari [2015] NSWCA 175', but no such case exists. In his judgment, Justice Blow observed that the error was likely the product of Generative AI, noting that "when artificial intelligence is used to generate submissions for use in court proceedings, there is a risk that the submissions that are produced will be affected by a phenomenon known as ‘hallucination’". These 'hallucinations' occur when Generative AI perceives patterns or objects that do not exist, producing outputs that are inaccurate or nonsensical. In another case, a legal agent at the Queensland Civil and Administrative Tribunal submitted 600 pages of "rambling, repetitive, nonsensical" material suspected to have been written by AI.
Given the lack of guidelines and rules governing the use of Generative AI, "Judges of the Federal Court have been considering and discussing the development of either Guidelines or a Practice Note in relation to the use of Generative Artificial Intelligence" (Media statement from the Chief Justice of the Federal Court, March 2025). This reflects the law's attempt to balance the interests of the administration of justice with fairness and efficiency for each individual who comes before the court. The Chief Justice has stated that the court's AI Project Group will seek submissions from legal professionals, self-represented litigants, and the public from mid-June 2025.
Submissions can be emailed to AI_Consultation@fedcourt.gov.au until June 13, 2025. The public consultation aims to inform the Federal Court's approach to regulating AI in legal proceedings, balancing efficiency against accuracy and ethical considerations. Recent incidents in both the US and Australia involving fabricated AI-generated legal material underscore the urgency of establishing clear guidelines for AI use in the legal profession.