In an ever-changing world, the use of AI is having a significant impact on our everyday lives. The hot topic on the agenda today is:
Should the paths of legal proceedings and AI use be allowed to cross?
Today, more and more lawyers are relying on artificial intelligence to prepare their submissions and evidence for court. But this tool has raised concerns among the courts, owing to issues with the quality and accuracy of AI-generated submissions. This was clearly demonstrated in the defamation case cited by Dr Natasha Lakaev, Hewitt v Omari (2015).
In reality, this case did not exist. In his judgment, Justice Blow stated that the error was most likely caused by generative AI. Such mistakes arise from a phenomenon called ‘hallucinations’: generative AI perceives patterns or objects that do not exist, and therefore produces outputs that are inaccurate or nonsensical.
This is not the only case in which AI has been misused. In another matter, a legal agent at the Queensland Administrative Tribunal submitted 600 pages of repetitive, rambling, and nonsensical material, which the tribunal suspected had been produced with artificial intelligence.
With few laws or guidelines governing the use of generative AI in legal proceedings, the issue has sparked debate and discussion across court systems.
If you or someone you know wishes to discuss this issue further, then please do not hesitate to contact us on 02 8999 9809.