AI, Mata v. Avianca, & the future of legal practice
3/27/2024 · 4 min read


Generative artificial intelligence (AI) capabilities are impressing legal practitioners around the world. We explore the current use of AI in law and how it might impact the future of legal practice. Before you ask: this article was written by a real human being.
Before we dive in, if you don't really understand what AI is or how it works, check out our recent post in which we explain, in simple and high-level terms, what it all means.
There's huge hype around the potential transformative and disruptive effects of AI on our personal and professional lives. 2024 is shaping up to be a significant year for AI advancement. The rumour mill spins with suggestions of large language models (LLMs) being integrated into devices like smartphones. By the end of 2024, we may all have an AI-powered device in our pocket.
It's crazy to think that a few years ago, no one really knew what AI was or how it could be used. In fact, while now a household name, OpenAI's ChatGPT was only publicly released 16 months ago.
Liar, liar, briefs on fire
With the rapid pace of technological advancement, it's no wonder some have been caught out. Perhaps the most well-known examples come from the United States, where yet another lawyer is facing possible discipline for citing a non-existent case generated by AI.
Mata v. Avianca, Inc.
One of the most cited instances demonstrating the current limitations of AI occurred in a case before the U.S. District Court for the Southern District of New York, in which the plaintiff sued the airline Avianca for personal injury.
The plaintiff was on an overnight flight in August 2019 from El Salvador to JFK Airport in New York. He alleged he was struck in the left knee by a metal serving cart, resulting in grievous and painful injuries, including damage to his nervous system requiring medical treatment that prevented him from working.
The Court found the plaintiff's claim arose under the Montreal Convention, a multilateral treaty that applies to all international carriage of persons, baggage or cargo performed by an airline. The Convention provides that a carrier is liable for damages if an accident occurs on board its aircraft that results in bodily injury to a passenger. However, the Convention imposes a two-year limitation period for bringing such an action.
And this is where things got interesting.
Understandably, Avianca asked the court to dismiss the case against it as the limitations period had expired. In response, the plaintiff's legal team argued that the claim had been filed in time and provided case citations in support.
Unfortunately, it turned out one of the lawyers for the plaintiff had used ChatGPT, which returned six citations for cases the court later found to be non-existent. In explaining their reasoning to the court, the lawyer said they chose to rely on ChatGPT for legal research because their firm did not maintain Westlaw or LexisNexis accounts.
In its opinion and order on sanctions, the Court set out how the lawyer had apparently used ChatGPT. His first prompt stated, "argue that the statute of limitations is tolled by bankruptcy of defendant pursuant to montreal convention", to which ChatGPT responded with broad descriptions. Clearly wanting something more specific, the lawyer further prompted "provide case law", "show me specific holdings", "show me more cases" and "give me some cases". The chatbot complied by making them up!
The Court confirmed "there is nothing inherently improper about using a reliable artificial intelligence tool for assistance", but noted the rules that impose a gatekeeping function on lawyers to ensure the accuracy of their submissions. Failure to properly verify the accuracy of legal arguments and their bases results in wasted time and money "in exposing the deception".
The key here is reliability. Courts must be able to rely on counsel, whose first duty is one of candour to the court, to act with integrity and due care. Clients need to be able to rely on their lawyers to base their arguments on authentic legal precedent and law. Submitting fake opinions, as the Court in Mata notes, results in "potential harm to the reputation of judges and courts whose names are falsely invoked as authors of the bogus opinions and to the reputation of a party attributed with fictional conduct. It promotes cynicism about the legal profession... [a]nd a future litigant may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity."
Ultimately, the Court dismissed the plaintiff's claim as time-barred and required the lawyers to pay a fine of $5,000 for, effectively, misleading the court.
The future of legal practice
As the New York Times put it, the legal profession is "perhaps most at risk from the recent advances in A.I. because lawyers are essentially word merchants. And new technology can recognize and analyze words and generate text in an instant. It seems ready and able to perform tasks that are the bread and butter of lawyers."
Currently, the most favourable use cases for AI-powered tools appear to be in the automation of tedious, time-consuming or repetitive tasks. The New Zealand Law Society describes some of the possible use cases of AI in legal work as follows:
Undertaking e-discovery;
Analysing contracts;
Generating templates and drafting documentation;
Conducting legal research and summarising large quantities of information;
Predicting case outcomes and analysing risks.
Lawyers who use AI as a complementary tool to automate the repetitive, formulaic and boring aspects of legal work can reduce workload, increase efficiency and free up time to focus on higher-value, more engaging work, and participate in the more human and strategic aspects of legal practice.
As the New Zealand Law Society notes, "many industries have had a glimpse of what AI can offer, and the legal profession is no exception... However, there are also risks and ethical issues that need to be carefully managed." The future of legal practice is most likely to be one of gradual experimentation rather than overnight transformation. Not only is the legal profession traditionally slow to change, but Mata should also serve as a cautionary tale for those who use AI-powered tools: the technology is far from perfect.
We don't think AI poses a massive existential threat to lawyers. It's more likely that, as history demonstrates with the uptake of any new technology, some jobs will change, some jobs will go and some jobs will be created. The most likely existential threat, in our view, will come from lawyers who use AI replacing lawyers who don't.
Disclaimer: This article is for general information purposes only and is not legal advice.
Support us by subscribing to our mailing list.
Enter your email address to be notified about more articles like this.