In our latest guest post, Elizabeth Chan (Tanner De Witt), Kiran Nasir Gore (Kiran N Gore PLLC), and Eliza Jiang (Lawdify AI) explore the latest developments in the evolving world of Generative AI, with a focus on the need for pragmatic regulation.
AI litigation continues to develop and evolve, and it is becoming increasingly common. But what are the latest developments, and how can Generative AI in litigation be regulated? Elizabeth Chan, Kiran Nasir Gore, and Eliza Jiang investigate in this post.
Over the past year, the power of generative artificial intelligence (Generative AI) has taken the world by storm. Today, nearly every digital tool and platform is advertising a new Generative AI feature to help users to better organise their tasks, streamline and process information, conduct research and learn more about various topics, and write more efficiently and effectively.
Legal processes are not immune from these developments, and a global debate has emerged on whether and what role Generative AI-powered tools should play in the legal work performed by dispute resolution specialists.
As this blog post demonstrates, the devil lies in the details. While Generative AI-powered tools can make litigation and arbitration teams more efficient and effective, regulations, such as disclosure or certification requirements, can help (or hinder!) the ethical, fair, and responsible use of these tools and a level playing field for all parties participating in these proceedings. This post explores these latest developments.
Using Generative AI-powered tools in the work of dispute resolution specialists presents many challenges and risks.
These tools can be opaque, and it may be challenging for users to understand precisely what they do, how they work, and what happens to the information and data users input. These circumstances create the potential for severe consequences for misinformed or underinformed users, including professional conduct violations or breaches of confidentiality and/or attorney-client privilege.
Even more, where disputes, such as international arbitration cases, involve cross-border elements, the laws and regulations of multiple jurisdictions may apply. Indeed, in the multi-jurisdictional context, it may be even more urgent to either harmonise or regulate standards of use for Generative AI-powered tools to help ensure procedural fairness.
The BCLP 2023 survey of 221 arbitration professionals revealed that a significant majority (63%) support regulating disputing parties’ use of Generative AI-powered tools in international arbitration proceedings. This consensus reflects a widespread perception that leaving these tools unregulated carries real risks.
This is underscored when one considers the importance of the documents that international arbitration practitioners may work on, including legal submissions, expert reports, and arbitral awards – each of which must be precise, accurate, and coherent. However, while baseline regulation itself is an important first step to engaging with this technology, it is equally vital that the developed regulatory framework is adaptable and forward-looking.
The Silicon Valley Arbitration and Mediation Center’s (SVAMC) Draft Guidelines on the Use of AI in Arbitration (Draft Guidelines) stand out as the only cross-institutional guidelines (to date) tailored explicitly for international arbitration contexts.
The SVAMC Draft Guidelines were prepared with contributions from a committee (including Elizabeth, a co-author of this blog post) and propose a nuanced approach to the disclosure of when AI has assisted in preparing legal work product.
It is important to note that the SVAMC Draft Guidelines define “AI” broadly. While their immediate focus is on the Generative AI-powered tools that are also the focus of this blog post, the Draft Guidelines refer to “AI” generally and aim to go even further, in the hope of remaining evergreen and thereby capturing AI-based technologies and tools that have not yet been developed.
The SVAMC Draft Guidelines recognise that the need for disclosure may vary, suggesting that, in some instances, the AI technology being used may be straightforward and uncontroversial (e.g., technology-assisted document review (TAR)), thus not requiring explicit disclosure.
However, the Draft Guidelines also allow for the possibility that arbitral tribunals, parties, or administering institutions might demand disclosure of the use of Generative AI-powered tools, especially when such use could significantly influence the integrity of the arbitration proceedings or the evidence presented within it.
The AAA-ICDR Principles for AI in ADR (AAA-ICDR Principles) and the MIT Task Force on the Responsible Use of AI in Law (MIT Principles) provide additional sets of guidelines and principles on the use of AI in legal practice. The AAA-ICDR Principles emphasise that AI should be used in alternative dispute resolution (ADR) cases, including arbitrations, in a manner that upholds the profession’s integrity, competence, and confidentiality. They do not specifically address disclosure requirements.
Meanwhile, the MIT Principles, which are applicable more broadly within legal contexts, highlight the importance of ethical standards, including confidentiality, fiduciary care, and the necessity for client notice and consent, indirectly suggesting a framework where disclosure of AI use might be required under certain conditions to maintain transparency and trust.
These various guidelines and principles collectively underscore the evolving landscape of AI in legal practice and emphasise the need for careful consideration of when and how AI-powered assistance should be disclosed. These guidelines and principles also share the core tenet that the integrity of legal work and fairness in the dispute resolution process must be upheld.
Different jurisdictions are approaching the need to disclose the use of AI assistance in preparing legal work products differently, and a spectrum of regulatory philosophies and practical considerations is emerging.
For example, in the United States, a Texas federal judge has adopted a judge-specific requirement that attorneys not only certify that any court filings drafted with the assistance of a Generative AI-powered tool have been verified for accuracy by a human, but also accept full responsibility for any sanction or discipline that may result from improper submissions to the court.
This approach demonstrates a policy-based choice. The objective is not to prevent the use of Generative AI-powered tools in litigation practice but rather to allocate risk, maintain the integrity of the materials put before the court, and ensure that attorneys remain ultimately responsible for those materials.
Interestingly, the template certification provided by the judge does not necessarily require an attorney to disclose whether they have used Generative AI-powered tools to prepare their legal submissions; it requires only that, where such tools were used, a human attorney has verified the submission and the attorney takes full responsibility for its contents.
As such, the certification requirement is not very different from the already existing obligation on the attorney of record to diligently oversee that all submissions presented to the court are of the appropriate quality.
Meanwhile, the Court of King’s Bench in Manitoba, Canada, has adopted a more prescriptive disclosure practice, mandating that legal submissions presented to the court disclose whether and how AI was used in their preparation. However, it does not mandate disclosure of the use of AI to generate work products often used to analyse cases, such as chronologies, lists of issues, and dramatis personae, upon which legal submissions may rely.
On the other hand, New Zealand and Dubai represent contrasting models of disclosure obligations. New Zealand’s guidelines for lawyers do not necessitate upfront disclosure of AI use in legal work. Rather, they focus on the lawyer’s responsibility to ensure accuracy and ethical compliance, and disclosure of specific use of AI-powered tools is required only upon direct inquiry by the court. This approach prioritises the self-regulation of legal practitioners while maintaining flexibility in how AI-powered tools are integrated into legal practice.
In contrast, the Dubai International Financial Centre (DIFC) Courts recommend early disclosure of AI-generated content to both the court and opposing parties. Such proactive disclosure is viewed, in that context, as essential for effective case management and upholding the integrity of the judicial process.
On the other side of the bench, some jurisdictions have unveiled guidelines for using Generative AI-powered tools by courts and tribunals. New Zealand and the UK now provide frameworks for judges and judicial officers. These guidelines emphasise the importance of understanding Generative AI’s capabilities and limitations, upholding confidentiality, and verifying the accuracy of AI-generated information. In principle, neither jurisdiction’s guidelines require judges to disclose the use of AI in preparatory work for a judgment.
The drafting of legal submissions and arbitral awards is not the only area where AI-powered tools may be integrated into an international disputes practice. AI-powered tools may also play a role in identifying and shortlisting arbitrators. This application carries potential implications for diversity and fairness in arbitrator selection.
Typically, neither parties nor institutions must disclose their reasons for appointing particular arbitrators or the process they undertook to shortlist candidates. However, disclosure may be relevant where AI-powered tools are used to identify and potentially select arbitrators, given the biases and risks inherent in AI training tools and datasets.
Indeed, there are relevant parallels between the arbitrator selection process and general recruitment processes, as both involve evaluating and selecting candidates for specific roles. Legislative steps, such as New York City Local Law 144 (New York Law 144), regulate the use of AI-powered tools in recruitment, highlighting the importance of transparency and accountability in AI-assisted candidate selection processes.
New York Law 144 requires Automated Employment Decision Tools (AEDT) to undergo annual bias audits to ensure fairness and transparency. Similarly, the European Union’s concerns, as expressed by the Permanent Representatives Committee, underscore the need for careful regulation of AI in selection processes to protect individuals’ career prospects.
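For context, the bias audits contemplated by New York Law 144 centre on comparing how an automated tool treats different demographic categories, typically by calculating selection rates and impact ratios. The sketch below is a minimal illustration of that kind of arithmetic only; the figures, category names, and the 0.8 reference threshold (the common “four-fifths rule”) are hypothetical assumptions for demonstration, not the full statutory methodology.

```python
# Illustrative sketch only: a simplified impact-ratio calculation of the kind
# used in AEDT bias audits. The data and the 0.8 reference threshold are
# hypothetical assumptions for demonstration purposes.

# Hypothetical counts of candidates assessed and selected by an automated tool
results = {
    "group_a": {"assessed": 200, "selected": 60},
    "group_b": {"assessed": 180, "selected": 36},
    "group_c": {"assessed": 150, "selected": 45},
}

# Selection rate per category = selected / assessed
selection_rates = {
    group: counts["selected"] / counts["assessed"]
    for group, counts in results.items()
}

# Impact ratio = a category's selection rate / the highest selection rate observed
highest_rate = max(selection_rates.values())
impact_ratios = {group: rate / highest_rate for group, rate in selection_rates.items()}

for group, ratio in impact_ratios.items():
    flag = "review for potential disparate impact" if ratio < 0.8 else "within reference threshold"
    print(f"{group}: selection rate {selection_rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```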
While audits of AI databases and algorithms can help identify and rectify any inadvertent biases, completely eliminating diversity-related biases remains a significant challenge. For instance, Google’s recent efforts to subvert racial and gender stereotypes in its Gemini bot encountered backlash, illustrating the complexity of addressing biases without introducing new issues.
Integrating Generative AI-powered tools into the work of litigation and arbitration teams has prompted new conversations on regulatory measures, including disclosure and certification requirements, to ensure their ethical and fair application.
The SVAMC Draft Guidelines, the AAA-ICDR Principles, and the MIT Principles each present exemplar frameworks for the responsible use of Generative AI-powered tools, emphasising transparency, accountability, and ethical standards. Moreover, various jurisdictions have adopted different approaches to disclosure or certification requirements, thereby demonstrating a range of policy-driven priorities.
However, collectively, these recent developments signal a critical juncture in the legal profession’s engagement with Generative AI, stressing the need for adaptable, forward-looking regulatory frameworks that uphold the integrity and fairness of legal processes.
For additional information about the use of AI in the legal profession, including AI litigation and Generative AI in dispute resolution, contact TrialView to learn about our award-winning AI-powered platform.
Trusted by law firms around the world, our Open AI offering enables you to ask questions, build timelines, detect patterns, and make connections at speed, so you can compare statements, depositions, and case documents, as well as uncover inconsistencies.
Plus, with eBundling, case preparation, and hearing services, TrialView empowers you to work with speed and efficiency so you can work smarter, not harder. If you would like to learn more about how litigation AI can support you, reach out to info@trialview.com or book a tailored demo to see TrialView in action.
*The opinions and insights presented in this post solely represent the authors’ views. They are not endorsed by or reflective of the policies or positions of their affiliated firms or organisations.
If you’re wondering about how we can help you focus more on outcomes, and worry less about hearing prep, book a tailored demo or give us a call.
Want to find out more? Get in touch to find out why TrialView is the platform of choice for dispute resolution.