SVAMC Guidelines on the use of AI in Arbitration

The Silicon Valley Arbitration and Mediation Center is working towards publishing Guidelines on the use of AI in international arbitration. George Agnew provides an overview below.

Generative AI continues to gain traction in the legal industry, including in arbitration.

Despite the numerous benefits of arbitration software and AI, the rise of artificial intelligence brings challenges as well as opportunities, which is why the Silicon Valley Arbitration and Mediation Center (SVAMC) is planning to publish guidelines on the use of AI in international arbitration.

The guidelines seek to establish a set of general principles for the use of AI in arbitration, and are intended to guide rather than dictate. They are not intended to replace or override local AI laws or regulations.

An overview of the SVAMC Guidelines

The SVAMC Guidelines are split into three chapters:

1) Guidelines applicable to all participants in international arbitration
2) Guidelines for parties and party representatives
3) Guidelines for arbitrators

Chapter 1: Guidelines applicable to all participants in international arbitration

Guideline 1: Understanding the uses, limitations and risks of AI applications

Participants should review an AI tool’s terms of use and data handling policies to understand whether the tool’s treatment of data is consistent with applicable confidentiality, privacy or data security obligations.

Participants should make reasonable efforts to understand the functionality, limitations and risks of the AI tools used in preparation for or during the course of an arbitration proceeding. This includes the following:

  • “Black-box” problem: Text produced by Generative AI is a product of complex probabilistic calculations rather than intelligible “reasoning”, and AI tools generally cannot explain how their algorithms arrive at a given output. Where possible, participants should therefore use AI tools and applications that allow them to understand how a particular output was generated (“Explainable AI”).
  • Limited specialisation: AI tools may not be well-suited for tasks requiring specialised knowledge or case-specific information unless they are fine-tuned or provided with more relevant data.
  • Errors or “hallucinations”: These occur when the AI lacks the information needed to provide an accurate response to a particular query. Errors can be reduced through “prompt engineering” and “retrieval-augmented generation” (a minimal sketch of the latter follows this list).
  • Augmentation of biases: Biases may occur when the underrepresentation of certain groups of individuals is carried over into the training data used by the AI tool to make selections or assessments. Participants are urged to exercise extreme caution when using AI tools for such selection or assessment tasks.
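By way of a hedged illustration of that last pair of points, the sketch below shows the basic retrieval-augmented generation pattern: retrieve relevant passages from the case record first, then instruct the model to answer only from that material. The case file, the keyword scoring and the llm_complete() function are all invented for this example and stand in for the vector search and hosted language model a real tool would use.

```python
# Minimal sketch of retrieval-augmented generation (RAG). The case file,
# the keyword scoring and llm_complete() are invented for illustration;
# real tools use vector search and a hosted language model.

CASE_FILE = {
    "exhibit_c12": "The supply agreement was terminated on 3 March 2021.",
    "witness_stmt_2": "Ms Alvarez states deliveries stopped in February 2021.",
    "tribunal_po1": "Procedural Order No. 1 fixes the hearing for June 2024.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        CASE_FILE.values(),
        key=lambda text: len(q & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def llm_complete(prompt: str) -> str:
    """Stand-in for a call to whatever language model the tool uses."""
    raise NotImplementedError("plug in an LLM provider here")

def answer(query: str) -> str:
    # Ground the model in retrieved record material instead of letting it
    # answer from memory, which is where hallucinations tend to arise.
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer using ONLY the context below; reply 'not in record' if the "
        f"context is insufficient.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return llm_complete(prompt)

print(retrieve("When was the supply agreement terminated?"))
```

Constraining the model to verifiable record material is also what makes its answers checkable by the parties, which ties into the disclosure and due-process points below.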

Compliant: Using AI to conduct research on potential arbitrators or experts for a case.
Non-compliant: Using AI to select arbitrators or experts for a case without human input.

Guideline 2: Safeguarding confidentiality

Participants need to ensure that their use of AI tools is consistent with obligations to safeguard confidential information. Confidential information should not be submitted to any AI tool without appropriate vetting and authorisation.

Participants should review the data use and retention policies of the relevant AI tools.

Compliant: Using AI for routine non-confidential tasks, e.g. meeting scheduling, or to research/summarise legal authorities in a third-party database.
Non-compliant: Submitting confidential information to a third-party AI tool without the vetting and authorisation described above.

Guideline 3: Disclosure and protection of records

Some uses of AI by parties, experts, and arbitrators may be uncontroversial and would not ordinarily warrant disclosure. However, there are certain circumstances where disclosing the use of AI tools may be warranted to preserve the integrity of the proceedings or the evidence.

A party seeking disclosure from another party should explain both why it believes that an AI tool was relied upon in the proceedings and how that use would materially impact the proceedings and/or their outcome.

It is ultimately up to the parties and/or tribunal to specify the level of disclosure they want to institute for the proceedings.

Compliant: Using AI to generate document summaries for internal use or to identify and select the documents relevant and responsive to document production requests.
Non-compliant: Using AI to calculate damages without disclosing it, or an arbitrator using AI to compare the persuasiveness of the parties’ submissions without disclosing it.

Chapter 2: Guidelines for parties and party representatives

Guideline 4: Duty of competence or diligence in the use of AI

Parties and party representatives on record shall be deemed responsible for any uncorrected errors or inaccuracies in any output produced by an AI tool they use in an arbitration.

Compliant: Using AI to assist with drafting language for pleadings/written submissions, to assist in preparation for cross-examination, or to find inconsistencies in witness statements.

Guideline 5: Respect for the integrity of the proceedings and the evidence

Parties, party representatives, and experts shall not use any form of AI to falsify evidence, compromise the authenticity of evidence or otherwise mislead the arbitral tribunal and/or opposing party or parties.

Advancements in Generative AI and deep fakes can heighten the risks of manipulated or false evidence and can make it more costly or difficult to detect any such manipulation through forensic and other means.

Compliant: Using AI to produce demonstratives where the accuracy of the representation can be challenged by the opposing party by accessing the referenced source data.

Chapter 3: Guidelines for arbitrators

Guideline 6: Non-delegation of decision-making responsibilities

An arbitrator shall not delegate any part of their personal mandate to any AI tool.

This Guideline does not forbid the use of AI tools by arbitrators as an aid to discharge their duty to personally analyse the facts, arguments, evidence and the law and issue a reasoned decision.

If an arbitrator uses a Generative AI tool to assist in the analysis of the arguments or the drafting of a decision or award, the arbitrator must not reproduce the AI’s output without making sure it adequately reflects the arbitrator’s personal and independent analysis of the issues and evidence at hand.

Compliant: Using AI to provide accurate summaries and citations to create a first draft of the procedural history of a case or generate timelines of key facts.

Guideline 7: Respect for due process

An arbitrator shall not rely on AI-generated information outside the record without making appropriate disclosure to the parties and allowing the parties to comment on it.

Where an AI tool cannot cite sources that can be independently verified, an arbitrator shall not assume that such sources exist or are characterised accurately by the AI tool.

Compliant: Using AI to distil or simplify technical concepts to come up with accurate and relevant questions for the hearing.
Non-compliant: Using AI to conduct independent research into the substance of the dispute and basing a decision on such AI-generated outputs without disclosing it to the parties.

Need further advice about the use of AI in arbitration?

Although there is much to consider when it comes to the use of AI in arbitration, there are numerous benefits to using AI tools and there is no denying that it will have an increasingly important role in the future.

TrialView’s arbitration software is leveraged by leading law firms and arbitrators for large-scale international arbitrations. It is trusted by the ICC, IAC, and other leading arbitral bodies and venues, and can have a significant impact on efficiency.

The software enables you to manage documents, conduct remote hearings, integrate transcription, and present evidence – all within one centralised workspace, so you can work smarter, not harder.

If you’d like to unlock the power of AI technologies, learn more about the benefits of AI in arbitration or are interested in finding out more about arbitration software, you can book a tailored demo today.

Alternatively, contact our team to learn more, or read our case studies to see our AI tools in action and learn why we are the platform of choice for lawyers, counsels, judges, and arbitrators around the world.

AI, The Opportunities and Threats for Disputes Practitioners

Join us for an online discussion on AI, The Opportunities and Threats for Disputes Practitioners, on the 3rd November at 12pm.

Delving into the discussion in further detail, the panel will evaluate the current AI offering for disputes lawyers, considering areas where AI can be used to value engineer the resolution of complex disputes as part of the professional offering.

The panel will consider the risks and threats of AI, including the risks of breach of duty and adverse costs due to misapplication of AI tools.

Find out more here.

Free to attend: click here to register.

Speakers include Luke Tucker Harrison, David Blayney, Alex Akin, Eimear McCann and Stephen Dowling.

LawTech UK, Manchester

TrialView is delighted to take part in an upcoming LawTech UK event, looking at the latest trends and challenges of Manchester’s legal tech ecosystem.

This event is free, and open to all.

Event details

When – Wednesday 1 November 2023

Timings – In-person registration 11:00am-11:30am. Main event from 11:30am-1:30pm. Lunch and networking until 2:30pm

Where – Greater Manchester Digital Security Hub (DiSH) – 47 Lloyd St, Manchester M2 5LE

Dress code – No dress code, come as you feel comfortable

Hybrid – Online attendance is available, but in-person is preferred

Sign up for free here.

AI: Addressing the challenge of Access to Justice

Savannah Seymour looks at the potential of AI in litigation beyond the law firm lens, exploring how it can help close the gap between access and justice.

Inaccessibility of legal services is a deepening societal challenge due to escalating costs and increasingly complex legal issues.

This means that access to justice for the majority is becoming, if not already, unattainable. The issue is far more acute for the most vulnerable individuals in society. In this sense, those who deserve and require the strongest protections for access to justice are at the greatest risk of being excluded.

Can AI address the challenge of access to justice?

What can technology, and specifically AI, do about this? The topic of AI in litigation is continuing to gain traction, and there is no denying that it will have an increasingly important role in the coming years.

Traditionally, the legal industry has been understood as a high-cost service industry. This is because legal knowledge, contextual analysis and the application of legal principles are the key elements in delivering bespoke solutions to complicated, nuanced issues.

In this sense, you would expect (and hope) that innovative technology would drive down the costs of such services, especially as the use of AI in litigation has been shown to increase efficiency and save legal professionals significant time and money.

However, in some cases, we have seen the opposite effect: the introduction of new technologies has been seen to drive up legal costs due to the specialised labour required to leverage these cutting-edge solutions effectively.

New Judicial Guidance on the use of AI in litigation has been released to help mitigate such risks and issues. The full guidance can be accessed here. However, to understand whether AI will help or hinder access to justice, it’s important to explore whether we can reasonably expect these costs to come down.

AI is quickly becoming widely used across various industries and societal functions. Developments in machine learning and natural language processing are already showing promise in solving complex issues using context-based reasoning.

To this effect, these solutions are entering the sphere of complex tasks previously capable of being tackled and solved only by humans. That is not to say this technology is replacing human work product, but rather that it rivals certain human skill sets, improving the overall service offering and delivering solutions faster.

Will AI replace the intellectual activities performed by lawyers?

The idea of AI replacing the intellectual activities performed by a qualified lawyer is, at best, controversial (and already a heated topic of debate).

There are still many instances where AI may struggle to reach sensible outcomes, particularly in areas of the law which lack clear statute and straightforward application of legal principle – you only have to consider the concepts of unconscionability or foreseeability to illustrate the spectrum of grey in this area. In fact, the Bar Council has recently issued new guidance on the use of GenAI for barristers.

However, some AI capabilities are already chipping away at the legal process and are quickly being adopted and accepted by both lawyers and the courts.

For example, here at TrialView, our AI-powered platform for litigators offers intelligent learning to build timelines and spot patterns in large datasets, as well as leveraging GPT technology to ask questions or prompts about your datasets to quickly find relevant insights.

The hope with these technologies is that they become increasingly self-service and user-friendly. So long as these tools require a platform-certified legal technologist to use and benefit from, their costs will remain out of reach for the ordinary person. Making a platform intuitive (and considering who it is built to serve) can at least help reduce the labour costs associated with these solutions.

However, there are further accessibility factors: (i) the cost of the product itself and (ii) how well marketed it is. Even if an AI product is intuitive to use and affordable, how does a layperson come to know it is an option for their legal matter?

Key considerations in the challenge of access to justice

It’s clear that access to justice remains a crucial aim, and we still have far to go to achieve true accessibility. The means of doing so are multifaceted, and, as explored above, multiple factors currently contribute to the inaccessibility of justice.

Technology providers should consider not only the ease of use of their solutions (in an attempt to solve the information asymmetry between technologists, lawyers and clients), but also explore how they can competitively price and market their products in a way which expands the accessibility of their offerings to the wider market.

Conclusions on the use of AI in litigation

Although there is still a long way to go to achieve accessibility to justice, embracing AI tools will have a significant impact on reaching the goal of true justice, as long as these tools are accessible to all.

If you’d like to learn more about the use of AI in litigation and how tools such as TrialView can increase speed and efficiency by enhancing collaboration, compliance, and the overall strategic approach to legal proceedings, you can book a tailored demo to find out more about the benefits.

Alternatively, contact our team to learn more.

Confidentiality Rings: the Tech behind the Privacy

Confidentiality Rings are increasingly common in competition damages cases in the English jurisdiction, permitting parties in litigation to exchange confidential information in a safe space, with limited visibility to others.

As litigation becomes increasingly digitised, confidential data inexorably needs to be managed in digital form.

TrialView’s software is specifically designed to facilitate compliance with Confidentiality Orders or Undertakings, whilst providing access to all registered users in a single, combined workspace.

 

How does this work?

  • Sensitive information is only visible to parties within an inner ring (or multiple rings);
  • Permission settings on TrialView allocate users into teams and specific roles, with each role aligned to a level of privilege (a hypothetical sketch of this ring-and-role model follows below);
  • Special controls mean a user only gets access to the version of a document they are entitled to see;
  • Evidence presentation is designed so that different users can see different versions of the same document at the same time;
  • Version control facilitates consistency of pagination, structure and tabbing;
  • Watermarking ensures exports and bundles contain the correct confidentiality designation.

If you have an upcoming competition case, our consultants are happy to talk you through the process, from compliance to the tech.
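For the technically curious, below is a minimal, hypothetical Python sketch of how a ring-and-role permission check might work in principle. It illustrates the general concept only and is not TrialView’s implementation; every name in it (Role, DocumentVersion, version_for and so on) is invented for this example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a confidentiality-ring permission model.
# All names are invented for illustration; this is not TrialView's code.

@dataclass(frozen=True)
class Role:
    name: str
    privilege: int  # higher = more privileged (inner rings)

@dataclass
class DocumentVersion:
    content: str
    min_privilege: int  # lowest privilege allowed to see this version

@dataclass
class Document:
    doc_id: str
    versions: list[DocumentVersion] = field(default_factory=list)

    def version_for(self, role: Role) -> DocumentVersion | None:
        """Return the least-redacted version this role may see, if any."""
        visible = [v for v in self.versions if role.privilege >= v.min_privilege]
        return max(visible, key=lambda v: v.min_privilege, default=None)

# Example: an outer ring (client team) and an inner ring (external counsel)
# open "the same" exhibit and see different versions of it.
outer = Role("client team", privilege=1)
inner = Role("external counsel", privilege=2)

exhibit = Document("EXHIBIT-17", versions=[
    DocumentVersion("Pricing data: [REDACTED - inner ring only]", min_privilege=1),
    DocumentVersion("Pricing data: EUR 4.2m per annum", min_privilege=2),
])

print(exhibit.version_for(outer).content)  # redacted version
print(exhibit.version_for(inner).content)  # full version
```

Resolving the most privileged version a role may see, rather than returning a bare allow/deny, is what lets two users open “the same” document yet see different content, as described in the list above.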

 

TrialView & CIArb: AI in Practice

Moving from the hype to the practical application of AI in Arbitration, this collaborative session will offer an opportunity to explore how AI could/should be managed internally, and externally, by law firms and legal teams. Should law firms and arbitrators convene AI working groups to manage ethics & security concerns for all stakeholders? How can we work together to manage the privacy paradox, and do we need to be more concerned about AI in practice than any other technology? We will also look at concrete use cases for AI in disputes, with tips on good practice, laying a foundation for the future of the arbitral landscape.

Moderators: Elizabeth Chan, Stephen Dowling, Katrina Limond, Sophie Nappert, Pavan Paw, Paul Sills FCIArb and Lizzie Williams.

TrialView clients and followers can register for FREE on the following link.