When AI gets it wrong

01 October 2025
Recently, another set of legal representatives was caught out when an Artificial Intelligence (AI) tool fabricated case law that was then included in heads of argument. The Association of Arbitrators (Southern Africa) NPC also recently published guidelines on the use of AI in arbitrations and adjudications. In this article, we look at what these guidelines aim to address.

In June 2025, in the case of Northbound Processing (Pty) Ltd v South African Diamond and Precious Metals Regulator and Others (2025/072038) [2025] ZAGPJHC 661, a junior advocate used an AI tool called Legal Genius, which fabricated case law, and the matter was referred to the Legal Practice Council for investigation.

This is one of many examples of legal representatives being caught out by AI tools supplying them with fictitious case law and precedents, resulting not only in embarrassment but potentially also in disciplinary action against the attorneys and advocates involved.

It has become increasingly important for the bodies governing such proceedings to provide legal practitioners with guidelines on the use of AI tools.

The Association of Arbitrators (Southern Africa) NPC (“Association”) published guidelines on the use of AI in arbitrations and adjudications, intended to assist parties and tribunals using AI in adjudications or ad hoc arbitrations.

The Association indicated that these guidelines are important to its members for the following reasons:

  • to harness the benefits of AI responsibly;
  • to safeguard the integrity of proceedings;
  • to address ethical and legal risks;
  • to provide international and local consistency; and
  • to promote confidence and trust.

The Association indicated in the guidelines that, in every case, the parties should reach an agreement on the use of AI in the proceedings and on whether the Tribunal should have the power to issue directives regarding its use.

AI can be used in several ways, including: 

  • to analyse and compile facts;
  • to conduct research and analysis;
  • to review and manage documentation;
  • to speed up decision making;
  • to generate text;
  • to facilitate proceedings; and
  • to automate administrative tasks.

AI can also be used as a tool to reduce costs, to empower parties with fewer resources, and to assist unrepresented parties.

The core principles for the use of AI published in the guidelines are the following: 

  • Accountability: Tribunals are accountable for all aspects of proceedings, including the outcome. AI tools should not substitute for human judgment and analysis.
  • Confidentiality and security: AI tools present challenges to maintaining confidentiality as they collect, store, or even train on user data. Parties should therefore be careful when using AI and consider, amongst other things, the privacy policies of the tools before use.
  • Transparency and disclosure: AI usage should be transparent to all parties involved.
  • Fair decision-making: AI can support fair decision-making, but it risks reproducing biases present in its training data. Tribunals and parties must monitor AI outputs and verify information before relying on it.

One of the significant risks of AI tools is that, in some cases, they generate factually incorrect information, including fabricated legal authorities or cases, a phenomenon known as hallucination. For this reason, parties involved in arbitrations or adjudications should adopt a formal written agreement addressing issues such as the permitted use of AI tools, limitations on their use, disclosure obligations, confidentiality and other safeguards, and the Tribunal’s rights of investigation and direction in respect of the use of AI tools.

The Tribunal should also ensure that the use of AI tools at the hearing does not compromise the integrity of the proceedings. This may include the testing and approval of AI translation systems, the use of AI transcription with the parties’ consent, verification of the accuracy and confidentiality of the transcriptions, and ensuring that all parties have equal access and technical capability.

Lastly, the Association indicated that, during the decision-making and award process, the use of AI must be strictly controlled by the Tribunal. The delegation of decision-making, including delegation to AI tools, should be strictly prohibited.

Tribunals should ensure, as far as possible, that the award will withstand challenge. To this end, consideration should be given to the extent to which AI was used during the preparation of the award, the extent to which that use should be disclosed within the award, and any other aspects that demonstrate that the award is essentially the work of the Tribunal, assisted, where indicated, by AI tools.

It is therefore clear that the Association regards AI tools as potentially beneficial to arbitration and adjudication proceedings, provided that the human factor in decision-making and awards is never removed. It remains vital that all information provided by AI tools is verified to ensure that it is correct and reliable.

 

Disclaimer: This article is the personal opinion/view of the author(s) and does not necessarily present the views of the firm. The content is provided for information only and should not be seen as an exact or complete exposition of the law. Accordingly, no reliance should be placed on the content for any reason whatsoever, and no action should be taken on the basis thereof unless its application and accuracy have been confirmed by a legal advisor. The firm and author(s) cannot be held liable for any prejudice or damage resulting from action taken based on this content without further written confirmation by the author(s). 
