AI in Arbitration: Will the EU AI Act Stand in the Way of Enforcement?
This guest post was written by Ezzatollah Pabakhsh, Master’s Student at the University of Antwerp
The European Union has taken an unprecedented step by regulating artificial intelligence (AI) through the EU AI Act, which is the world’s first comprehensive legal framework for AI governance. According to Recital 61, Article 6(2) and Annex III, 8(a), AI tools used in legal or administrative decision-making processes—including alternative dispute resolution (ADR), when used similarly to courts and producing legal effects—are considered high risk. These tools must comply with the strict requirements outlined in Articles 8 through 27.
These provisions are designed to ensure transparency, accountability, and respect for fundamental rights. This obligation will take effect on August 2, 2026, according to Article 113. Notably, under Articles 2(1)(c) and (g), the Act has extraterritorial scope: it applies to any AI system that affects individuals within the European Union, regardless of where the system is developed or used, and it extends to providers and deployers outside the EU whose output is used within the Union. This raises a critical question: can non-compliance with the EU AI Act serve as a basis for courts in EU Member States to refuse recognition or enforcement of an arbitral award on procedural or public-policy grounds?[1]
Consider the following scenario: Two EU-based technology companies, one Belgian and one German, agree to resolve their disputes through US-seated arbitration. Suppose the ADR center uses AI-powered tools that do not comply with the EU AI Act's high-risk system requirements. How would enforcement of the resulting award play out before national courts in the EU?
This scenario presents a direct legal conflict. If the winning party seeks to enforce the award in a national court of an EU Member State, two well-established legal grounds for refusing enforcement may arise.[2] First, the losing party may invoke Article V(1)(d) of the 1958 New York Convention, together with the applicable national arbitration law. They could argue that reliance on AI systems that do not comply with the EU AI Act constitutes a procedural irregularity, as it departs from the parties’ agreed arbitration procedure and undermines the integrity of the arbitral process.[3] Second, under Article V(2)(b) of the Convention, the enforcing court may refuse recognition on its own motion if it finds that using non-compliant AI violates the forum’s public policy, especially when fundamental rights or procedural fairness are at stake.[4] The following sections examine these two grounds in more detail.
Scenario 1: Procedural Irregularity under Article V(1)
Imagine that the ADR center uses an AI tool to assist the tribunal in drafting the award during the proceedings. This AI system uses complex algorithms that cannot produce transparent, human-readable explanations of how key conclusions were reached. The final award relies on these outputs, yet it offers no meaningful reasoning or justification for several significant findings. Furthermore, the tribunal does not disclose the extent to which it relies on the AI system, nor is there any clear evidence of human oversight in the deliberation process.
When the losing party in Belgium contests enforcement of the award, they invoke Article V(1)(d) of the New York Convention, arguing that the arbitral procedure did not align with the parties’ expectations or the applicable law. This ground also appears in Article 1721 of the Belgian Judicial Code (BJC), which is inspired by Article 36 of the UNCITRAL Model Law and largely mirrors the grounds of Article V of the New York Convention. Among these grounds, two are especially relevant to the use of AI in the arbitral process and are central to the objection in this case.
First, under Article 1721(1)(d), a party may argue that the award lacks proper reasoning,[5] which violates a core procedural guarantee under Belgian law.[6] This requirement ensures that parties can understand the legal and factual basis for the tribunal’s decision and respond accordingly.[7] In this case, however, the award’s reliance on opaque, AI-generated conclusions, particularly those produced by “black box” systems, renders the reasoning inaccessible and legally inadequate.[8] The EU AI Act further reinforces this objection. Articles 13, 16, and 17 require transparency, traceability, and documentation for high-risk AI systems. Meanwhile, Article 86 grants affected persons a limited right to explanation where a deployer’s decision is based on an Annex III system and produces legal effects. If an award fails to meet these standards, it may not align with Belgian procedural norms.
Second, under Article 1721(1)(e), a party may argue that the tribunal’s composition or procedure deviated from the parties’ agreement or the law of the seat. For example, if the arbitration agreement contemplated adjudication by human arbitrators and the tribunal instead relied on AI tools that materially influenced its reasoning without disclosure or consent, this could constitute a procedural irregularity. According to Article 14 of the EU AI Act, there must be effective human oversight of high-risk AI systems. Where such oversight is lacking or merely formal and AI outputs are adopted without critical human assessment, the legitimacy of the proceedings may be seriously undermined. Belgian courts have consistently held that procedural deviations capable of affecting the outcome may justify refusal of recognition and enforcement.[9]
Scenario 2: Public Policy under Article V(2)(b)
In this scenario, the court may refuse to enforce the award on its own initiative if it is found to be contrary to public policy[10] under Article V(2)(b) of the New York Convention, Article 34(2)(b)(ii) of the UNCITRAL Model Law, or Article 1721(3) of the BJC. These provisions allow courts to deny recognition and enforcement if the underlying procedure or outcome conflicts with fundamental principles of justice in national and European legal systems.[11]
In comparative international practice, public policy has both substantive and procedural dimensions. When a breach of fundamental and widely recognized procedural principles renders an arbitral decision incompatible with the core values and legal order of a state governed by the rule of law, procedural public policy is engaged. Examples include violations of due process, lack of tribunal independence, breach of equality of arms, and other essential guarantees of fair adjudication.[12]
In this case, the use of non-transparent AI systems may fall within this category.[13] If a tribunal relies on these tools without disclosing their use or without providing understandable justifications, the process could violate Article 47 of the Charter of Fundamental Rights of the European Union, which guarantees the right to a fair and public hearing before an independent and impartial tribunal. Such a violation, supported by existing case law, could provide a reasonable basis for refusal on public-policy grounds.[14] When applying EU-relevant norms, Belgian courts are bound to interpret procedural guarantees in accordance with the Charter.[15]
Comparative case law provides additional support. In Dutco, for example, the French Cour de cassation annulled an arbitral award for violating the equality of arms in the tribunal’s constitution, which is an archetypal breach of procedural public policy.[16] Similarly, in a 2016 decision under § 611(2)(5) ZPO, the Austrian Supreme Court annulled an award where the arbitral procedure was found to be incompatible with Austria’s fundamental legal values.[17][18] These rulings confirm that courts may deny enforcement when arbitral mechanisms, especially those that affect the outcome, compromise procedural integrity.
Belgian courts have consistently held that recognition and enforcement must be refused where the underlying proceedings are incompatible with ordre public international belge, particularly where fundamental principles such as transparency, reasoned decision-making, and party equality are undermined.[19] In this context, reliance on non-transparent AI—without adequate procedural safeguards—may constitute a violation of procedural public policy. As a result, enforcement may lawfully be denied ex officio under Article V(2)(b) of the New York Convention and Article 1721(3) of the BJC, thereby preserving the integrity of both the Belgian and broader EU legal frameworks. Ultimately, courts retain wide discretion under public-policy grounds to decide whether or not to enforce AI-assisted awards.[20]
These potential refusals of enforcement within the EU highlight a broader trend: domestic procedural safeguards are increasingly shaped by global regulatory developments. This prompts the question whether the EU’s approach to AI in arbitration will remain a regional standard or evolve into an international benchmark.
The EU AI Act as a Global Regulatory Model?
The EU has a proven history of establishing global legal benchmarks—rules that, while originating in Europe, shape laws and practices far beyond its borders.[21] The GDPR is the clearest example of this. Its extraterritorial scope, strict compliance obligations, and enforcement mechanisms have prompted countries ranging from Brazil to Japan to adopt similar data protection frameworks.[22]
In arbitration, a comparable pattern could emerge. If EU courts apply the EU AI Act’s high-risk requirements when deciding on the recognition and enforcement of arbitral awards, other jurisdictions may adopt comparable standards, encouraging convergence in AI governance across dispute resolution systems. Conversely, inconsistent enforcement approaches could foster fragmentation rather than harmonisation. In any case, the Act’s influence is already being felt beyond Europe, prompting arbitration stakeholders to address new questions regarding procedural legitimacy, technological oversight, and cross-border enforceability.
Conclusion
The interplay between the EU AI Act and the enforcement of arbitral awards highlights how technological regulation is shaping the concept of procedural fairness in cross-border dispute resolution. Whether the Act becomes a catalyst for global standards or a source of jurisdictional friction, parties and institutions cannot ignore its requirements. As AI tools move deeper into arbitral practice, compliance will become not just a regulatory obligation but a strategic necessity for ensuring the enforceability of awards in key jurisdictions.
[1] Tariq K Alhasan, ‘Integrating AI Into Arbitration: Balancing Efficiency With Fairness and Legal Compliance’ (2025) 42 Conflict Resolution Quarterly 523, 524.
[2] ibid 525.
[3] Jordan Bakst and others, ‘Artificial Intelligence and Arbitration: A US Perspective’ (2022) 16 Dispute Resolution International 7, 23; Sanjana Reddy Jeeri and Vinita Singh, ‘Soft Law, Hard Justice: Regulating Artificial Intelligence in Arbitration’ (2024) 17 Contemporary Asia Arbitration Journal 191, 222.
[4] Sean Shih and Eric Chin-Ru Chang, ‘The Application of AI in Arbitration: How Far Away Are We from AI Arbitrators?’ (2024) 17 Contemporary Asia Arbitration Journal 69, 81.
[5] Horst Eidenmuller and Faidon Varesis, ‘What Is an Arbitration? Artificial Intelligence and the Vanishing Human Arbitrator’ (2020) 17 New York University Journal of Law and Business 49, 72.
[6] Dilyara Nigmatullina and Beatrix Vanlerberghe, ‘Arbitration Related Lessons: Insights from the Supreme Courts around the World’ (2020) 2020 b-Arbitra | Belgian Review of Arbitration 307, 354.
[7] Gizem Kasap, ‘Can Artificial Intelligence (“AI”) Replace Human Arbitrators? Technological Concerns and Legal Implications’ (2021) 2021 Journal of Dispute Resolution 209, 230, 249.
[8] Shih and Chang (n 4) 79.
[9] Koen De Winter and Michaël De Vroey, ‘Belgium’ in Baker McKenzie International Arbitration Yearbook: 10th Anniversary Edition 2016–2017 (Baker McKenzie 2017), 81, 82, 85.
[10] Eidenmuller and Varesis (n 5) 80–81; Bernard Hanotiau, ‘Arbitrability; Due Process; and Public Policy Under Article V of the New York Convention Belgian and French Perspectives’ (2008) 25 Journal of International Arbitration 721, 729–730.
[11] Kasap (n 7) 252; Annabelle O Onyefulu, ‘Artificial Intelligence in International Arbitration: A Step Too Far?’ (2023) 89 Arbitration: The International Journal of Arbitration, Mediation and Dispute Management 56, 63.
[12] Nigmatullina and Vanlerberghe (n 6) 351–352.
[13] Shih and Chang (n 4) 86.
[14] Nigmatullina and Vanlerberghe (n 6) 353.
[15] A de Zitter, ‘The Impact of EU Public Policy on Annulment, Recognition and Enforcement of Arbitral Awards in International Commercial Arbitration’ (University of Oxford 2019) 5, 251–253.
[16] Stefan Kröll, ‘Siemens – Dutco Revisited? Balancing Party Autonomy and Equality of the Parties in the Appointment Process in Multiparty Cases | Kluwer Arbitration Blog’ <https://legalblogs.wolterskluwer.com/arbitration-blog/siemens-dutco-revisited-balancing-party-autonomy-and-equality-of-the-parties-in-the-appointment-process-in-multiparty-cases/> accessed 18 August 2025; Nigmatullina and Vanlerberghe (n 6) 351.
[17] Alexander Zollner, ‘Austrian Supreme Court Set aside an Arbitral Award Due to a Violation of the Procedural Ordre Public’ (Global Arbitration News, 21 June 2017) <https://www.globalarbitrationnews.com/2017/06/21/austrian-supreme-court-set-aside-arbitral-award-for-violation-of-public-policy/> accessed 18 August 2025; Franz Schwarz and Helmut Ortner, ‘Austria’ in Giacomo Rojas Elgueta, James Hosking and Yasmine Lahlou (eds), Does a Right to a Physical Hearing Exist in International Arbitration? (ICCA Reports, 2020) 26 <https://www.arbitration-icca.org/right-to-a-physical-hearing-international-arbitration> accessed 5 August 2025.
[18] Nigmatullina and Vanlerberghe (n 6) 351.
[19] Alhasan (n 1) 5–6.
[20] Shih and Chang (n 4) 87; Hanotiau (n 10) 737.
[21] Arturo J Carrillo and Matías Jackson, ‘Follow the Leader? A Comparative Law Study of the EU’s General Data Protection Regulation’s Impact in Latin America’ (2022) 16 ICL Journal 177, 178; Michelle Goddard, ‘The EU General Data Protection Regulation (GDPR): European Regulation That Has a Global Impact’ (2017) 59 International Journal of Market Research 703, 703–704.
[22] Carrillo and Jackson (n 21) 242–245.