AI and the Global Financial System: Innovative Risks and Regulatory Challenges

Lucy Walsh, University College Cork, BCL Law Clinical (4th Year)

Lucy is a final-year BCL Law (Clinical) student at University College Cork. She will be graduating in 2025, and has a training contract with Matheson LLP to commence in 2027. During her third year placement in Matheson’s Cork Office, working as part of the Technology and Innovation team, she developed a keen interest in Data Protection and Artificial Intelligence. Her passion for this area of law deepened further when she undertook a module on Privacy and Data Protection.

Having completed a Financial Services module this year, Lucy became interested in the unique intersection between frontier technologies such as AI and the financial services sector, particularly the risks and rewards such innovation brings about in this context.



I. INTRODUCTION

The evolution of international finance has been characterised by a dynamic interplay between crises and regulatory responses, with each significant disruption catalysing the development of more robust legal and policy frameworks.[1] This article examines the ‘transformative force’ that artificial intelligence (“AI”) represents in the Global Financial System (“GFS”),[2] with its ‘arsenal’ of cutting-edge technological advancements.[3] While such advancements offer profound merits, they also have the potential to exacerbate risks to the stability of the financial services industry.[4] The phrase ‘regulatory tightrope’ aptly describes the delicate balance that regulators must strike between encouraging innovation and imposing necessary regulations to ameliorate the risks generated by these developments.[5] The potential impacts of AI remain difficult to quantify and will depend on the scale and mechanisms of its adoption.[6] Nevertheless, a robust governance framework and risk management approach tailored to AI-specific challenges will be essential to mitigate these risks.[7] This article will examine AI’s impact on financial services, including the risks and rewards of such innovation. Following this, the regulatory landscape will be critically analysed in light of this balancing exercise.


II. AI-DRIVEN INNOVATION:
THE RISKS AND REWARDS IN FINANCIAL SERVICES

Identifying the challenges which AI introduces to the GFS will effectively set the scene for a discussion on how its risks can be minimised and rewards maximised through effective regulation.[8] To contextualise this analysis, it is necessary to first establish an understanding of the concept of AI and its applications.

a. Defining AI

AI is an area of computer science that focuses on the creation of intelligent machines capable of performing tasks that typically require human intelligence.[9] By utilising algorithmic decision-making, AI can enhance human cognitive capacity, allowing decision-makers to concentrate on other important deliberations.[10] AI systems infer from the input they receive how to generate outputs such as predictions, content, recommendations, or decisions. AI systems vary in their levels of autonomy and adaptiveness.[11] Although AI has been in existence for many years, increases in computing power coupled with increases in the availability and quantity of data have led to a resurgence of interest in its potential applications.[12] Consequently, AI has become deeply embedded in our global social fabric,[13] with its algorithms and predictive capabilities permeating every aspect of the financial ecosystem, from personal banking to global market analysis.[14]

b. Benefits in the Financial Services Context

AI has brought about unparalleled innovation in the financial sector, and is propelling the GFS towards a future ripe with transformative growth opportunities. In financial forecasting and analysis, AI has demonstrated significant value across budgeting, derivatives pricing, portfolio optimisation, and robust risk modelling frameworks.[15] It markedly improves transaction security through real-time monitoring and analysis of transaction patterns to identify unusual activity.[16] AI facilitates the safeguarding of sensitive information through data protection using cutting-edge encryption techniques.[17] Moreover, the consumer-facing aspects of financial services have been equally transformed by AI. Innovative fintech applications use AI to recommend tailored investment strategies, savings goals, and insurance plans aligned to each user’s risk preference and personal circumstances.[18] While AI offers a plethora of significant benefits, such as operational efficiency and faster decision-making,[19] the main purpose of this article is to assess the risks that accompany these developments and how they are addressed.

c. Risks in the Financial Services Context

With significant rewards come inherent risks,[20] and pursuing innovation is dangerous if risk-avoidance is not baked into the regulatory approach.[21] The world’s financial markets are fragile at the best of times, and there are several systemic risks presented by the use of AI in financial services.[22]

Firstly, the reliance on AI heightens concerns about cyber-attacks, which rank among the top worries in the financial industry.[23] Sharing data with third-party vendors can expose sensitive information,[24] and introduce severe consequences such as phishing, ‘data poisoning attacks’,[25] encryption of company files for ransom, or prompt injection attacks which aim to alter the model’s behaviour to serve the attacker’s purpose.[26] Cyber threats can bring down entire corporate networks and access confidential data even if high-tech security systems protect those networks.[27] Financial companies which integrate AI systems without first updating their cybersecurity protocols create dangerous fractures in outdated systems that leave wide gaps for exploitation.

Additionally, bias and discrimination pose significant risks. The idea that AI can be truly impartial is a myth, as bias is often embedded both in the algorithms themselves and in the data they consume.[28] The consequences can include confabulation of facts,[29] the spread of misinformation and deepfakes,[30] and unjust decisions that exclude some groups from socially desirable insurance markets or perpetuate disparities in access to credit through algorithmic discrimination.[31] A machine with fixed objectives, let loose in a highly complex environment, will exhibit unexpected behaviour.[32] The ‘unknown-unknowns’ inherent in such an environment, together with the inability to specify fixed, comprehensive, and immutable objectives, raise fundamental questions of trust for any financial authority, human or AI.[33] Further amplifying such issues is the fact that Machine Learning models often function like ‘black boxes’, as their decision process can be opaque and hard to decipher in terms of process, steps and methodology followed.[34] This is particularly problematic in relation to credit scoring applications, where a decision to reject a credit application cannot be explained. This risks undermining the transparency and accountability of the financial sector.[35] On top of this, the tendency of AI models to hallucinate, producing false or misleading outputs, amplifies these risks.[36]

Compounding these concerns, one of the most insidious threats to the financial sector brought about by AI is operational failure due to over-reliance on automated systems.[37] AI technology can give individuals the ‘illusion of omniscience’, and as Plato forewarned, those who rely too heavily on external aids ‘seem to know many things, when they are for the most part ignorant’.[38] Therefore, careful regulation is crucial to mitigate these risks and ensure that AI technologies enhance rather than undermine the stability of the financial system. As Gary Gensler, Chair of the US Securities and Exchange Commission, warns, the immediate threat from AI is not a ‘robot takeover’, but rather the potential for an AI-driven financial crisis within a decade if there is no regulatory intervention.[39]


III. GLOBAL REGULATORY LANDSCAPE

The following section will critically analyse various regulatory approaches globally and how regulatory challenges are being tackled. This discussion is underpinned by the balancing exercise between risk and innovation discussed above. Financial markets and intermediaries have been likened to ‘the brain of a living organism’, with financial policy and regulation stepping in when there is a ‘brain malfunction’.[40] This analysis shall examine the varied AI regulatory approaches and highlight how regulators can benefit from adopting similar principles and objectives. A more globally coordinated approach to AI regulation, rather than a competitive race for the most advanced framework, would more effectively keep pace with rapidly evolving technology while safeguarding the stability of the financial system.

a. Regulatory Regimes

Though the regulatory landscape for AI is still taking shape, three major approaches are emerging, in competition for global dominance: rights-driven, market-driven, and state-driven regulatory models.[41]

The European Union is seen as a forerunner in AI regulation, setting the global tone for addressing the risks posed by AI.[42] The EU’s ‘rights-driven’ model emphasises protection of societal and individual rights and the equitable distribution of digital transformation gains.[43] The EU AI Act, approved in February 2024, signifies the EU’s ambition to spearhead global AI regulation and establish a universal standard.[44] This legislation introduces a risk-based approach that classifies AI systems into four categories: unacceptable, high, limited, and minimal risk. The most egregious, such as social scoring tools or manipulative biometric AI tools, are banned outright.[45] This, alongside the EU’s NIS2 Directive,[46] addresses cybersecurity risks in the financial sector by imposing stringent requirements on high-risk AI systems, which encompass the majority of AI tools currently deployed in financial corporations.

These high-risk systems, because they automate consequential decisions, are fertile ground for bias, opacity, and error if they are left unchecked. The compliance requirements include strict standards as to data quality, documentation, oversight, transparency, and accountability.[47] Black-box opacity is met with legal demands for explainability and data traceability.[48] Companies must be able to articulate how a model was designed, what data shaped it, and why it delivers particular outcomes.[49] This transparency imperative is essential as it ensures that AI remains subject to necessary scrutiny, alleviating the distrust that opacity can breed in the financial sector. Further, the AI Act tackles the risk of algorithmic bias by requiring that high-risk AI systems undergo bias testing before deployment and mandating that the datasets used to train AI models are documented to ensure that they are free from bias.[50] The EU AI Act sets a high bar for AI regulation and is likely to shape global best practices. Brazil,[51] alongside other countries globally, has already enacted or proposed AI-specific regulatory frameworks that align with the EU’s risk-based approach.[52]

Similarly to the EU, China has established binding regulatory frameworks, such as the National AI Development Plan, which outlines the country’s strategic objectives for AI development and deployment.[53] However, China has taken a more centralised, state-driven approach, with AI regulation integrated into broader economic and digital strategies.[54] This approach prioritises the development of AI as a national strategic asset and utilises technology infrastructure for political objectives, including becoming the world leader in AI by 2030. The Cyberspace Administration of China is in charge of enforcing data privacy and cybersecurity regulations, such as the People’s Republic of China’s Cybersecurity Law.[55]

The US, on the other hand, has adopted a more ‘soft law’ approach, primarily relying on a sectoral, ‘market-driven’ regulatory model, with individual agencies providing oversight in their respective domains.[56] This approach emphasises self-regulation and derives from scepticism of government intervention, centred on the belief that too much regulation will curb innovation and leave the US trailing in the high-stakes AI technology race.[57] The US has taken steps to address the risk posed to the financial sector by AI, such as introducing the National Institute of Standards and Technology (NIST) guidelines on preventing and mitigating cyberattacks on AI systems and models.[58] However, this highly distributed, patchwork regulatory environment raises concerns over potential gaps in the governance of AI, which is of particular concern given how rapidly AI is evolving.[59]

Each of these regulatory regimes provides valuable insights into different strategies for mitigating the significant risks posed by AI. However, the implementation of these strategies varies, reflecting the unique objectives and priorities of each country. This variance reveals the challenges that arise when attempting to create an effective, unified regulatory framework for AI. While the fragmented approach may work in the short term, it risks creating gaps in AI oversight that could leave the financial system vulnerable to unforeseen, AI-driven risks.

b. Regulatory Challenges

Governing AI has been likened by academics to the challenge of ‘herding cats’, in that it has a mind of its own.[60] One of the key challenges for regulators is that the speed and scale of the adoption of AI threatens to outpace the regulatory responses to address risks.[61] This problem is compounded by the fact that AI technologies, especially machine learning models, operate in highly complex and often unpredictable ways.[62] In this fast-paced AI era, it seems regulators are continuously playing catch up: ‘[I]t takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run twice as fast as that’.[63] A global race to set the standards that should govern and regulate AI is underway, with a multitude of regulations and a wide range of differing priorities worldwide.[64]

The fragmented approaches to AI regulation across the globe, both generally and in the financial sector, pose a considerable challenge. AI applications often transcend borders, creating complexities in navigating different regulatory approaches. Certain jurisdictions have implemented comprehensive AI governance frameworks, whereas others have yet to take significant steps to enhance their governance structures. National Competent Authorities are under pressure to ensure that financial institutions comply with AI governance and risk management standards.[65] However, this responsibility is complicated by the fact that governments are still experimenting with AI rules and not systematically coordinating with each other to harmonise these efforts.[66]

Boards and senior management of financial institutions must possess a sufficient level of AI expertise or familiarity to effectively fulfil their governance responsibilities.[67] This includes critically evaluating AI-driven decisions and assessing their broader impact on the institution’s business strategy.[68] In the absence of this expertise, maintaining high-performing AI systems that are transparent and comprehensible to internal stakeholders becomes a significant challenge.[69] Such knowledge gaps also hinder an institution’s ability to identify and manage AI-related risks. This highlights the need for a coordinated global effort to provide regulatory guidance and professional training. Without this, risks such as regulatory arbitrage, where firms relocate operations to jurisdictions with more lenient regulations to circumvent stricter compliance obligations, could worsen, potentially undermining the effectiveness of global AI oversight.[70]


IV. HARMONISATION AND THE FUTURE OF REGULATION

Klaas Knot, the Chair of the Financial Stability Board notes that AI is ‘neither the great villain nor the great saviour of our time. It is a technology that we can use to our benefit, but only if we implement the right policies and regulation’.[71] In an industry where financial transactions transcend borders, the imperative of international regulatory harmonisation cannot be overstated.[72] Relying solely on national approaches could lead to fragmented global regulations, as nations compete for AI development and potentially implement inadequate rules.[73] Without better alignment of international objectives, the benefits of AI may be unevenly distributed, and efforts to mitigate risks could be rendered ineffective.

a. Towards a Unified Global Approach

Continued dialogue and collaborative efforts amongst industry leaders, regulators, and international organisations will be key to achieving a more unified approach to AI regulation.[74] As AI becomes a cornerstone of the global economy, the need for interoperable regulatory standards will grow and become imperative.[75] One possible approach is the establishment of international bodies tasked with setting AI governance standards, drawing from the experience and insights of various jurisdictions.[76]

It is submitted that the US and EU should jointly invest in responsible AI research and open-source tools that better enable responsible AI implementation. This collaboration could also help to create a global standard that benefits both advanced economies and emerging markets. If the US and EU can deepen their policy collaboration on AI risk management, this will enable greater cooperation by regulatory agencies.[77]

The approaches could become more harmonised if the EU AI Act allows for more flexible sectoral application to enable future EU-US cooperation.[78] Given AI’s borderless nature, aligning AI governance frameworks across jurisdictions could make regulatory compliance easier for multinational companies.

From a US perspective, the AI regulatory plans of federal agencies should be executed and used to design strategic AI governance with an eye towards alignment with the EU.[79] However, in the aftermath of the 2024 US Presidential Election, the direction of AI regulation remains fraught with ambiguity, as President Trump’s administration is not traditionally associated with regulatory expansion.[80]

b. Regulatory Adaptability and Global Financial Stability

Innovation and financial stability need not be mutually exclusive. In fact, they can be mutually reinforcing. By embracing adaptive financial regulatory frameworks and increasing dynamic global collaboration, technological advancement can continue while minimising the risk of crisis.[81] Financial regulatory frameworks must not only be adaptable but also anticipatory, enabling regulators to foresee and mitigate ‘events like no other’ before they impact the financial system, through stress tests and sufficient human oversight to prevent cascading breakdowns.[82] Regulators should prioritise agility in their oversight to keep pace with the rapid evolution of technology and quickly respond to new developments.[83]

A key aspect of ensuring effective financial regulation in the age of AI is cross-border sharing of regulatory strategies, supervisory insights, technical standards, and incident data related to AI use in financial services.[84] As AI technology evolves, financial regulators will be better equipped to mitigate AI-amplified risks by collaborating internationally. Insights shared across borders will allow regulators to address emerging threats more comprehensively, ensuring that innovation proceeds without compromising the stability of the GFS. The future effectiveness of global AI governance in financial services will depend on deepened knowledge sharing on a number of levels. This should not be limited to major economies like the EU and US but should be expanded across the globe, emphasising inclusion of emerging markets, where AI regulatory frameworks are often less developed and AI could be challenging to implement.[85] Global governance must ensure that developed and developing economies work together so the innovative benefits of AI can be shared, and less advanced economies are not exposed to disproportionate risks.[86]

This knowledge sharing could be prioritised on a wide range of levels, for instance, standards development, AI sandboxes, large public AI research projects, and regulator-to-regulator exchanges.[87] Regulatory sandboxes foster adaptability as they allow companies to test AI solutions in a controlled environment, with regulatory oversight, enabling experimentation with new AI technologies in real-world settings. This enables innovative solutions without the risk of immediate compliance failure.[88] Notably, in the EU, Singapore, and the UK, AI regulatory sandboxes are being utilised to navigate the rapid growth of AI and address regulators’ lack of familiarity with the technology.[89] Such sandboxes should be expanded globally, creating collaborative spaces for cross-border experimentation. Static regulatory models are insufficient considering the disruptive potential which AI holds; adaptive policies must be implemented to ensure that the risks posed to the stability of the GFS are reduced.[90]


V. CONCLUSION

In the words of the Chinese Premier Li Qiang, ‘AI is everywhere, it seems omnipotent, but like other technologies, AI is a double-edged sword’.[91] It is vital that in the ‘tale of the two intertwined narratives’ of innovation and regulation, the march of progress is matched by the stride of security and ethical responsibility.[92] Through added efficiency, opportunity, and speed, AI can revolutionise the GFS. However, these rewards can also generate significant threats to the stability and functioning of the GFS. This piece examined the current global regulatory architecture for AI and the various approaches which have been taken globally to minimise such risks while promoting innovation. While there are aspirations for a stricter global approach towards regulation, this piece argues that we should strive for a flexible, harmonised approach, guided by common objectives of financial stability, consumer protection, and AI governance.

The future of AI regulation in financial services lies in finding this balanced approach that encourages innovation while safeguarding the future stability of the financial system. Ongoing collaboration between regulators, industry stakeholders, and international bodies will be essential to ensure that AI technologies are deployed ethically, transparently, and in a manner that benefits society as a whole. In a dynamic financial system, all participants must continually focus on improving resilience and identifying vulnerabilities.[93] As history has shown, risks are often the greatest when they appear the least, and financial market participants are most vulnerable when they believe they know all the answers.[94]


[1] Paul Egan, Kieran McQuinn and Conor O’Toole. ‘17 Years After the Start of the Global Financial Crisis (GFC), Where are We Now With Credit and House Prices in the Irish Residential Market?’ (2024) ESRI Working Paper No 779 1.

[2] Haider Ali Javaid, ‘The Future of Financial Services: Integrating AI for Smarter, More Efficient Operations’ (2024) 1(2) MZ Journal of Artificial Intelligence 3.

[3] Miquel Noguer i Alonso and Foteini Samara Chatzianastasiou, ‘The Case for Artificial Intelligence Regulation in the Financial Industry’ (2024), accessed <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4831147> (7 November 2024).

[4] Inaki Aldasoro, Leonardo Gambacorta, Anton Korinek, Vatsala Shreeti and Merlin Stein, ‘Intelligent Financial System: How AI is Transforming Finance’ (2024) BIS Working Papers No 1194, Monetary and Economic Department 15, accessed <https://www.bis.org/publ/work1194.pdf> (8 November 2024).

[5] Douglas Lippoldt, ‘AI Innovation Concentration and the Governance Challenge’ (2024) CIGI Working Paper, No 292, 8, accessed <https://www.econstor.eu/bitstream/10419/299989/1/no292.pdf> (8 November 2024).

[6] Claire Brousse, Oliver Fliche, Jules Joyez and Julien Uri, ‘Artificial Intelligence Challenges for the Financial System’ (2024) Financial Stability Report, Banque De France, accessed <https://www.banque-france.fr/system/files/2024-10/FINANCIAL_STABILITY_REPORT_june_2024.pdf> (7 November 2024).

[7] John Giordani, ‘Understanding and Managing AI/ML Risks in Financial Services’ (2024) accessed <https://www.linkedin.com/pulse/understanding-managing-aiml-risks-financial-services-giordani-dia-hutpe/> (8 November 2024).

[8] World Economic Forum, ‘How to Manage AI’s Risks and Rewards’ (2018) accessed <https://www.weforum.org/stories/2018/01/how-to-manage-ais-risks-and-benefits/> (18 December 2024).

[9] Osonde A Osoba, William Welser IV, An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence (Rand Corporation, 2017) 3.

[10] ibid.

[11] Marko Grobelnik, Karine Perset and Stuart Russell, ‘What is AI? Can You Make a Clear Distinction Between AI and Non-AI Systems?’ (2024) OECD AI Policy Observatory, accessed <https://oecd.ai/en/wonk/definition> (18 December 2024).

[12] Financial Stability Board, ‘Artificial Intelligence and Machine Learning in Financial Services: Market Developments and Financial Stability Implications’ (2017) accessed <https://www.fsb.org/uploads/P011117.pdf> (18 December 2024).

[13] Sep Pashang and Olaf Weber, The Ethics of Artificial Intelligence for the Sustainable Development Goals (Springer, 2023).

[14] Noguer i Alonso (n 3) 22.

[15] Dr A Shaji George, ‘Finance 4.0: The Transformation of Financial Services in the Digital Age’ (2024) 2(3) PUIRP 109.

[16] Nurhadhinah Nadiah Ridzuan and Others, ‘AI in the Financial Sector: The Line between Innovation, Regulation and Ethical Responsibility’ (2024) 15(8) Information 432.

[17] ibid.

[18] Shaji George (n 15).

[19] Ridzuan (n 16).

[20] Clifford Chance, ‘AI and Risk for Financial Institutions’ (March 2019) Thought Leadership, accessed <https://www.cliffordchance.com/content/dam/cliffordchance/briefings/2019/03/ai-and-risk-for-financial-institutions.pdf> (18 December 2024).

[21] David Fagleman, Jesse Griffiths and Mick Mcateer, ‘AI in Financial Services: Avoiding The Big Risks’ (2023) accessed <https://financeinnovationlab.org/wp-content/uploads/2023/12/AI-in-Financial-Services-November-2023_Final-1.pdf> (18 December 2024).

[22] ibid.

[23] Hyun Song Shin, ‘Artificial Intelligence and the Economy: Implications for Central Banks’ (2024) BIS Annual Economic Report, accessed <https://www.bis.org/publ/arpdf/ar2024e3.htm> (18 December 2024).

[24] Aldasoro (n 4).

[25] Fagleman (n 21). ‘Data poisoning’ is where the manipulation of data at some stage of the lifecycle allows attacks to avoid detection and prompts AI to make harmful decisions or extract information.

[26] Shin (n 23).

[27] Shailendra Mishra, ‘Exploring the Impact of AI-Based Cyber Security Financial Sector Management’ (2023) 13(10) Applied Sciences, accessed <https://www.mdpi.com/2076-3417/13/10/5875> (18 December 2024).

[28] Jackie Kay, Atoosa Kasirzadeh and Shakir Mohamed, ‘Epistemic Injustice in Generative AI’ (2024), accessed <https://arxiv.org/pdf/2408.11441v1> (4 April 2025).

[29] Ziwei Ji and Others, ‘Survey of Hallucination in Natural Language Generation’ (2024), accessed <https://arxiv.org/pdf/2202.03629> (4 April 2025).

[30] Cristian Vaccari and Andrew Chadwick, ‘Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News’ (2020), accessed <https://journals.sagepub.com/doi/10.1177/2056305120903408> (3 April 2025).

[31] Aparna Calderón, ‘Regulatory Compliance and Supervision in AI Regime: Banks and FinTech’ (2020), accessed <https://helda.helsinki.fi/server/api/core/bitstreams/c529b572-94d5-4080-a578-31d36e0cfcfa/content> (7 November 2024).

[32] Jon Danielsson, Robert Macrae and Andreas Uthemann, ‘Artificial Intelligence and Systemic Risk’ (2022) 140 Journal of Banking and Finance 106290.

[33] ibid. Donald Rumsfeld, the former US Secretary of Defence, classified unknown-unknowns as events that are a complete surprise. Human regulators cannot foresee the unknown-unknowns, but they are reasonably well-equipped to respond to them.

[34] Calderón (n 31).

[35] ibid.

[36] Shin (n 23).

[37] ICAEW Insights, ‘Over-Reliance on Automation: A Cautionary Tale from Plato’ (2024) accessed <https://www.icaew.com/insights/viewpoints-on-the-news/2024/may-2024/overreliance-on-automation-a-cautionary-tale-from-plato> (18 December 2024).

[38] ibid.

[39] Politico, ‘Gensler’s Warning: Unchecked AI Could Spark Future Financial Meltdown’ (2024) accessed <https://www.politico.com/news/2024/03/19/sec-gensler-artificial-intelligence-00147665> (18 December 2024).

[40] Aldasoro (n 4).

[41] APCO, ‘Three Approaches to AI Governance’ (2023) accessed <https://apcoworldwide.com/blog/three-approaches-to-ai-governance/> (19 December 2024).

[42] Benjamin Cedric Larsen and Sabrina Kuespert, ‘Regulating General-Purpose AI: Areas of Convergence and Divergence Across the EU and the US’ (2024) Brookings, accessed <https://www.brookings.edu/articles/regulating-general-purpose-ai-areas-of-convergence-and-divergence-across-the-eu-and-the-us/> (19 December 2024).

[43] ibid.

[44] Noguer i Alonso (n 3).

[45] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance).

[46] Consolidated text: Directive (EU) 2022/2555 of the European Parliament and of the Council of 14 December 2022 on measures for a high common level of cybersecurity across the Union, amending Regulation (EU) No 910/2014 and Directive (EU) 2018/1972, and repealing Directive (EU) 2016/1148 (NIS 2 Directive) (Text with EEA relevance).

[47] AI Act (n 45), Article 16.

[48] Mishcon de Reya, ‘Addressing Bias in AI Systems Through the AI Act’ (2023), accessed <https://www.mishcon.com/news/addressing-bias-in-ai-systems-through-the-ai-act> (4 April 2025).

[49] ibid.

[50] AI Act (n 45).

[51] Bill No. 2338/2023 Brazil AI Act.

[52] Kennedys Law, ‘Looking Ahead: Global AI Governance Takes Shape – What to Expect from the EU and US’ (2024) accessed <https://kennedyslaw.com/en/thought-leadership/article/2024/2025-global-ai-governance-takes-shape-what-to-expect-from-the-eu-and-us/> (19 December 2024).

[53] China’s New Generation Artificial Intelligence Development Plan (2017); Victor Smart, ‘Global Approach to AI Governance is a “Big, Fragmented Mess”’ (2024), accessed <https://www.bankingriskandregulation.com/global-approach-to-ai-governance-is-a-big-fragmented-mess/> (20 December 2024).

[54] ibid.

[55] Cybersecurity Law of the People’s Republic of China, Passed November 7, 2016, Effective June 1, 2017; PWC, ‘A Comparison of Cybersecurity Regulations: China’ (2024) accessed <https://www.pwc.com/id/en/pwc-publications/services-publications/legal-publications/a-comparison-of-cybersecurity-regulations/china.html> (21 December 2024).

[56] Aldasoro (n 4).

[57] Smart (n 53).

[58] National Institute of Standards and Technology (NIST) accessed <https://www.nist.gov/cybersecurity> (20 December 2024); Morgan Lewis, ‘EU AI Act, US NIST Target Cyberattacks on AI Systems – Guidance and Reporting Obligations’ (2024) accessed <https://www.morganlewis.com/pubs/2024/07/eu-ai-act-us-nist-target-cyberattacks-on-ai-systems-guidance-and-reporting-obligations> (20 December 2024).

[59] Alex Engler, ‘The EU and US Diverge on AI Regulation: A Transatlantic Comparison and Steps to Alignment’ (2023) Brookings <https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/> accessed 19 December 2024.

[60] Tim Büthe and others, ‘Governing AI – Attempting to Herd Cats? Introduction to the Special Issue on The Governance of Artificial Intelligence’ (2022) 29(11) Journal of European Public Policy 1721.

[61] Araz Taeihagh, ‘Governance of Artificial Intelligence’ (2021) 40(2) Policy and Society 137.

[62] Esmat Zaidan and Imad Antoine Ibrahim, ‘AI Governance in a Complex and Rapidly Changing Regulatory Landscape: A Global Perspective’ (2024) 11 Humanities and Social Sciences Communications 1121.

[63] Lewis Carroll, Through the Looking-Glass, and What Alice Found There (Macmillan 1871) ch 2.

[64] Tom Wheeler, ‘The Three Challenges of AI Regulation’ (2023) Brookings <https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/> accessed 20 December 2024.

[65] AI Act (n 45).

[66] Tommaso Giardini, ‘AI Compliance is a Strategy Problem’ (2024) <https://www.imd.org/ibyimd/artificial-intelligence/ai-compliance-is-a-strategy-problem/> accessed 20 December 2024.

[67] Juan Carlos Crisanto and others, ‘Regulating AI in the Financial Sector: Recent Developments and Main Challenges’ (2024) FSI Insights on Policy Implementation No 63, Bank for International Settlements <https://www.bis.org/fsi/publ/insights63.pdf> accessed 20 December 2024.

[68] ibid.

[69] ibid.

[70] Filippo Lancieri, Laura Edelson and Stefan Bechtold, ‘AI Regulation: Competition, Arbitrage and Regulatory Capture’ (2024) ETH Library Working Paper <https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/708626/CLE_WP_2024_11.pdf?sequence=1&isAllowed=y> accessed 20 December 2024.

[71] Klaas Knot, ‘The AI Adventure – How Artificial Intelligence May Shape the Economy and the Financial System’ (Speech at the IMF World Bank Constituency Meeting, 11 July 2024) <https://www.bis.org/review/r240716c.pdf> accessed 18 December 2024.

[72] ibid.

[73] Peter Cihon, ‘Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development’ (2019) Future of Humanity Institute, University of Oxford.

[74] Heather Domin, ‘AI Governance Trends: How Regulation, Collaboration and Skills Demand are Shaping the Industry’ (2024) <https://www.weforum.org/stories/2024/09/ai-governance-trends-to-watch/> accessed 19 December 2024.

[75] ibid.

[76] Zaidan (n 62).

[77] Engler (n 59).

[78] ibid.

[79] ibid.

[80] Kennedys Law (n 52).

[81] Lyytinen Lescrauwaet and others, ‘Adaptive Legal Frameworks and Economic Dynamics in Emerging Technologies: Navigating the Intersection for Responsible Innovation’ (2022) 16(3) Institute for Law and Economic Studies 202 <https://www.researchgate.net/publication/375988655_Adaptive_Legal_Frameworks_and_Economic_Dynamics_in_Emerging_Tech-nologies_Navigating_the_Intersection_for_Responsible_Innovation> accessed 21 December 2024.

[82] Lori S Bennear and Jonathan B Wiener, ‘Adaptive Regulation: Instrument Choice for Policy Learning Over Time’ (2019) Draft Working Paper, Harvard Kennedy School <https://www.hks.harvard.edu/sites/default/files/centers/mrcbg/files/Regulation%20-%20adaptive%20reg%20-%20Bennear%20Wiener%20on%20Adaptive%20Reg%20Instrum%20Choice%202019%2002%2012%20clean.pdf> accessed 21 December 2024.

[83] Lescrauwaet (n 81).

[84] Crisanto (n 67).

[85] Keegan Fonte, ‘The Intersection of AI and Emerging Markets: Opportunities and Challenges’ (2024) <https://business.cornell.edu/hub/2024/08/13/intersection-ai-emerging-markets-opportunities-challenges/#:~:text=Economic%20impact%20of%20AI,fostering%20AI%20and%20ML%20startups.> accessed 21 December 2024.

[86] ibid.

[87] Kennedys Law (n 52).

[88] Neven Dujmovic, ‘Regulatory Sandboxes: A Safe Harbour for AI Innovation’ (2024) <https://www.linkedin.com/pulse/regulatory-sandboxes-safe-harbor-ai-innovation-neven-dujmovic-dcv5f/> accessed 21 December 2024.

[89] Christopher Woolard CBE, ‘AI and Financial Stability: A Balancing Act’ (2024) <https://www.linkedin.com/pulse/ai-financial-stability-balancing-act-christopher-woolard-cbe-ecjfe/> accessed 20 December 2024.

[90] Bennear (n 82).

[91] CGTN, ‘AI: The Double-Edged Sword – The Agenda in Full’ (2024) <https://newseu.cgtn.com/news/2024-03-02/AI-The-Double-Edged-Sword-The-Agenda-in-Full-1rAUugkTN1S/index.html> accessed 18 December 2024.

[92] Noguer I Alonso (n 3).

[93] Mark Carney, ‘Looking Back, Moving Forward – Canada and Global Financial Reform’ (The International Centre for Monetary and Banking Studies, Geneva, 9 November 2010) <https://www.bis.org/review/r101111a.pdf> accessed 21 December 2024.

[94] ibid.