For decades, financial forecasting models were treated primarily as technical tools—evaluated on historical performance and predictive accuracy. A model that performed well in backtests earned trust; one that failed prompted explanations, often after losses had already occurred. Today, that approach is being fundamentally redefined.
The EU Artificial Intelligence Act (EU AI Act) signals a profound shift in how artificial intelligence (AI) is governed in financial decision-making. No longer simply analytical instruments, AI models are now treated as regulated decision-making systems. This reframing introduces a new standard for financial risk governance: success is no longer measured solely by accuracy, but by an institution’s ability to explain, justify, and defend the decisions produced by its models, even under stress or regulatory scrutiny.

As Dr Efstathios Polyzos, Associate Dean and Associate Professor at the College of Interdisciplinary Studies, observes: “Purely data-driven statistical performance is a bad decision-maker. A model might perform well statistically, but if we cannot explain it or justify it, regulators will not accept it—and rightly so.”

This regulatory shift matters far beyond Europe. Financial markets are globally interconnected, and standards developed in one jurisdiction increasingly shape operational expectations worldwide. For institutions in Abu Dhabi, Dubai, and across the Gulf region, EU governance principles are influencing how AI is deployed locally. The key transformation is clear: it is not the technology that is changing but who is held accountable when models fail.
This is the fourth in a series of blog articles exploring emerging trends shaping the finance sector in the UAE.
In a recent interview with Dr Efstathios Polyzos, we explore how the EU AI Act reframes financial forecasting models as critical infrastructure rather than optional tools, examines what high-risk AI means for financial institutions, highlights the new accountability requirements for model governance, and discusses how these changes impact the skills financial leaders need to succeed.
Financial Models as Infrastructure
A central feature of the EU AI Act is its risk-based classification system for artificial intelligence applications. Certain uses of AI, particularly those that influence economically significant decisions, are categorized as high-risk systems. In finance, this classification captures models that shape lending decisions, risk management, and market forecasts. Credit scoring systems, liquidity forecasting tools, stress-testing models, and early warning systems fall within this category. Historically, these tools were treated as analytical aids designed to support decision-making. Under the new regulatory framework, they are increasingly recognized as systems that directly influence capital allocation, institutional resilience, and financial stability.
As Dr Polyzos explains: “Being classified as high-risk is not a penalty. It simply recognizes that these systems have significant influence and therefore must be governed responsibly.” High-risk designation does not prohibit the use of AI. Instead, it establishes governance requirements. Institutions must demonstrate how models are built, how they are monitored, and how human oversight can intervene when necessary. In practice, this elevates forecasting models to something closer to financial infrastructure. They are no longer optional analytical tools but operational systems with systemic consequences.
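The oversight requirement described above can be made concrete. Below is a minimal, hypothetical sketch (not drawn from the Act itself) of what "human oversight can intervene when necessary" might look like in a credit-scoring workflow: confident model outputs are applied automatically, while borderline cases are routed to a human reviewer. The thresholds and field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    score: float   # model's estimated probability of default, assumed in [0, 1]
    outcome: str   # "approve", "reject", or "human_review"

def decide(applicant_id: str, default_probability: float,
           reject_above: float = 0.60, approve_below: float = 0.20) -> Decision:
    """Apply the model only where it is confident; escalate borderline cases."""
    if default_probability >= reject_above:
        outcome = "reject"
    elif default_probability <= approve_below:
        outcome = "approve"
    else:
        # The human-oversight requirement in practice: the system does not
        # decide alone when the evidence is ambiguous.
        outcome = "human_review"
    return Decision(applicant_id, default_probability, outcome)
```

The design choice here is that intervention is built into the decision path itself, rather than bolted on as after-the-fact review.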
Why the Regulation Matters in the Gulf
Although the EU AI Act is European legislation, its influence extends globally. Financial institutions in the Gulf Cooperation Council operate within international financial networks that rely heavily on European technology vendors, cross-border financial flows, and regulatory alignment. When AI systems used by European vendors must comply with EU governance standards, those requirements inevitably shape the broader technological ecosystem. As Dr Polyzos notes: “Global finance does not operate in isolated silos. Even when institutions in the region are not directly regulated by European authorities, the systems they rely on may already be operating under those standards.” A similar dynamic occurred with the General Data Protection Regulation (GDPR), which began as European privacy legislation but soon became a global benchmark for data governance. The EU’s approach to AI regulation appears to be following a comparable trajectory.
From Performance to Legitimacy
One of the most significant implications of the new regulatory framework is how it redefines what makes a financial model acceptable. Traditionally, models were judged primarily by their predictive accuracy. If a system produced reliable forecasts, it was considered effective. Under the EU AI Act, performance alone no longer guarantees legitimacy. High-risk AI systems must demonstrate several additional characteristics, including:
- Transparency: design choices, data sources, and known limitations must be documented
- Traceability: individual decisions must be reconstructable and auditable
- Human oversight: institutions must be able to intervene in or override model outputs
- Robustness: performance must remain reliable under stress and changing conditions
This reflects a broader shift in how artificial intelligence is evaluated. In regulated environments, accountability and transparency are becoming as important as efficiency. Dr Polyzos draws a comparison with academic research: “In academic work, everything must be documented so that others can replicate and evaluate the results. The EU AI Act effectively brings that same level of rigor into financial modelling.” In this sense, AI models are no longer judged solely by their outputs but by whether they can withstand institutional and regulatory scrutiny.
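The documentation discipline Dr Polyzos compares to academic replication can be sketched in code. The fragment below is a hypothetical illustration, not a prescribed format: each model run records its version, parameters, stated assumptions, and a fingerprint of its training data, so that a reviewer could later reconstruct and evaluate the result. All field names are assumptions for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

def model_record(name: str, version: str, training_rows: list,
                 params: dict, assumptions: list) -> dict:
    """Build an auditable record of a model run for later replication."""
    data_blob = json.dumps(training_rows, sort_keys=True).encode()
    return {
        "model": name,
        "version": version,
        "trained_at": datetime.now(timezone.utc).isoformat(),
        # A hash of the training data makes the exact data snapshot traceable
        # without storing the data inside the record itself.
        "data_sha256": hashlib.sha256(data_blob).hexdigest(),
        "parameters": params,
        "stated_assumptions": assumptions,
    }

record = model_record(
    "liquidity_forecast", "2.3.1",
    [{"date": "2024-01-31", "net_outflow": 1.2}],
    {"horizon_days": 30, "alpha": 0.05},
    ["flows are stationary over the lookback window"],
)
```

The point is not the specific fields but the habit: every modelling decision leaves a record that can withstand scrutiny.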
Lessons From Financial Crises
The regulatory philosophy behind the EU AI Act reflects the institutional memory of previous financial disruptions. During the 2008 Global Financial Crisis, many widely used risk models failed when economic conditions diverged from historical assumptions. More recently, the COVID-19 pandemic exposed similar weaknesses in predictive systems that struggled to account for unprecedented economic shocks. As Dr Polyzos explains: “These blind spots contributed to systemic losses, and in many cases those losses ultimately fell on public finances and taxpayers.” The Act responds by requiring stronger model validation, traceability of decisions, and explicit acknowledgment of uncertainty. These requirements reinforce the principle that model risk should be treated as a governance issue rather than a purely technical challenge.
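The requirements named above, validation, traceability, and explicit acknowledgment of uncertainty, can be illustrated with a small sketch. This is a hypothetical example under simplifying assumptions: the forecast is a naive mean, the interval assumes roughly normal forecast errors (hence the 1.96 multiplier), and the decision is logged alongside the interval that justified it.

```python
import statistics

def forecast_with_interval(history: list) -> dict:
    """Report an interval, not just a point estimate (uncertainty made explicit)."""
    point = statistics.fmean(history)                      # naive point forecast
    stderr = statistics.stdev(history) / len(history) ** 0.5
    return {"point": point,
            "lower": point - 1.96 * stderr,
            "upper": point + 1.96 * stderr}

audit_log = []  # traceability: every decision is recorded with its rationale

def decide_and_log(history: list, stress_limit: float) -> str:
    f = forecast_with_interval(history)
    # Decide on the conservative bound of the interval, not the point
    # estimate, and keep a record of the forecast that drove the action.
    action = "raise_buffer" if f["upper"] > stress_limit else "hold"
    audit_log.append({"forecast": f, "limit": stress_limit, "action": action})
    return action
```

A regulator reviewing `audit_log` could see not only what the institution did, but what the model believed, and how uncertain it was, at the moment of decision.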
Accountability Cannot Be Outsourced
Another important implication of the new regulatory framework concerns third-party technology providers. Financial institutions frequently rely on external vendors for analytical models and software platforms. Under the EU AI Act, however, purchasing a model does not transfer responsibility for its outcomes. Institutions must conduct due diligence, review documentation, and monitor performance continuously. In other words, they remain accountable for how AI systems are deployed within their operations. As Dr Polyzos notes: “Regulators will not ask what the developer did. They will ask what the institution did to verify compliance before using the model.” This approach significantly reshapes the relationship between financial institutions and technology providers.
The Skills Financial Leaders Now Need
Perhaps the most important lesson emerging from this regulatory shift is that governing artificial intelligence in finance requires more than technical expertise. Financial professionals must be able to interpret regulatory frameworks, understand modelling limitations, challenge assumptions, and explain analytical decisions to regulators and stakeholders. These capabilities require advanced analytical judgment and interdisciplinary knowledge. At Zayed University, the Master of Science in Finance reflects this evolving landscape. The program integrates financial forecasting, machine learning, and model governance into a comprehensive analytical framework. Students are trained not only to build models but to explain their assumptions, test their robustness, and defend their conclusions in regulated environments.
Regulation as a Governance Upgrade
The EU AI Act is often described as a constraint on innovation. In practice, it functions more as a governance upgrade. Institutions that treat it merely as a compliance obligation may struggle. Those that embrace the Act’s principles can strengthen credibility, resilience, and trust within global financial networks. As AI governance standards become international reference points, the distinction between financial expertise and regulatory literacy is narrowing. Future financial leaders will be judged not simply by the sophistication of their models but by how responsibly those models shape decisions. For financial institutions across the Gulf, the question is no longer whether European AI governance standards matter. The real question is whether they are prepared to operate in a financial system where AI-driven decisions must be explainable, auditable, and accountable. According to Dr Polyzos, that system is already emerging.
Conclusion: AI Governance as the New Standard in Global Finance
In this evolving regulatory landscape, the role of financial models is being fundamentally redefined. Systems once valued primarily for predictive accuracy are now evaluated through a broader lens that includes transparency, accountability, and governance. The EU AI Act reflects a wider transformation in global finance: decisions shaped by artificial intelligence must not only be effective but also explainable and defensible. For financial institutions operating within interconnected markets, including those across the Gulf region, this shift signals a new era in which responsible model governance is inseparable from financial leadership. Preparing professionals who can navigate this complexity—through rigorous analysis, ethical judgment, and regulatory awareness—will be essential for sustaining trust and resilience in an increasingly AI-driven financial system.
Interested in understanding how AI, financial forecasting, and governance intersect in modern financial systems? Explore the Master of Science in Finance at Zayed University and discover how the program prepares the next generation of financial leaders to build responsible and resilient financial models. Contact the College of Business at +971-2-599-3605 / dgs.recruitment@zu.ac.ae
Frequently Asked Questions
1. What is the EU Artificial Intelligence Act?
The EU Artificial Intelligence Act (EU AI Act) is the first comprehensive legal framework regulating artificial intelligence across the European Union. Adopted by the European Parliament and the Council of the EU, and enforced by national authorities in coordination with the European Commission, the Act establishes binding rules for AI systems placed on or used within the EU market.
It applies to:
- Providers that develop AI systems and place them on the EU market
- Deployers that use AI systems within the EU
- Importers and distributors of AI systems
- Providers and deployers outside the EU whose systems' outputs are used within the EU
2. Who enforces the EU AI Act and how does it affect non-EU organizations?
Enforcement is coordinated by:
- The European AI Office, established within the European Commission, particularly for general-purpose AI models
- National market surveillance authorities in each member state
- The European Artificial Intelligence Board, which promotes consistent application across the EU
The Act applies to AI systems deployed inside or outside the EU that affect the EU market. Organizations must:
- Determine whether their systems fall within the Act's risk categories
- Meet the obligations attached to that classification before placing systems on the EU market
- Maintain documentation and ongoing monitoring sufficient to demonstrate compliance
As a result, the EU AI Act is becoming a global benchmark for AI governance, influencing operations far beyond Europe.
3. How does the EU AI Act regulate general-purpose and generative AI?
General-purpose AI systems and generative AI models are subject to:
- Transparency obligations, including technical documentation and information for downstream providers
- Publication of a summary of the content used for training, and compliance with EU copyright law
- Additional obligations for models posing systemic risk, such as model evaluation, adversarial testing, and incident reporting
Generative AI deployed at scale is recognized as a potential source of systemic risk. Providers, including start-ups and SMEs, must adopt trustworthy AI practices and cooperate with the European AI Office.
4. What are the EU AI Act’s risk classification and compliance requirements?
Risk Classification of AI Systems
- Unacceptable risk: practices prohibited outright, such as social scoring by public authorities
- High risk: systems that significantly affect safety or fundamental rights, including credit scoring
- Limited risk: systems subject mainly to transparency obligations, such as chatbots
- Minimal risk: systems with no specific obligations under the Act
Compliance Requirements for High-Risk AI
High-risk AI systems must comply with:
- A documented risk management system maintained across the system's lifecycle
- Data governance and quality requirements for training, validation, and testing data
- Technical documentation and automatic record-keeping (logging)
- Transparency and provision of information to deployers
- Human oversight measures
- Appropriate levels of accuracy, robustness, and cybersecurity
5. What are high-risk AI systems?
High-risk AI systems are those with the potential to significantly affect:
- The health, safety, or fundamental rights of individuals
- Access to essential services, including credit and insurance
- Outcomes in employment, education, law enforcement, and the administration of justice
High-risk AI must comply with strict governance requirements: human oversight, transparency, risk management, and fundamental rights impact assessments. National authorities enforce compliance within the EU.
6. What are the penalties for non-compliance?
The EU AI Act enforces significant financial penalties:
- Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited AI practices
- Up to €15 million or 3% of global annual turnover for violations of most other obligations, including high-risk requirements
- Up to €7.5 million or 1% of global annual turnover for supplying incorrect or misleading information to authorities
7. How does the EU AI Act integrate with other EU digital regulations?
The EU AI Act works alongside:
- The General Data Protection Regulation (GDPR), governing data protection
- The Digital Services Act (DSA), governing platform accountability
- The Digital Markets Act (DMA), governing market fairness
Together, these laws create a coordinated digital and AI regulatory framework, addressing AI governance, data protection, platform accountability, and market fairness across the EU.
8. Why does the EU AI Act matter for global AI governance?
Even outside the EU, organizations interacting with European markets or technology ecosystems must align with EU AI standards. This sets a global benchmark, shaping how AI is developed, deployed, and governed worldwide, particularly in sectors like finance, healthcare, and critical infrastructure.