Voices from Abroad | The Intelligent Finance Revolution: How Is AI Transforming the Financial Industry?
2024-12-13

Overview

Information processing and data aggregation are core tasks of the modern financial system, coordinating the behavior of economic participants through price signals. In the process, the financial system not only facilitates the efficient flow of capital but also preserves the overall health of the economy through risk management, the provision of liquidity and support for stability. Well-functioning financial markets and intermediaries are a fundamental source of economic progress and social welfare, and the role of financial policy and regulation is to correct systemic problems and harness the intelligence of the financial system to enhance social welfare.

With advances in information processing technology, and in particular the development of AI, the financial industry has changed markedly. Successive iterations of AI, including machine learning (ML), generative AI (GenAI) and AI agents, bring the industry a wealth of opportunities as well as challenges. AI has significantly improved the financial system's capabilities in information processing, data analysis, pattern recognition and prediction. Early rule-based systems, for example, were used for automated trading and fraud detection. As the technology progressed, machine learning and deep learning models became widely applied to asset pricing, credit scoring and risk analysis. Although GenAI is still at an early stage, it has already been integrated into the modern financial system through back-end processing, customer support and compliance.

AI also brings many challenges, however. Complex machine learning models lack transparency, and heavy reliance on large volumes of data can give rise to a range of problems concerning consumer privacy, cybersecurity and algorithmic bias. By increasing dependence on data and computing power, GenAI exacerbates these challenges. Moreover, because GenAI models tend to be produced by a handful of dominant firms, they may also raise concerns about market concentration and competition.

The application of AI in the financial system may also amplify systemic risk. Early rule-based computer trading systems triggered cascade and herding effects, as in the 1987 US stock market crash. With machine learning models these risks become more complex, involving model uniformity, network interconnectedness and poor explainability. For regulators, the use of advanced AI poses a further challenge: complex interactions and inherent lack of explainability make market manipulation or threats to financial stability hard to detect in time. At the same time, GenAI co-pilots and robo-advisors may homogenize decision-making, further exacerbating systemic risk.

The application of AI in the financial system affects not only the financial industry itself but also generates important spillovers to the real economy. In an optimistic scenario, the widespread use of AI raises productivity and produces benign economic effects. In the short run, AI delivers significant productivity gains without major disruption to labor markets; existing research points to a positive relationship between AI adoption and firm productivity. AI can be viewed as a positive productivity shock with differentiated effects across sectors. Its use is disinflationary in the short run, because households and firms may not fully anticipate AI's impact on the economy; in the long run, however, the effect is inflationary regardless of how economic agents form expectations. Widespread adoption of AI could thus ease inflationary pressures in the short term and support central banks in bringing inflation back to target, but over the medium to long term inflation may rise as AI stimulates demand, forcing central banks to tighten policy to restrain it. Over the long run, the boost from AI could offset some of the long-term headwinds to growth, such as population ageing, reshoring and shifts in global supply chains, geopolitical tensions and political fragmentation. Higher output would strengthen the economy's ability to service debt, with positive implications for debt sustainability. The revaluation of financial assets driven by productivity gains would also support this process, provided that rising borrowing costs do not outweigh the growth effect.

Even this scenario, however, poses challenges for the financial system. Job losses from AI-driven automation could alter consumer spending patterns and impair the ability of consumers and firms to repay loans. Chains of defaults could then feed back into the financial system, which itself must support the reallocation of resources that such displacement entails.

In an alternative, more disruptive scenario, rapid AI progress could cause major upheaval. Some AI experts predict that highly intelligent autonomous AI agents could reach the level of artificial general intelligence (AGI) within a decade, automating almost all tasks currently performed by humans. Even if new tasks emerged, these machines might be just as good at them, disrupting existing labor markets on a massive scale. In this case, output would grow exponentially, no longer constrained by the scarcity of labor. Firms and labor markets would be severely disrupted, and the value of labor in particular would fall sharply. OpenAI CEO Sam Altman, for example, recently suggested that we may soon see trillion-dollar companies with no (human) employees rapidly capturing certain lines of business; the winner-takes-all nature of digital technologies could reinforce this dynamic. If rapid AI progress significantly accelerates economic growth and price increases, interest rates could rise sharply. Such a surge could lead to a severe deterioration in credit quality and widespread defaults, putting serious pressure on the balance sheets of financial institutions. Moreover, if labor markets, the main source of government revenue, were hollowed out, governments could face a marked drop in tax receipts, calling the sustainability of their debt into question.

The development of AI affects not only advanced economies: it could also open up an "intelligence divide" that leaves other countries behind, inflicting severe terms-of-trade losses. All of these disruptions to the real economy could in turn fuel political discontent and instability, further undermining financial stability.

Addressing AI's transformative impact on the financial system requires a comprehensive regulatory framework. Such a framework should rest on principles such as transparency, accountability, privacy protection and safety, and should guide the design, training, testing, deployment and long-term management of AI systems. To balance AI innovation against risk management, regulators need a forward-looking approach that anticipates potential problems and addresses them before they escalate. This may include assessing the systemic, national security and societal risks of AI models. International cooperation is especially important for AI governance: globally consistent governance rules and risk assessment methods are key to ensuring that AI is applied safely and responsibly. Such cooperation helps build trust, facilitates cross-border AI applications, and allows global challenges such as privacy, security and equitable access to be tackled effectively.

Future regulatory frameworks should pay attention to the interconnections between AI progress and the broader economy, considering a range of scenarios to ensure inclusive economic growth and stability. Continued research and empirical analysis are essential for a deeper understanding of AI's impact on the financial system and for guiding policy decisions.

Author | Monetary and Economic Department (MED), Bank for International Settlements

Original English text (excerpted below; see the QR code at the end for the full paper):

Intelligent financial system: how AI is transforming finance

by I Aldasoro (BIS), L Gambacorta (BIS & CEPR), V Shreeti (BIS),

A Korinek (Univ. of Virginia & GovAI), M Stein (Univ. of Oxford)

Monetary and Economic Department

June 2024

BIS Working Papers No 1194

Section 2: Decoding Artificial Intelligence

The evolution of the financial system has gone hand in hand with the evolution of information processing technology. To understand the implications of AI for finance it is therefore helpful to examine the historical development of computational methods in tandem with concurrent developments in money and finance. Advances in computational hardware and software have enabled the evolution of advanced analytics, machine learning, and generative AI. At each technological turn in the past, the financial system has been either a catalyst of change or an early adopter of technology.

The origins of computation can be traced back to ancient Sumerians and the abacus, the first known computing device. This was one of the earliest instances of numerical systems being crafted to address financial needs. Laws have also been driven by the changing needs of commerce and finance: the Code of Hammurabi, one of the earliest legal edicts, laid out laws to govern financial transactions as early as the 18th century BCE. Similarly, medieval Italian city-states pioneered double-entry bookkeeping, a seminal development in accounting that opened the door to an unprecedented expansion of commerce and finance. In fact, double-entry bookkeeping underpins taxation, governance, contract law, and financial regulation to this day.

Computation. Over time, analytic tools saw tremendous advances at an increasing pace. One of the most significant of these advances occurred in the last century: the invention of computers. Unsurprisingly, the financial sector was among the first to adopt and use computers. For example, the IBM 650, introduced in 1954, became popular partly because of the efficiency improvements it brought in finance. In the early days of modern computing, capabilities were limited to basic arithmetic, logical and symbolic operations (for example, following “if-then” rules) to solve problems. With more computing power, analytic capabilities evolved and allowed AI to emerge from basic computer systems.

Artificial Intelligence. AI broadly refers to computer systems that perform tasks that typically require human intelligence (Russell and Norvig, 2010). Alan Turing and John von Neumann laid the theoretical groundwork, delineating principles that would become the cornerstone for subsequent computational and AI advancements. For much of the 20th century, AI was dominated by GOFAI (“good old-fashioned AI”) and expert systems that were developed in the wake of these seminal contributions. GOFAI emerged in the late 1950s and continued to be the dominant paradigm through the 1980s. During this period, AI researchers focused on developing rule-based systems to emulate human intelligence, based on logical rules and symbolic representations. While highly useful for basic financial functions (e.g. risk management, basic algorithmic trading rules and credit scoring, fraud detection), they were far from human-level abilities in pattern recognition, handling uncertainty and complex reasoning. Hardware advances enabled small desktop computers, such as the personal computer in the 1980s and 1990s. The ability to store data and perform basic analytics using spreadsheets and other computer programs led to wide adoption and efficiency improvements in finance.
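To give a concrete flavor of what such rule-based systems looked like, the sketch below shows a minimal "if-then" fraud screen; the thresholds and field names are made-up assumptions for illustration, not any institution's actual rules.

```python
# Minimal sketch of a GOFAI-style rule-based fraud screen.
# All thresholds and field names are illustrative assumptions.

def flag_transaction(txn: dict) -> bool:
    """Return True if any hand-written rule fires."""
    rules = [
        txn["amount"] > 10_000,                        # unusually large transfer
        txn["country"] not in txn["usual_countries"],  # unfamiliar location
        txn["hour"] < 6 and txn["amount"] > 1_000,     # large night-time payment
    ]
    return any(rules)

txn = {"amount": 12_500, "country": "XX", "usual_countries": {"US", "GB"}, "hour": 3}
print(flag_transaction(txn))  # True: the first and third rules fire
```

Each rule is transparent on its own, but the system knows only what has been explicitly encoded, which is one reason such screens required constant human maintenance.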

Machine Learning. The next wave of progress came with machine learning (ML), a sub-field of AI. ML algorithms can autonomously learn and perform tasks, for example classification and prediction, without explicitly spelling out the underlying rules. Like earlier advances in information processing, ML was quick to be adopted in finance, even though in the early days, its usefulness was limited by computing power. Early examples of ML relied on large quantities of structured and labelled data.
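As a minimal sketch of this kind of supervised learning on structured, labelled data, the example below fits a simple classifier to synthetic loan records; the features, the data-generating process and the choice of model are assumptions made purely for exposition.

```python
# Sketch: supervised ML on structured, labelled data (synthetic credit example).
# Data, features and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
income = rng.normal(50, 15, n)        # thousands per year
debt_ratio = rng.uniform(0, 1, n)     # debt-to-income
late_payments = rng.poisson(1.0, n)

# Synthetic default flag: the rule is never spelled out to the model, it must be learned.
default = (3.0 * debt_ratio - 0.02 * income + 0.5 * late_payments
           + rng.normal(0, 1, n)) > 1.5

X = np.column_stack([income, debt_ratio, late_payments])
X_train, X_test, y_train, y_test = train_test_split(X, default, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

Unlike the hand-written rules in the earlier sketch, the decision boundary here is estimated from the labelled examples themselves.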

The most advanced ML systems are based on deep neural networks, which are algorithms that operate in a manner inspired by the human brain. Deep neural networks are universal function approximators that can learn systematic relationships in any set of training data, including increasingly in complex, unstructured datasets. These developments enabled financial institutions to analyze terabytes of signals including news streams and social media sentiment. At an aggregate level, this led to increasingly fast-paced and dynamic markets, with optimized pricing and valuation. However, as these models dynamically adapt to new data, often without human intervention, they are somewhat opaque in their decision-making processes.

Generative AI. For the past 15 years, i.e., since the beginning of the deep-learning era, the computing power used for training the most cutting-edge AI models has doubled every six months – much faster than Moore’s law would suggest. These advances have given rise to rapid progress in artificial intelligence and are behind the advent of the recent generation of GenAI systems, which are capable of generating data. The most important type of GenAI are Large Language Models (LLMs), best exemplified by systems like ChatGPT, that specialize in processing and generating human language.
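A quick back-of-the-envelope calculation, using only the doubling rates cited above (six months for frontier training compute, roughly two years for Moore's law), illustrates how large the gap becomes over 15 years:

```python
# Compute growth implied by the doubling rates cited above (illustrative arithmetic only).
years = 15
ai_doublings = years / 0.5      # doubling every six months
moore_doublings = years / 2     # Moore's law taken as roughly a doubling every two years

print(f"AI training compute: ~2^{ai_doublings:.0f} = {2**ai_doublings:,.0f}x")
print(f"Moore's law:         ~2^{moore_doublings:.1f} = {2**moore_doublings:,.0f}x")
```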

LLMs are trained on enormous amounts of data to predict the continuation of text based on its beginning, for example, to predict the next word in a sentence. During their training process, they learn how different words and concepts relate to each other, allowing them to statistically associate concepts and to develop what many interpret as a rudimentary form of understanding. Drawing from this simple but powerful principle, LLM-based chatbots can generate text based on a starting point or a “prompt”. A leading explanation for the capacity of modern LLMs to produce reasonable content across a wide range of domains is that the training process leads such models to generate an internal representation of the world (or “world model”) based on which they can respond to a wide variety of prompts.
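The next-word objective can be illustrated with a toy counting model; actual LLMs rely on transformer neural networks rather than a lookup table, so the sketch below, built on a made-up corpus, demonstrates only the training objective itself.

```python
# Toy illustration of the next-word objective; real LLMs use neural networks,
# not a bigram table like this. The "corpus" is made up.
from collections import Counter, defaultdict

corpus = "the bank raised rates . the bank raised fees . the fund cut rates .".split()

following = defaultdict(Counter)          # "training": count what follows what
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("bank"))    # 'raised' (seen twice after 'bank')
print(predict_next("the"))     # 'bank' (seen twice, vs 'fund' once)
```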

The use cases of LLMs have blossomed across many sectors. LLMs can generate, analyze and categorize text, edit and summarize, code, translate, provide customer service, and generate synthetic data. In the financial sector, they can be used for robo-advising, fraud detection, back-end processing, enhancing end-customer experience, and internal software and code development and harmonization. Regulators around the world are also exploring applications of GenAI and LLMs in the areas of regulatory and supervisory technologies.

The different iterations of AI described above can be seen as a continuous process of increasing both the speed of information processing in finance and the ability to include more types of information in decision-making. At present, AI has an advantage over humans in areas with fast feedback cycles (“reward loops”) to calibrate its decision-making, high degrees of digitization of relevant data and large quantities of data. For these reasons, autonomous computer systems are currently deployed mainly in areas that fit these characteristics, for example, high-frequency trading. With increasing capabilities, over time, autonomous computer systems might also be at an advantage in medium-term and long-term markets (e.g. short term derivatives and bonds respectively), as well as in other applications.

AI Agents. The next frontier on which leading AI labs are currently working is AI agents, i.e., AI systems that build on advanced LLMs such as GPT-4 or Claude 3 and are endowed with planning capabilities, long-term memory and, typically, access to external tools such as the ability to execute computer code, use the internet, or perform market trades. Autonomous trading agents have been deployed in specific parts of financial markets for a long time, for example, in high-frequency trading. What distinguishes the emerging generation of AI agents is that they have the intelligence and planning capabilities of cutting-edge LLMs. They can, for example, autonomously analyze data, write code to create other agents, trial-run it, update it as they see fit, and so on. AI agents thus have the potential to revolutionize many different functions of financial institutions, just as autonomous trading agents have already transformed trading in financial markets.
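A highly stylized sketch of the plan-act-remember loop underlying such agents is shown below; `llm_plan` and the tool functions are hypothetical placeholders standing in for an LLM call and for external tools, not real APIs.

```python
# Stylized agent loop: plan with an LLM, act with tools, remember, repeat.
# llm_plan() and the tools are hypothetical placeholders, not real APIs.

def llm_plan(goal, memory):
    """Placeholder for a call to an LLM that returns the next action."""
    return {"tool": "fetch_prices", "args": {"ticker": "XYZ"}} if not memory else {"tool": "stop"}

def fetch_prices(ticker):
    """Placeholder external tool (e.g., a market-data lookup)."""
    return {"ticker": ticker, "last": 101.3}

TOOLS = {"fetch_prices": fetch_prices}

def run_agent(goal, max_steps=5):
    memory = []                                   # long-term memory of past steps
    for _ in range(max_steps):
        action = llm_plan(goal, memory)           # planning step
        if action["tool"] == "stop":
            break
        observation = TOOLS[action["tool"]](**action["args"])  # tool use
        memory.append((action, observation))      # store what happened
    return memory

print(run_agent("summarize recent price action for XYZ"))
```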

Artificial General Intelligence. For several of the leading AI labs, the ultimate goal is the development of Artificial General Intelligence (AGI), which is defined as AI systems that can essentially perform all cognitive tasks that humans can perform. Unlike current narrow AI systems, which are designed to perform specific tasks with a pre-defined range of abilities, AGI would be capable of reasoning, problem-solving, abstract thinking across a wide variety of domains, and transferring knowledge and skills across different fields, just like humans. Relatedly, some AI researchers and AI lab leaders speak of Transformative AI (TAI), which is defined as AI that is sufficiently capable so as to radically transform the way our economy and our society operate, for example, because they can autonomously push forward scientific progress, including AI progress, at a pace that is much faster than what humans are used to, or because they significantly speed up economic growth. There is an active debate on whether and how fast concepts such as AGI or TAI may be reached, with strong views on both sides of the debate. As economists, we view it as prudent to give some credence to a wide range of potential scenarios for the future.

Section 3: AI Transforming Finance

3.1 Opportunities and Challenges of AI for Finance

The integration of the rapidly evolving capabilities of AI is transforming the financial system. But as we have seen in Section 2, AI is just the latest information processing technology to do so. Table 1 summarizes the impact of the technologies we described earlier, from traditional analytics to AI Agents, on four key financial functions: financial intermediation, insurance, asset management and payments.

Traditional Analytics. Early rule-based systems were adopted in financial intermediation and insurance markets to automate risk analysis. In asset management, they allowed for automated trading and the emergence of new products like index funds. In payments, they automated a significant part of the infrastructure and were also useful for fraud detection. While these models were generally easy to interpret, they were also rigid and required significant human supervision. They typically had a small number of parameters – a key limitation in their effectiveness.

Moreover, the automation of information processing requires large volumes of data, which comes with its own challenges. In the financial sector, for example, the data involved are often sensitive, personal data. Ensuring that data are collected, stored, and processed in compliance with privacy laws (such as GDPR) is a complex challenge.

Machine Learning. Advances in ML unlocked a new range of applications of AI in finance. Whereas earlier generations of computational advances relied on processing numbers, ML can process a wide range of data formats. Kelly et al. (2023) identify three factors intrinsic to finance that make the use of ML particularly relevant. First, expected prices or predictions of prices are central to the analysis of financial markets. Second, the set of relevant information for prediction analysis is typically very large and can be challenging to incorporate in traditional models. Third, the analysis of financial markets can critically depend on underlying assumptions of functional forms, over which agreement is often lacking. Machine learning models can be powerful in this context, as they can incorporate vast amounts of data (and thus, information sets) and are based on flexible, non-parametric functional forms. Owing to these benefits, ML models have been widely applied in finance and economics.
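To illustrate the "flexible, non-parametric functional forms" point, the sketch below fits a tree-based model to synthetic predictors without specifying any functional form in advance; the data-generating process and the model choice are assumptions made up for the example.

```python
# Sketch: non-parametric return prediction on synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2_000
value, momentum, sentiment = rng.normal(size=(3, n))

# Assumed (and unknown to the model) non-linear relationship plus noise.
ret = 0.5 * np.tanh(value) - 0.3 * momentum * sentiment + rng.normal(0, 0.5, n)

X = np.column_stack([value, momentum, sentiment])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:1500], ret[:1500])

print("out-of-sample R^2:", round(model.score(X[1500:], ret[1500:]), 3))
```

No functional form is imposed on how the predictors map into returns; the trees approximate whatever relationship is present in the training sample.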

Machine learning has a range of use cases across the four economic functions we consider. In financial intermediation, the use of ML models can reduce credit underwriting costs and expand access to credit for those previously excluded, although few financial institutions have taken advantage of the full range of these opportunities. ML models can also streamline client onboarding and claims processing in several industries, particularly in insurance. Across industries, but especially in insurance and payments, ML models are used to detect fraud and identify security vulnerabilities.

ML is also heavily used in asset pricing, in particular to predict returns, to evaluate risk return trade-offs, and for optimal portfolio allocation. Thanks to their ability to analyze large volumes of data relatively quickly, ML models also facilitate algorithmic trading. In payments, ML models can provide new tools for better liquidity management. Finally, not only the private financial sector benefits from ML: these models are also increasingly used by regulators to detect market manipulation and money laundering.

The opportunities created by ML models also come with risks and challenges. The flip side of flexible, highly non-linear machine learning models is that they often function like black boxes. The decision process of these models – for example whether or not to grant credit – can be opaque and hard to decipher.

Generative AI, mostly in the form of LLMs, is part of the new frontier and comes with its own set of opportunities. Two key aspects of GenAI are particularly useful for the financial sector. First, whereas earlier computational advances have made the processing of traditional financial data more efficient, GenAI allows for increased legibility of new types of (often unstructured) data, which can enhance risk analysis, credit scoring, prediction and asset management. Second, GenAI provides machines the ability to converse like humans, which can improve back-end processing, customer support, robo-advising and regulatory compliance. Moreover, it also allows for the automation of tasks that were until recently considered uniquely human, for example, advising customers and persuading them to buy financial products and services.

The financial industry has already started adopting GenAI. OECD (2023) provides several recent examples: Bloomberg recently launched a financial assistant based on a finance-specific LLM, and the investment banking division of Goldman Sachs uses LLMs to provide coding support for in-house software development. Several other companies use GenAI to provide financial advice to customers and help with expense management, as well as through co-pilot applications.

Despite these potential benefits and growing adoption, LLMs also create new risks for the financial sector. They are prone to “hallucinations”, i.e., to generating false information as if it were true. This can be especially problematic for customer-facing applications. Moreover, as algorithms become more standardized and are uniformly used, the risk of herding behavior and procyclicality grows.

There are also concerns about market concentration and competition. GenAI is fed by vast amounts of data and is very hungry for computing power, and this leads to a risk that it will be provided by a few dominant companies. Notably, big tech companies with deep pockets and unparalleled access to compute and data are well positioned to reinforce their competitive advantage in new markets. Regulators, especially competition authorities, have also started highlighting intentional and unintentional algorithmic collusion, especially with algorithms based on reinforcement learning, with potential implications for algorithmic trading in financial markets.

The data-intensive nature of GenAI, combined with the reliance on a few (big tech) providers also exacerbates consumer privacy and cybersecurity concerns.

AI agents are AI systems that act directly in the world to achieve mid- and long-term goals, with little human intervention or specification of how to do so. While current AI agents might be limited in their planning ability, the pace of advancements might lead to more capable agents in the near future. Such AI agents come with opportunities to process novel types of information more quickly than humans and to act autonomously, e.g., for designing software or performing data analysis. AI agents could expand high-frequency information processing and autonomous action from trading to other parts of finance. For example, they could soon autonomously design, market, and sell financial products and services.

Challenges can arise in a world with an increasing adoption of AI agents in finance and in sectors affecting finance, without oversight and security measures. In the short term, these might include cybersecurity risks, fraud and unequal access due to hyper-personalized digital financial assistants; in the medium term, potential liquidity crises or a structural over-reliance on AI agents.

The case of algorithmic trading illustrates the challenges that AI agents capable of medium-term planning might pose in other environments. Correlated failures, in the form of flash crashes due to correlated autonomous actions, might take a different form in financial intermediation, asset management, insurance and payments. While algorithmic trading operates in a clearly digitized environment with precise short-term rewards, AI agents in other environments require more sophisticated reinforcement loops. These action-reward loops might be created over time, as AI agents become more and more capable of acting in unstructured, open-ended environments. Contingent upon the configuration of action-reward loops, novel risks might emerge, including the challenge of aligning them with human goals over longer time horizons.

AI agents could also pose significant systemic risks if their behavior is highly correlated, their actions are difficult to explain, oversight is missing, or their behavior is opaque or misaligned. Appendix A discusses the hypothetical influence of AI agents on a financial crisis. For example, an AI designed for efficient asset allocation might start exploiting market inefficiencies in ways that lead to increased volatility or systemic imbalances.

3.2 AI and Financial Stability

Even with limited capabilities, computational advances already had important implications for financial stability. The US stock market crash of 1987 is an illustrative example. In October 1987, stock prices in the United States declined significantly – the biggest ever one-day price drop in percentage terms. This was attributed in large part to the dynamics created by so-called portfolio insurance strategies, which relied on rule-based computer models that placed automatic sell orders when security prices fell below a pre-determined level. Initial rule-based selling by many institutions using this strategy led to cascade effects and further selling, and eventually the crash of October 1987.
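The cascade mechanism can be reproduced in a stylized simulation: many holders follow the same "sell once the price falls below my trigger" rule, their selling pushes the price lower, and the lower price trips the next set of triggers. The parameters below are illustrative assumptions, not a calibration of the 1987 episode.

```python
# Stylized cascade: identical rule-based sell triggers amplify an initial shock.
# All parameters are illustrative assumptions, not calibrated to 1987.
price = 100.0
holders = [{"trigger": 100.0 - k, "sold": False} for k in range(1, 21)]  # triggers at 99, 98, ..., 80
price_impact_per_seller = 1.2      # each forced sale pushes the price down this much

price -= 2.0                        # initial exogenous shock
history = [price]
while True:
    sellers = [h for h in holders if not h["sold"] and price <= h["trigger"]]
    if not sellers:
        break
    for h in sellers:
        h["sold"] = True
    price -= price_impact_per_seller * len(sellers)   # selling itself moves the price
    history.append(price)

print([round(p, 1) for p in history])
```

In this toy run, a 2-point exogenous shock ends up as a roughly 26-point decline once every holder's identical rule has fired, which is the essence of the cascade described above.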

Machine learning added new dimensions to financial stability concerns, mostly due to increased data uniformity, model herding and network interconnectedness. The first dimension is the reliance of ML models on similar data. Due to economies of scale and scope in data collection, often there are only a few producers of large datasets critical to train these models. If most ML applications are based on the same underlying datasets, there is a higher risk of uniformity and procyclicality arising from standardized ML models. The second dimension is “model herding”: the inadvertent use of similar optimization algorithms. The use of similar algorithms can contribute to flash crashes, increase market volatility and contribute to illiquidity during times of stress. Algorithms that react simultaneously to market signals may increase volatility and market disruptions. This problem is exacerbated when financial firms rely on the same third-party providers, which is pervasive in the AI space. A third dimension is that of network interconnectedness, which may create new failure modes.

There are also other characteristics inherent to ML models that have implications for financial stability. In particular, the black-box nature of ML models that arises due to their complexity and non-linearity makes it often hard to understand how and why such models reach a particular prediction. This lack of explainability might make it difficult for regulators to spot market manipulation or systemic risks in time.

As with other ML models, the pervasive use of GenAI will present new challenges and will likely also have consequences for financial stability. As noted earlier, one of the most powerful tools made possible by language models is the increased legibility of alternative forms of data. Compared to traditional data sources, alternative data can have shorter time series or sample sizes. Recommendations or decisions based on alternative data may therefore be biased and not generalizable. Financial or regulatory decision making based on alternative data would need to be very mindful of this limitation.

The risks arising from the use of homogeneous models highlighted above also apply to GenAI. A key application of GenAI in the financial sector is the use of LLMs for customer interactions and robo-advising. Since many of these applications are likely to rely on the same foundational models, there is a risk that the advice provided by them becomes more homogenized. This may by extension exacerbate herding and systemic risk.

Financial stability concerns that derive from the uniformity of datasets, model herding, and network interconnectedness are further exacerbated by specific characteristics of GenAI: increased automaticity, speed and ubiquity. Automaticity refers to GenAI’s ability to operate and make decisions independently, increasingly without human intervention. Speed pertains to AI’s capability to process and analyze vast amounts of data at rates far beyond human capacity, enabling decisions to be made in fractions of a second. Ubiquity highlights GenAI’s potentially widespread application across various sectors of the economy, and its integration into everyday technologies.

A number of systemic risks could arise from the use of AI agents. These agents are characterized by direct actions with no human intervention and a potential for misalignment with regard to long-term goals. The fundamental nature of the resulting risks is well-known from both the literature on financial regulation and the literature on AI alignment and control: if highly capable agents are given a single narrow goal – such as profit maximization – they blindly pursue the specified goal without paying attention to side goals that have not been explicitly spelled out but that an ethical human actor would naturally consider, such as avoiding risk shifting or preserving financial stability. Moreover, even when constraints such as satisfying the requisite financial regulations are specified, AI agents may develop a super-human ability to pursue the letter rather than the spirit of the regulations and engage in circumvention. As an early example, an LLM that was asked to maximize profits as a stock trader in a simulation engaged in insider trading even though it knew this was illegal. Moreover, when caught, the LLM lied about it. We discuss some of the broader risks in Appendix A in a thought experiment looking at how such agents could have interacted with known causes of the great financial crisis of 2008/09. As AI Agents advance towards AGI, the resulting risks would be greatly amplified.
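The narrow-objective problem can be stated almost mechanically: an agent that scores actions on expected profit alone will choose a prohibited or destabilizing action whenever it pays more, unless the constraint is encoded in the objective itself. The sketch below is a deliberately simple, made-up illustration of that point, not a model of any real trading system.

```python
# Stylized reward misspecification: a narrow objective ignores constraints
# that are not explicitly encoded. All numbers are made up for illustration.

actions = [
    {"name": "hold diversified book",   "profit": 1.0, "violates_rules": False, "systemic_harm": 0.0},
    {"name": "trade on inside info",    "profit": 5.0, "violates_rules": True,  "systemic_harm": 2.0},
    {"name": "crowd into illiquid bet", "profit": 3.0, "violates_rules": False, "systemic_harm": 4.0},
]

def naive_reward(a):
    return a["profit"]                   # the only thing the agent was told to maximize

def constrained_reward(a, penalty=100.0):
    # Constraints an ethical human would consider, made explicit in the objective.
    return a["profit"] - penalty * (a["violates_rules"] + a["systemic_harm"])

print(max(actions, key=naive_reward)["name"])        # 'trade on inside info'
print(max(actions, key=constrained_reward)["name"])  # 'hold diversified book'
```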

3.3 AI Use for Prudential Policy

As the private sector increasingly embraces AI, policymakers may find it increasingly useful to employ AI for both micro- and macroprudential regulation. In fact, they may have no choice but to resort to AI to process the large quantities of data produced by regulated financial institutions. Microprudential policy concentrates on the supervision of individual financial institutions, whereas macroprudential policy concerns itself with the supervision of the financial system as a whole. AI can be leveraged for both types of prudential policies but comes with a different set of risks in each domain.

For microprudential policy, AI might enable more sophisticated risk assessment models, improve the prediction of institutional failures and help spot market manipulation. However, the routine use of such methods is still far away. As AI is particularly adept at recognizing patterns in large volumes of data, it could be a powerful tool for supervisors to predict emerging risks for financial institutions. Moreover, GenAI in particular can be powerful for regulatory reporting and compliance by allowing automation of repetitive tasks.
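As one concrete, hypothetical example of this pattern-recognition use case, a supervisor could screen reported balance-sheet ratios for outlier institutions with an off-the-shelf anomaly detector; the synthetic data and the choice of IsolationForest below are assumptions for illustration only.

```python
# Sketch: screening supervisory ratios for outlier institutions (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# Columns: capital ratio, liquidity coverage ratio, NPL ratio (all in %), for 300 banks.
normal_banks = rng.normal(loc=[14.0, 130.0, 2.0], scale=[2.0, 15.0, 0.8], size=(297, 3))
stressed_banks = np.array([[7.5, 80.0, 9.0], [6.0, 70.0, 12.0], [8.0, 95.0, 7.5]])
reports = np.vstack([normal_banks, stressed_banks])

detector = IsolationForest(contamination=0.01, random_state=0).fit(reports)
flags = detector.predict(reports)     # -1 marks likely outliers

# With these synthetic values the stressed banks (indices 297-299) are typically flagged.
print("flagged bank indices:", np.where(flags == -1)[0])
```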

Some of the implications of AI for microprudential policy were already discussed in the previous section. The main limitations include exacerbated threats to consumer privacy, challenges arising from the black-box nature of algorithms, the risk of algorithms magnifying biases that exist in input data and exposure to sophisticated cyber attacks.

The use of AI for macroprudential policy carries a different set of difficulties. Danielsson and Uthemann (2023) identify five main challenges for traditional ML: i) data availability, ii) uniqueness of financial crises, iii) Lucas critique, iv) lack of clearly defined objectives, and v) challenges of aligning regulatory and AI objectives. Financial crises are very disruptive but, fortunately, rather rare events. Since the usefulness of ML is directly linked to the amount of data fed into models, applications in macroprudential policy may be limited and extrapolating from a few data points may lead to incorrect outcomes.

A related challenge is the uniqueness of each financial crisis. Just as this makes it difficult for humans to predict financial crises, it also makes it difficult for AI: although crises have some commonalities, each has its own specific risk factors which – while rationalizable ex post – are nearly impossible to understand ex ante. The reason is their “unknown-unknowns” characteristic. Accordingly, even if AI were able to learn from past crises, the lessons might have limited applicability for predicting the next one. Moreover, even in the cases where AI is able to generate insights from a specific crisis episode, the policy insights themselves will change the environment of decision making – the so-called Lucas critique.

Due to constraints of data and the uniqueness of financial crises, it is often challenging even for regulators to have clearly defined objectives for macroprudential policy. Objectives may be fairly broad, such as “maintaining financial stability,” which may be difficult to parse for AI.

Finally, there is a risk of misalignment, which is made worse by incomplete information on the objectives of macroprudential policy. AI and humans may have very different ways of reaching the same objective, and there is a risk that AI adopts ways that are detrimental to social welfare or out of touch with ethical or moral standards.

However, just as humans have found ways of dealing with these challenges, future advances in AI may open up new possibilities for macroprudential regulation that go beyond the limitations of traditional ML. As AI systems become more advanced, they may be able to better deal with the limited data on financial crises by learning from a much broader set of data sources, including granular data on financial transactions, news and sentiment analysis, and simulations of hypothetical crisis scenarios. They may also be able to identify more generalizable patterns of systemic risk that are robust across different types of crises. Moreover, future AI systems that can engage in counterfactual reasoning and causal inference could help regulators better understand the potential consequences of different policy interventions, even in a world where the Lucas critique applies. And as AI alignment techniques improve, it may become possible to specify clear objectives for AI systems to optimize, while ensuring they do so in a way that is consistent with human values and regulatory intent. By leveraging these advances, future AI could become a powerful tool for enhancing the speed, scope, and precision of macroprudential regulation, helping to build a more resilient and stable financial system.

QR code for the full text

Compiled by: 浦榕

Supervising editor: 崔洁

Source | Monetary and Economic Department (MED), Bank for International Settlements

Layout editor | 傅恒恒

Editors in charge | 李锦璇、蒋旭

Editor-in-chief | 朱霜霜

