The adoption of artificial intelligence (AI) continues apace. In 2024, a report published by the Bank of England (BoE) and the Financial Conduct Authority (FCA) found that three quarters of UK firms were already using AI, with a further 10% planning to follow suit in the next three years.
But while AI presents numerous opportunities for automation and process optimisation, it also comes with some significant risks. In April, a report by the BoE’s Financial Policy Committee (FPC), Financial Stability in Focus: Artificial intelligence in the financial system, highlighted a “high degree of uncertainty” over the future evolution of the technology and how it is used.
As the report states, “the complexity of some AI models – coupled with their ability to change dynamically – poses new challenges around the predictability, explainability and transparency of model outputs. And their use of very large amounts of data poses new challenges for users around ensuring the integrity of that data. The potential for market concentration in AI-related services, including vendor-provided models, is a further challenge.”
Adverse behaviours
The report notes that some firms are already using AI-based techniques at various stages of the lending process, while AI-based models are widely used by insurers to support pricing and underwriting decisions. In addition, firms that undertake algorithmic trading in highly liquid markets use AI “to help refine the predictive power of models that feed into their trading strategies”.
While there is an opportunity to use AI to inform trading and investment activities, the report identifies a number of risks that could arise as a result, including the potential for participants to take correlated positions.
There is also a risk that models could rationally exploit profit-making opportunities in a destabilising way or engage in other adverse behaviours. “For example, models might learn that stress events increase their opportunity to make profit and so take actions actively to increase the likelihood of such events,” the report notes. Likewise, there is potential for AI models to facilitate collusion or other forms of market manipulation.
Other risks highlighted by the report include ‘data poisoning’ via the malicious manipulation of model training data, as well as the use of deepfakes to exacerbate geopolitical tensions and increase economic uncertainty.
AI behaving badly?
“Just as there are bad actors in markets, it’s quite possible that AI will behave badly,” says James Kelly, a former FTSE 100 treasurer and co-founder of Your Treasury.
He explains that AI has, in reality, long been a feature of financial markets, with algorithmic trading commonplace. “High-frequency trading is almost exclusively automated, and we have seen that this can increase volatility. Similarly, leading firms have used AI to monitor market sentiment and momentum.” But as Kelly points out, these firms typically employ highly skilled technologists and data scientists to monitor their models.
“What we are now seeing is that models such as ChatGPT make this technology available to firms without needing the same level of tech expertise,” he adds. “Studies have shown that these large language models will often try to ‘win’ by employing strategies that we would see as cheating – like trying to beat a chess computer by altering its code, or by deleting pieces from the board.”
To mitigate the risks associated with AI, Kelly emphasises the importance of making it auditable and restricting its role. “AI trading tools which are advising should not be able to communicate to the market, and users should be aware that AI can be manipulative, concealing better possibilities from users if it is more work for the system.”
While there are clearly risks if AI is badly managed, Kelly concludes, there are also great opportunities for companies to better understand and manage their risks. “It’s just important to actively monitor AI tools and treat them with professional scepticism, which is one good reason why the future is AI plus people, not just AI.”