Depending on what report you read, we are either about to be usurped by robot overlords or set to enjoy far greater productivity and job satisfaction with technology’s help. Although not yet mainstream in treasury or corporate banking, what should we make of the threat/opportunity that is artificial intelligence?
Whenever monumental change is predicted, the truth is usually surrounded by equal parts hype and fear; the reality tends to lie somewhere in between. The use of artificial intelligence (AI) in business has risen up the agenda in recent times and is now a fixture with content providers in many sectors. But for those at the sharp end of treasury, is it a threat or an opportunity?
AI is sometimes seen as an all-knowing, all-conquering conscious mind; something to be feared. But worrying about AI taking over our lives is the equivalent of worrying about pollution and over-population on Mars, according to AI expert Andrew Ng, Chief Scientist at Chinese web search giant, Baidu, and an Associate Professor at Stanford University. We are, Ng writes, a very long way from facing up to the killer robots. Thoughts of this kind, he has concluded, are “an unnecessary distraction” to progress in this field of endeavour.
AI today
AI at the moment is not much more than a complex mathematical equation, says Matt Armstrong-Barnes, Chief Technologist, Hewlett Packard Enterprise. Asked if AI will change business processes, he responds with an emphatic ‘no’. “AI today is a tool that ingests a lot of information and uses machine learning to provide a critical input for humans to make a decision.” The emphasis here, he says, is on ‘human’ decision-making.
Of course, it can be hard to work out the reality of AI through all the noise. But whoever or whatever is taking the decisions, one thing is certain and that is “AI is revolutionising every industry and is transforming our lives”, says Alex Housley, Founder and CEO of Seldon, an open-source machine-learning framework vendor.
Housley concurs with Andrew Ng’s belief that AI is “the new electricity”. Indeed, he says, with the ready availability of some “fantastic tools”, businesses should now be taking advantage of what AI can offer. But AI has been around since the 1950s (and arguably even before then), so why is it only now coming to prominence?
For Kristian Luoma, Head of OP Lab at financial services firm OP Financial Group, the stars have now aligned. Key ‘stars’ include the availability of far more data, the pace of technological development at the heart of computing (particularly CPUs and GPUs), and consumer behaviour trending massively towards digital channels. The importance of the latter should not be underestimated.
Luoma agrees with Armstrong-Barnes that, for business, back office processes won’t be subject to wholesale change. However, he does see an opportunity for certain processes or interfaces to be replaced by the introduction of what he calls “maths-based recommendations”. Currently, this notion is serving to reduce the amount of time consumers must spend interacting with the services being created for them, increasing their satisfaction – and expectation – levels. As more is expected from the digital experience, it will act as a significant driver for the uptake of AI and its eventual cross-over into business applications.
Indeed, for Husayn Kassai, CEO of identity verification software company Onfido, the need for businesses to address customer satisfaction is one of the reasons AI is now coming to the fore. Banks’ capacity to on-board customers quickly and cost-effectively, for example, in an environment where fraud is such a significant regulatory issue, will increasingly rely on AI-based tools to keep those customers engaged, he believes.
Light and dark
Despite Ng’s reassurances, with the acceleration in the past couple of years of ‘deep’ machine-learning capabilities, and the increasing pools of data from which AI can derive answers, people do worry about it. Less about evil super-intelligence perhaps but certainly about losing their jobs and their place in the labour market to machine-learning technology.
The increasing presence of AI as an agent of change has seen opinions polarise. PwC research has shown that AI is a commercial opportunity that could boost global economic output by 14%. This, it noted, equates to GDP gains of US$15.7trn, making it “the biggest commercial opportunity in today’s fast changing economy”. Furthermore, US-based not-for-profit think tank the Institute for the Future, and its panel of 20 technology, business and academic experts from around the world, suggests that 85% of the jobs that will exist in 2030 haven’t been invented yet, so there is much to look forward to.
But then McKinsey reported the darker side of AI. Using conclusions drawn from detailed analysis of 2,000-plus work activities for more than 800 occupations, it reports that about 30% of tasks in 60% of occupations could be automated using current technologies. “As automation advances in capability, jobs involving higher skills will probably be automated at increasingly high rates,” it warned. Realisations of this nature have led Bank of England Chief Economist Andy Haldane to claim that in the US and UK alone, about 80m and 15m jobs respectively could be taken over by robots.
Hope and fear, it seems, are peddled freely. In the treasury space, because AI is yet to be adopted with any real vigour, the facts are few and far between. Here, readers of a certain age may be reminded of the battle between VHS and Betamax video formats; consumers and producers followed one or the other trend, with the outcome initially difficult to predict. Eventually VHS came out on top and was everywhere, at least until that too was superseded.
The point is that following the ‘right’ technology can be a gamble at the early stages – precisely where we are with AI in the context of treasury and corporate banking. The talk is loud but the action limited as the ideas and use cases jostle for position. Making a point of understanding the trends has great value because those heeding the advanced warnings will have a clear advantage over those who do not.
Consumer driver
Just as it has with the advance of mobile technology, the consumer space is likely to lead the march of AI into the realm of business adoption. The advent of high-performance computing and massive leaps forward in GPU and CPU technology have enabled the vast data processing and presentational needs of AI to be met in non-specialised environments.
The consumer world now abounds, for example, with online chatbots using powerful and quick-learning AI technology to steer sales conversations to a positive conclusion. As Armstrong-Barnes points out, in the insurance sector, chatbots are more successful at selling than their human counterparts. With the so-called millennials more aligned with the culture of messaging than previous generations, he argues that this format will only grow in strength.
That’s not to say the corporate space is without its own adventures in AI. Some banks are offering a practical nod to its adoption with applications handling banking back office operations, trading, risk management, fraud detection and KYC compliance. Others have deployed AI in the realm of customer services, having recognised the need to reduce the time customers spend interacting with online portals.
Bank of America Merrill Lynch and Wells Fargo, for example, are using ‘virtual assistants’ to help deliver a quicker and more relevant service to their retail customers, so how long will it be before the more complex needs of corporate clients are tackled in this way?
It is true that a treasurer presiding over multiple accounts, multiple locations and currencies, and even multiple access rights, is a much bigger challenge. But just as mobile has ridden high on a consumer wave that is now breaking over the commercial space, the use of AI tools in retail seems to be a warm-up exercise for the committed bankers of corporates – and not just as a risk management tool.
J.P. Morgan is said to spend around 40% of its US$10.8bn annual technology budget on new technology including AI, RPA and blockchain. It has already rolled out mobile apps for its trading community, and now it is deploying Amazon’s voice-activated assistant, Alexa, to give its investment banking clients an easier AI-driven way to use its research database.
The plan is eventually to enable AI to help its treasury clients navigate the bank’s online portal and be able to ask an online assistant for balance information. By continuously learning from user questions and online behaviour, the so-far nameless system, which the bank reports as being in pilot mode, is expected one day to be able to offer clients alerts and actionable options based on predictions.
AI at the cusp
The stage AI is at now makes it a useful tool to manage vast quantities of data. It has the capacity to tackle highly complex business processes and to create new opportunities, offering the kind of rapid insight – including behavioural and pattern analysis – that otherwise would not be possible. This, says Housley, could be a vital boost for treasurers who lack immediate insight into how efficiently subsidiaries are using cash, for example, with this data subsequently fuelling improvements in their working capital, forecasting and funding models.
Through the identification of patterns and characteristics within increasingly vast data pools, AI can also begin to move beyond simple insights and start to assist the improvement of processes. For treasury, a prescriptive model is envisioned where the dynamic use of data – on cash flows, balances and so on – can be turned not just into warnings on limits but also suggestions to optimally route payments, for example.
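To make the idea concrete, the following minimal sketch (in Python) shows the shape of such a prescriptive rule. The account names, policy floors and naive moving-average forecast are hypothetical illustrations, not a description of any bank’s or vendor’s actual method.

# A minimal, hypothetical sketch of prescriptive treasury alerting:
# forecast balances, warn on predicted limit breaches and suggest a
# funding source. All figures and names are invented for illustration.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Account:
    name: str
    balance: float            # current balance
    recent_net_flows: list    # daily net cash flows, most recent last
    floor: float              # minimum balance allowed by treasury policy

def forecast_balance(acc: Account, days: int = 5) -> float:
    """Project the balance forward using a simple average of recent flows."""
    avg_flow = mean(acc.recent_net_flows) if acc.recent_net_flows else 0.0
    return acc.balance + avg_flow * days

def recommend(accounts: list) -> list:
    """Warn on predicted limit breaches and suggest where to fund from."""
    actions = []
    surplus = [a for a in accounts if forecast_balance(a) > a.floor * 2]
    for acc in accounts:
        projected = forecast_balance(acc)
        if projected < acc.floor:
            shortfall = acc.floor - projected
            source = max(surplus, key=forecast_balance, default=None)
            actions.append(
                f"WARNING: {acc.name} projected at {projected:,.0f}, "
                f"below floor {acc.floor:,.0f}. "
                + (f"Suggest transferring {shortfall:,.0f} from {source.name}."
                   if source else "No surplus account available.")
            )
    return actions

accounts = [
    Account("UK subsidiary", 250_000, [-40_000, -35_000, -45_000], 100_000),
    Account("US subsidiary", 1_200_000, [10_000, 15_000, 5_000], 200_000),
]
for action in recommend(accounts):
    print(action)

A real system would replace the moving average with a learned forecasting model, but the structure – data in, warning plus suggested action out, human decision last – is the point.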
Virtual assistants could even be used to provide help with supplier negotiations or make bespoke recommendations for certain FX exposures. The transaction banking community’s oft-repeated claim that its advisory role is taking a bold step forward would certainly be supported by such tools.
A question of ethics
As with most forms of data use, AI is subject to ethical and social enquiry. For Housley, the need to maintain ‘explainability’ of decisions is a vital consideration. As organisations use AI to move away from hand-crafted, rules-based processing and simple ways of drawing conclusions, and start to hand over decisions to “highly complex and uninterpretable black boxes”, he feels there is a risk of not being able to offer clear reasoning for the decisions being returned (why was this loan not granted?), a state that is not acceptable under GDPR, for example.
Armstrong-Barnes argues that the discipline of “algorithmic accountability”, dissecting how an answer was arrived at, is something that must be fully developed as decisions become more reliant upon AI. This is essential to address any notion that business is entering a dark age where “the computer says no” and that’s the end of it.
AI decisions can be based on extremely complex data manipulations and their formation, he notes, can become almost impenetrable for normal human intellect. “If we get to the point where we can’t understand and explain the complexity of the machine-learning algorithms, we have to build something that will understand them.”
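As a purely illustrative sketch of what algorithmic accountability can look like at its simplest, the short Python example below scores a hypothetical loan application with a transparent linear model and reports each feature’s contribution to the outcome. The features, weights and threshold are invented, and genuinely ‘black box’ models would need far richer explanation techniques than this.

# A hypothetical, transparent scoring model: the decision can be
# decomposed into per-feature contributions, answering "why was this
# loan not granted?". Weights, features and threshold are invented.
import math

WEIGHTS = {"income_ratio": 2.0, "years_trading": 0.3, "missed_payments": -1.5}
BIAS = -1.0
THRESHOLD = 0.5

def score(features: dict) -> tuple:
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions

applicant = {"income_ratio": 0.4, "years_trading": 2, "missed_payments": 1}
prob, parts = score(applicant)
decision = "granted" if prob >= THRESHOLD else "declined"

print(f"Loan {decision} (estimated probability of repayment {prob:.2f})")
for name, contribution in sorted(parts.items(), key=lambda kv: kv[1]):
    print(f"  {name}: {contribution:+.2f}")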
Both Kassai and Housley raise the idea that bias exists in most data sets. Even if sensitive fields are removed, machine-learning algorithms can find patterns elsewhere in the data, or make assumptions based on insufficient data, inadvertently reintroducing those biases. Machine learning programmers need to be mindful of such cognitive bias, just as those interpreting output need to avoid applying their own partial readings, if the output is to be of any use.
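One simple, hedged illustration of that proxy problem: even with a sensitive field removed from the training data, a remaining feature can stand in for it. The fields and figures below are invented purely to show the check.

# A minimal bias check: does a remaining feature act as a proxy for a
# removed sensitive field? Data here is invented; requires Python 3.10+
# for statistics.correlation.
from statistics import correlation

# 1 = belongs to a protected group (field removed before training);
# postcode_band = a feature that stays in the training data.
protected = [1, 1, 1, 0, 0, 0, 1, 0]
postcode_band = [9, 8, 9, 2, 3, 1, 7, 2]

r = correlation(protected, postcode_band)
print(f"Correlation between removed field and postcode band: {r:.2f}")
if abs(r) > 0.7:
    print("Strong proxy detected: a model can still learn the bias.")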
As a mark of the seriousness with which this is taken, Microsoft has formed an academic team, FATE – Fairness, Accountability, Transparency and Ethics in AI – to try to tackle this issue. “As we move toward relying on intelligent agents in our everyday lives, how do we ensure that individuals and communities can trust these systems?”, it asks.
For OP Financial Group’s Luoma, the advantages of deriving more from data, whether that’s using data in medical research to detect problems ahead of time or in banking to reduce fraud, can be compelling for all stakeholders. However, he says, even though protection regimes such as Europe’s GDPR offer “sound principles on how privacy is taken care of”, it will be quite a “balancing act” between function and fairness going forward.
Human after all
“AI needs a human being,” states Armstrong-Barnes. As a defence against an unlawful decision, for example, “the computer told me to” is unlikely to be acceptable, he notes. “AI is a tool and it is one that needs to be used effectively; you need to choose and to plan how you use it and it needs to be part of a wider strategy.”
As with any data source, whatever is put in dictates what you will get out. As Armstrong-Barnes has said, AI is just a mathematical algorithm: “we need to make sure that human beings are the decision-making entity”.
For Kassai too, “the importance of human judgement should never be forgotten”. As such, he believes that it must be possible for data-related issues to be dealt with as exceptions by small teams of highly skilled individuals, not large teams of unskilled personnel. The implication for professional treasurers is clear: expertise will always be required.
Job or not?
Ultimately, will AI lead to job losses? Most likely it will: “fewer resources with higher skills, but becoming a lot more effective”, explains Luoma. With treasurers typically operating in lean teams and almost always charged with doing more with less, AI presents an opportunity to remove many or all mundane, repetitive tasks. This can enable treasurers to focus on adding value, tackling more complex and unexpected situations, where experience is essential. As McKinsey has said, “the majority of the benefits may come not from reducing labour costs but from raising productivity”.
AI’s raison d’être is, arguably, to learn and take over certain tasks from humans. For some professions the human touch remains essential to the task. Just as few today would be happy to know that the commercial plane they are travelling in at 38,000 ft has no pilot (the basic technology to fly planes without pilots already exists), so removing the human element when it comes to an organisation’s financial existence would unsettle all but the most committed.
‘Turn it off and turn it on again’ is an often-experienced reality when working with technology. This is a somewhat flippant argument perhaps, but the fact is, human intervention will always be necessary. Direct contact with the situation (an airline pilot correcting a computer error, for example) is often required for successful decisions to be made.
This is perhaps why the concept of Expert Automation and Augmentation Software (EAAS) may offer the right balance that Luoma talks about. EAAS uses machine learning to seek out highly complex patterns in data and to automate tasks, leaving the human to step in where professional skills can be applied to context.
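A minimal sketch of that human-in-the-loop pattern, with hypothetical confidence scores and an arbitrary threshold, might look like this:

# Automate the decisions a model is confident about; escalate the rest
# to a skilled reviewer. Scores, threshold and item names are invented.
CONFIDENCE_THRESHOLD = 0.90

def triage(items):
    automated, for_review = [], []
    for item_id, confidence in items:
        (automated if confidence >= CONFIDENCE_THRESHOLD else for_review).append(item_id)
    return automated, for_review

invoices = [("INV-001", 0.98), ("INV-002", 0.62), ("INV-003", 0.95)]
done, escalated = triage(invoices)
print(f"Automated: {done}; escalated to treasury analyst: {escalated}")

The choice of threshold is itself a human, policy-level decision, which is precisely the balance such tools are meant to preserve.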
Whilst tomorrow’s treasurer may have a whole new approach to that intervention, preparation for what is to come should start now. The future of the profession lies not in worrying about being replaced, but in accepting the challenge to keep up to date with the skills, knowledge and the technologies that are on the horizon today. The rules of progress then are simple: first start, then keep moving, but make sure skilled humans keep a watching brief.