After decades in the doldrums, AI has graduated from Hollywood to the headlines. It has even entered our daily lives and may be poised to threaten our jobs. But before we get too excited about early retirement, it is probably worth looking in more detail at the substance and the hype.
The initial premise of AI was that we can programme computers to process information in the same way that humans do. Sounds reasonable. But this premise ignores two critical realities. Firstly, we have almost no clue how humans function from a cognitive, psychological, or even physical perspective. Even today, with fancy kit like fMRI scanners and decades of research, we are only just beginning to map basic brain processes such as vision (which quickly turns out to be remarkably complicated).
Secondly, wetware (neurons in living brains) is very different from hardware (silicon transistors in chips). Neural nets were originally designed to mimic wetware but turned out to be more useful for flexibly crunching lots of statistics. There is some promise in computing with DNA. Quantum computing essentially performs many more binary computations at once. Impressive as it is, silicon remains good at doing massive numbers of simple calculations, and there is little likelihood that increasing scale will make silicon more like wetware.
Whereas the pioneers apparently thought that artificial intelligence might somehow imitate human intelligence, our understanding of AI has since changed: it is no longer about mimicking human intelligence but rather about creating a new kind of intelligence.
Last century AI was about imbuing a machine with broad life skills, which we now call broad AI. Current projects focus on specific skill sets, such as recognising cats in images or winning a game of Go, which is called narrow AI. Each individual narrow AI is excellent at a small task, and when specific narrow AIs are combined the result could seem to approach human intelligence. Combine vision with language processing and some empathy algorithms, and AI is ready to pass the Turing test.
Broad artificial intelligence
Having established that AI is artificial, namely that AI is a different kind of intelligence from human intelligence, the next issue is to determine whether AI is likely to achieve broad intelligence. Current AI, remarkable as it is, is very focussed on narrow data sets and specific outcomes.
There are lots of case studies describing how some form of AI discovered new patterns in big data that help companies to improve and sell more products. Careful reading shows that this kind of pattern discovery is guided by and mediated through data scientists and data analysts. In other words, AI can be very good at answering (some) questions, but we still need humans to figure out what questions to ask.
Also, correlation does not imply causation. Some data patterns are mere coincidences, or they fail to reveal the actionable causal pattern needed to make business decisions. Again, human judgement is required to make sense of the answers generated by AI.
In other words, to be useful in the wonderful wide-open world, AI needs human hand-holding. This may sound reassuringly like needing a database administrator (DBA) to manage your Structured Query Language (SQL) database, but even if AI is not self-sufficient it represents a major shift in the kind of hand-holding required. AI can produce surprising results, and humans must be intellectually prepared and emotionally ready to deal with the surprises.
From the foregoing, it is clear that AI is different from human intelligence and that it can be superior in narrow domains. Looking at the dictionary definition of intelligence – “the ability to acquire and apply knowledge and skills” – it seems that if we simply add a narrowing rider such as “in a specific domain”, the term AI may not be misleading. The tech industry, presumably mindful of the doldrums of AI last century, prefers less dramatic terms such as “machine learning”, “neural nets”, and “predictive algorithms”, even though AI seems a reasonable description according to the dictionary.
The purpose of this article is not definitional, but rather to describe how this cluster of new technologies around AI might impact treasurers in their day-to-day lives. If machine learning and predictive algorithms appear intelligent, it is because they feed on huge quantities of data to learn. In practical terms, the AI that is likely to impact treasurers in the foreseeable future is a nexus of massive quantities of data and sophisticated statistical techniques ranging from Bayesian maths to neural nets.
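At the simple end of that statistical range sits Bayes' rule. A minimal sketch, with illustrative numbers rather than real data, shows how a prior belief is updated by evidence, for example estimating how likely a flagged payment is actually fraudulent:

```python
# Bayes' rule sketch with assumed, illustrative probabilities:
# P(fraud | alert) = P(alert | fraud) * P(fraud) / P(alert)
p_fraud = 0.001          # prior: 0.1% of payments are fraudulent (assumed)
p_alert_fraud = 0.95     # alert fires on 95% of fraudulent payments (assumed)
p_alert_clean = 0.02     # alert also fires on 2% of clean payments (assumed)

# Total probability of seeing an alert at all
p_alert = p_alert_fraud * p_fraud + p_alert_clean * (1 - p_fraud)

# Posterior probability of fraud, given that an alert fired
posterior = p_alert_fraud * p_fraud / p_alert
print(f"P(fraud | alert) = {posterior:.1%}")  # roughly 4.5%
```

The counter-intuitive result, that most alerts are false positives because genuine fraud is rare, is exactly the kind of statistical subtlety that the "human hand-holding" discussed above has to catch.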
AI feeds off data, and happily for AI we are generating exponentially increasing amounts of it. These data can be aggregated and meaningfully trawled by big data technologies. Better still, we are increasingly able to dump any old unstructured data into the AI nexus and get curious, maybe even meaningful, insights.
Because AI feeds off data, scale matters. This has implications for corporations in an increasingly competitive world. First, corporations must start mining their data as soon as possible. Second, some corporations simply have more data than others – companies like Google, Alibaba, and Tencent have data volumes that few can match, and this gives them a competitive advantage in the intelligence race.
To choose a domain adjacent to treasury, Alibaba uses its huge trove of goods and settlement data to offer competitive financing to merchants on its platform. Because merchants' trade flows across its platform, Alibaba sees sales, collections, and customer satisfaction with a breadth and depth that allow it to know its customers with an intimacy that banks can only dream of. Better customer intimacy allows Alibaba to fund its merchants more cheaply and more profitably than banks. The outlook for banks, who even if willing will not be able to replicate Alibaba’s platform and scale, is bleak.
The same applies, or will apply, in other domains. The largest players with the most data will have the best-trained neural nets – they will be the most (machine) intelligent players – and they will have a substantial competitive advantage. And whilst some collaboration around data may provide a way forward, a more likely scenario is that smaller businesses are forced onto market leaders’ platforms to get access to clients and pertinent machine intelligence.
Robotic process automation (RPA) is not AI though some implementations are AI augmented. RPA refers to software “robots” that basically screen scrape data from one system, possibly process the data in some way, and then input the data to another system. It brings hope of linking disparate systems – such as Excel and ERP – without multiyear IT projects to figure out and implement APIs.
RPA can also use AI to process the data it has scraped from one system before entering it into another. In some cases this can mean significant decision making, such as approve-or-reject decisions. In many instances, RPA can also serve as the interface from AI platforms to legacy systems.
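The scrape-process-input loop can be sketched in a few lines. The file names, fields, and approve-or-refer rule below are all hypothetical, and CSV files stand in for the two systems; real RPA tools drive the systems' own screens or APIs instead:

```python
# Minimal RPA-style sketch (hypothetical files and fields): read an export
# from one system, apply a simple decision rule in the middle, and write
# an import file for another system. A trained model could replace the
# rule to make this "AI augmented".
import csv

def process_invoices(in_path, out_path, limit=10_000.0):
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=["invoice_id", "amount", "decision"])
        writer.writeheader()
        for row in reader:
            amount = float(row["amount"])
            # Rule-based stand-in for the decision step
            decision = "approve" if amount <= limit else "refer"
            writer.writerow({"invoice_id": row["invoice_id"],
                             "amount": row["amount"],
                             "decision": decision})
```

Even this crude version captures the appeal: no multiyear API project, just software gluing together outputs and inputs that already exist.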
The plausible way forward is human-machine collaboration. No doubt we will see meta AIs managing some early generation AIs, but for the foreseeable future AIs will not be formulating meta goals of their own. Treasuries that have already implemented RPA, or that have modern API-based systems that obviate it, know that the days of cutting and pasting and reconciliation drudgery are numbered.
Other areas that are ripe for AI disruption may include:
Not all of the examples are, strictly speaking, AI, but as we have seen there is a blurry line between advanced statistics and AI in practice, so at the very least they show the direction in which the profession is headed.
Treasurers do not necessarily need to become data scientists, but they do need to be comfortable enough to feed and interpret AI systems, just as treasurers today do not need to be ace programmers but do need to be able to use and interpret Excel spreadsheets and key financial formulae.
This learning journey is complicated by the fluidity of the field, which brings the risk of learning the wrong tool and the difficulty of finding guides along the way. But just as VisiCalc and Lotus expertise was not wasted when Excel came along, experience even with AI platforms that eventually lose out commercially will still help treasurers to understand what the technology can and cannot do.
Specialisation has been a good career move. Treasury itself would be seen by many as a specialisation. Some specialisations, such as compliance and data science, remain attractive – although one has to wonder for how long.
Given that the mechanics of treasury – copy/paste, reconciliation, credit analysis, FX hedging, and so on – will increasingly slip into the domain of AI, treasurers will increasingly be required to combine human skills with AI-interfacing skills.
Human skills will remain hard for AI to master – though AI may help train and support them – and it is unlikely that AI will build local-entity buy-in for new banking arrangements (assuming that local entities still exist).
And as stated above, AI hand-holding – the ability to feed and interpret AI – will become a key skill for treasurers. In this context, treasurers will need to be competent generalists rather than specialists. Even specialisations that are currently in demand, like compliance, will eventually become heavily data focussed – no human will be able to remember all the rules to which corporations must adhere, so knowing how to get the right answers from extensive data will become more important than knowing all the rules.
The views and opinions expressed in this article are those of the authors.