We are living in a world defined by data. It sits behind nearly everything we do and is what we use to quantify ourselves and everything else around us. Some see this as a great opportunity: data arguably gives us the chance to drive improvements in every aspect of life and business. Others, however, see it as a threat and believe there is a real risk of businesses drowning in bits and bytes. Either way, the question of how to use data efficiently is moving steadily up the business agenda for companies around the world. Here we look at how corporate treasury may be able to leverage this data movement.
Multinational technology giant Google processes around 3.5bn search queries a day on average, which equates to roughly 1.2trn searches a year. That is a lot of data being created by Google’s search engine alone. Imagine now what we as individuals do online on a daily basis: visiting tens if not hundreds of different websites, sending emails, buying goods, downloading items or posting photos and videos – all of which simply expands our data footprint.
The digital age has, without doubt, turned humans into supreme data creators. And just as our DNA confirms our genetic makeup, data arguably defines who we are, documenting our interests, spending habits and social interactions.
This is powerful information, and we have already seen some organisations begin to utilise this data, commonly referred to as ‘big data’, in various ways. Perhaps the most noticeable is how online advertising is now tailored to reflect our browsing history – if you spend a lot of time looking at golf websites, for instance, it is likely that you will see lots of adverts for golf equipment. This doesn’t happen by chance. It is a service offered by companies such as Google, whose AdWords application uses a complex algorithm to analyse many different data sets and ensure that adverts find a suitable online audience.
Companies have naturally gravitated towards this use case for data; after all, it has the potential to increase sales and most of the technical aspects are outsourced. But just as humans create significant amounts of data, so too do businesses. Yet, generally speaking, businesses today utilise only a tiny fraction of the data they create. This is beginning to change, however, and companies of all shapes and sizes are starting to take more interest in their own data and to recognise how it can be the key to unlocking value within the company.
Reasons to embrace data
It is remarkable that in 2016 many companies still do not have a big data strategy. Indeed, according to a recent survey conducted by DNV GL, only 23% of the 1,200 professionals surveyed from across Europe, Asia and the Americas have such a strategy in place. Perhaps more startling is the fact that only 52% of the professionals surveyed agreed that big data presents a clear business opportunity.
Despite these results, there are clear and apparent reasons to embrace big data. As a blog post by American multinational computer company Dell outlines, big data can enable businesses to:
React faster to external events.
Better predict outcomes.
Foster better information sharing.
Have faster access to the right/relevant information.
Reduce infrastructure management costs.
Kelvin To, Founder and President at Data Boiler, explains: “Big data is an enabler of better and faster decision making. Through the use of big data, corporates, and indeed their treasury departments, can see the big picture of what challenges lie ahead of them. In having this ability, they will be better placed to proactively provide the best course of action to turn these challenges into opportunities.” It is a compelling argument.
Big data vs small data
Using data to improve sales, as the Google example above shows, is not a new phenomenon. Throughout history, companies have attempted to extract data from customers through surveys and other means. This is defined as ‘small data’ because it has been collected specifically to be analysed and is pre-conditioned, or structured, for that analysis. A survey asking customers whether they like a particular product is a perfect example.
Big data, on the other hand, is formed of both structured and unstructured data (customers’ spending habits, their age, how they purchase, how often they do so, and so on). Big data is often held in disparate systems, is extremely large in size and is commonly not created solely for analysis. As a result, it has to be cleaned and processed before it can be analysed.
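To make that cleaning step concrete, here is a minimal Python sketch (using pandas; the sources, column names and figures are all invented for illustration) showing the kind of work needed to turn two inconsistently formatted feeds into one flat, analysable table:

```python
import pandas as pd

# Two hypothetical feeds with inconsistent formats: an ERP export and a
# bank statement file. All column names and values are invented.
erp = pd.DataFrame({
    "doc_date": ["2016-03-01", "2016-03-02"],
    "amount_eur": ["1,200.50", "980.00"],        # amounts stored as strings
    "counterparty": ["ACME LTD", "Globex "],
})
bank = pd.DataFrame({
    "value_date": ["01/03/2016", "02/03/2016"],  # day-first dates
    "amt": [1200.5, -300.0],
    "name": ["acme ltd", "Initech"],
})

def clean(df, date_col, amount_col, name_col):
    """Coerce one source into a shared, analysis-ready schema."""
    out = pd.DataFrame()
    out["date"] = pd.to_datetime(df[date_col], dayfirst=True)
    out["amount"] = pd.to_numeric(
        df[amount_col].astype(str).str.replace(",", "", regex=False),
        errors="coerce")
    out["counterparty"] = df[name_col].str.strip().str.upper()
    return out

combined = pd.concat(
    [clean(erp, "doc_date", "amount_eur", "counterparty"),
     clean(bank, "value_date", "amt", "name")],
    ignore_index=True)
print(combined)  # one consistent table, ready for analysis
```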
Other explanations of this distinction are available. Do not make the mistake of thinking that a lot of data is necessarily ‘big data’.
Even within this section, some of the examples offered may seem to contradict current definitions of big data. Putting definitional nuances aside, however, it is important to remember that when a company uses big data, it is essentially using data that was not specifically created to be analysed, or that was previously not possible to analyse, in order to deliver insights and help inform decision making.
|  | Big data | Small data |
| --- | --- | --- |
| Data condition | Always unstructured; not ready for analysis; many relational database tables that need to be merged. | Ready for analysis; flat file; no need for merging tables. |
| Location | Cloud, offshore, SQL Server. | Database, local PC. |
| Data size | Over 50K variables, over 50K individuals, random samples, unstructured. | A file that fits in a spreadsheet and can be viewed on a few sheets of paper. |
| Data purpose | No intended purpose for the data collection. | An intended purpose for the data collection. |
Data in action
It can be easy to get caught up in the buzz when discussing big data. The excitement the term generates may leave some feeling that it is the answer to all their management challenges. This is not the case. As Keng-Mun Lee, Citi’s Asia Pacific Head of Channels and Enterprise Services and Citi Innovation Lab in Singapore states: “Beyond the buzz, it is important to remember that it is not about the word (big data), but how you put it to good use.”
Indeed, if a company sets out to complete a general ‘big data project’ it is likely it will fail. If a company sets out on something specific, such as a cash flow forecasting project that utilises big data, then this is more likely to succeed. Big data is not a panacea. Harvesting big data alone will not enable treasury to improve its KPIs or elevate its status within the business. But it will help. And there are already plenty of examples available within the treasury space alone where big data can make a significant impact on operations.
Defining data analytics
Data analytics involves examining raw data with the purpose of drawing conclusions about that information. Data analytics is used in many industries to allow companies and organisations to make better business decisions. It is different from data mining, in which data miners sort through huge data sets using sophisticated software to identify undiscovered patterns and establish hidden relationships.
From a treasury perspective, data analytics involves taking the raw data that a company generates and using advanced models and data collation tools to pull out useful information for treasury. It encompasses business intelligence, business performance management and data integrators. It also employs a variety of other technologies that involve taking large amounts of data, pulling out the data relevant to a particular process, and analysing that data to improve process efficiency or the effectiveness of decision-making.
Data analytics can include modelling, reporting, analysis and scorecarding. Companies can use data analytics tools to analyse process efficiency, identify and respond to business or market trends, and improve risk identification and mitigation.
Cash flow forecasting
Cash flow forecasting is one area where a prudent use of big data analysis may significantly benefit corporate treasurers. An accurate forecast requires bringing together data from a variety of internal and external sources, and these data sources often have different formats and varying levels of consistency.
Big data analysis has the potential to bring this disparate and inconsistent data together into a single format. Treasury will be able to integrate historical accounts payable (AP) and accounts receivable (AR) data, information from banking partners, exchange rates and other market factors, customer behaviour information, and so on, to build a complex forecast of cash flows. This goes beyond static historical data to incorporate events affecting the market, rates and vendor and customer behaviour to create a sophisticated model.
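As a rough illustration of the principle, the sketch below (Python; all figures invented) converts a short history of AR receipts into the reporting currency using the corresponding FX rates, then projects the next month’s inflow with a simple exponentially weighted average. A production forecast would draw on far more sources and a far richer model:

```python
# Hypothetical monthly AR receipts in local currency, and the month-end
# FX rates used to convert them to USD. All figures are invented.
receipts_local = [410_000, 395_000, 450_000, 430_000, 470_000, 455_000]
usd_rate = [1.08, 1.10, 1.07, 1.09, 1.11, 1.10]

# Convert to the reporting currency so the history is comparable.
receipts_usd = [r * fx for r, fx in zip(receipts_local, usd_rate)]

# Exponentially weighted average: recent months count for more.
weights = [0.5 ** i for i in range(len(receipts_usd) - 1, -1, -1)]
forecast = sum(w * r for w, r in zip(weights, receipts_usd)) / sum(weights)

print(f"Projected next-month AR inflow: ${forecast:,.0f}")
```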
Fraud prevention
Another use case for big data analysis is fraud prevention. It has always been difficult to spot a fraudulent payment before it is too late. This is beginning to change, however, thanks to a number of innovative solutions developed by banks and other parties which leverage a corporate’s payment information to spot a fraudulent payment before it is processed.
Deutsche Bank is one institution offering such a solution. It works by combining a corporate’s historical payment data with a number of predefined rules to flag any irregularities in a payment being made. If a fraudulent payment slips through the corporate’s own due diligence, it will therefore be flagged as soon as it hits the bank’s system should it show irregularities such as a changed account number or beneficiary name, or a value significantly higher than historical equivalents.
At this point, the payment will be sent back to the corporate, which can then conduct its own analysis and decide whether or not it is fraudulent. Solutions such as this have the potential to significantly mitigate the risk of fraudulent payments.
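A rule set of this kind can be approximated in a few lines. The Python sketch below (fields, values and thresholds all invented; real solutions are far more sophisticated) checks a new payment against a counterparty’s history for a changed account number, a changed beneficiary name or an unusually large amount:

```python
from statistics import mean, stdev

# Hypothetical payment history for one supplier; all fields are invented.
history = [
    {"beneficiary": "ACME LTD", "iban": "DE89370400440532013000", "amount": 10_200},
    {"beneficiary": "ACME LTD", "iban": "DE89370400440532013000", "amount": 9_800},
    {"beneficiary": "ACME LTD", "iban": "DE89370400440532013000", "amount": 10_500},
]

def irregularities(payment, history):
    """Return the list of rules the new payment breaks, if any."""
    flags = []
    amounts = [p["amount"] for p in history]
    if payment["iban"] not in {p["iban"] for p in history}:
        flags.append("account number differs from history")
    if payment["beneficiary"] not in {p["beneficiary"] for p in history}:
        flags.append("beneficiary name differs from history")
    # Flag amounts more than three standard deviations above the mean.
    if payment["amount"] > mean(amounts) + 3 * stdev(amounts):
        flags.append("amount far above the historical norm")
    return flags

suspect = {"beneficiary": "ACME LTD", "iban": "GB33BUKB20201555555555",
           "amount": 48_000}
print(irregularities(suspect, history))  # flags the IBAN and the amount
```

In practice, a payment that raises any flag would be held and returned to the corporate for review rather than processed automatically.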
AP and AR efficiency
A corporate with a large number of buyer and supplier relationships is likely to have some inefficiency in how it works with these counterparties – using inefficient payment methods, for instance. Previously, gaining this information would have been a highly manual and cumbersome exercise, involving lots of phone calls and research. Through big data analysis, it can be garnered almost instantly.
Citi has developed a solution which enables it to take a client’s AP data and overlay this with data from other clients who use the same suppliers, to see if there are any differences in how they interact with their counterparties. “Company A, for instance, may be paying supplier A with cheques,” explains Lee, “whilst company B may be paying the same supplier through an electronic channel. With this information, the treasurer of company A can then target this supplier and move them onto a digital payment channel. These are insights that wouldn’t have been available, at least so readily, without the use of big data.”
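Conceptually, the comparison Lee describes could look something like the following sketch (Python; companies, suppliers and methods all invented), which pools payment records, indexes the methods each supplier already accepts, and surfaces suppliers still being paid by cheque despite accepting an electronic channel:

```python
from collections import defaultdict

# Hypothetical AP records pooled across clients: (payer, supplier, method).
payments = [
    ("company_a", "Supplier X", "cheque"),
    ("company_a", "Supplier X", "cheque"),
    ("company_b", "Supplier X", "ACH"),
    ("company_a", "Supplier Y", "ACH"),
]

# Index every payment method each supplier is known to accept.
methods_by_supplier = defaultdict(set)
for payer, supplier, method in payments:
    methods_by_supplier[supplier].add(method)

# Suppliers company_a pays by cheque that someone already pays electronically.
targets = {supplier for payer, supplier, method in payments
           if payer == "company_a" and method == "cheque"
           and "ACH" in methods_by_supplier[supplier]}
print(targets)  # {'Supplier X'}: candidates to move onto a digital channel
```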
Market decision making
Furthermore, organisations can use data analytics for improving decision support for borrowing, hedging and investment strategies. Data collators can pull information from external sources – such as historic fund performance or interest rates – plus historic and current internal business information, which can then be put through advanced analytics solutions to provide a rich picture of existing and future conditions that can be used to make better-informed decisions in these areas.
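As a small, hypothetical illustration of this kind of decision support (Python; all rates invented), a treasurer might place today’s borrowing rate within its recent historical distribution before deciding whether to act:

```python
from statistics import quantiles

# Hypothetical history of a benchmark borrowing rate (%), plus today's quote.
history = [1.9, 2.1, 2.4, 2.2, 1.8, 1.7, 2.0, 2.3, 2.5, 1.6]
today = 1.75

# Where does today's rate sit within the historical quartiles?
q1, _, q3 = quantiles(history)
if today < q1:
    print("Rate in the cheapest quartile of recent history: favourable window.")
elif today > q3:
    print("Rate historically expensive: consider waiting.")
else:
    print("Rate mid-range: no strong timing signal.")
```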
Barriers to entry
It is interesting to note that whilst big data is often talked about as a new phenomenon, it has in fact existed for a long time. What has changed, according to Manfred Richels, Head of Business Intelligence for Global Transaction Banking at UniCredit, is that advances in technology have created new ways to store and analyse data, making it far more accessible to businesses and thus a talking point.
Unfortunately, many organisations have failed to keep pace with these developments and remain plagued by a plethora of legacy systems, disparate silos of data and many other issues that prevent them from utilising big data to its fullest. As a result, even beginning to think about how big data can be placed into some order may be a daunting prospect.
Big data management can also be expensive, depending on the scale of the project and the tools used. In 2015, Bain & Company surveyed 325 global companies and found that 85% said they would require substantial investments to update their existing data platform, including consolidating and cleaning data, simplifying access and rights management, and improving access to external data sources.
Aside from the cost, there are other hurdles that may need to be overcome. As an article penned for Forbes by Steve Berez, Paul Callahan and Rasmus Wegener of Bain & Company highlights, companies may also have issues around:
The divide between ownership and stewardship of data. IT may not always know where the value resides in data, while executives on the business side may not understand the intricacies of data storage and management. This disconnect can have expensive consequences if the business (which owns the data) and IT (the data stewards) make decisions without a solid understanding of each other’s perspective.
Getting data into the hands of the right people. Historically, IT departments focused on how and where to store data and how to keep it secure. But companies don’t create value by storing data or managing access rights. Value comes from putting data in the hands of business people, and this is where a lot of companies stumble.
Old processes that no longer fit business needs. Data warehouses used to be the centrepiece of the IT organisation: walled gardens that protected a company’s most valuable data and restricted access to only a few. That model fails in a world where companies need to allow more people to access the data and make discoveries. For these uses, cloud-based infrastructure can work better than on-premises systems because it can be scaled up, provisioned on demand and paid for on a consumption basis.
A big data culture
Whilst these are chiefly practical reasons for companies failing to readily utilise their big data, there may, in fact, be a more fundamental issue in many companies: the absence of a big data culture.
Big data is not just a change in strategy; it arguably requires a complete overhaul of how the business operates – driven from the top down. It is worth reflecting here again on the findings of the aforementioned DNV GL study, which indicated that 48% of business leaders do not see the value of big data. If these high-level executives cannot recognise the value, then big data is unlikely to be utilised in the company – and even if it is, it is unlikely to deliver the desired results.
So what does a data-driven culture look like? Big data solutions provider Teradata offers some indicators in a blog post on the company’s website:
Commitment. A data-driven culture starts with widespread commitment. Data-driven decision making must become the standard modus operandi. The expectation is that big data analytics is part of everyone’s job.
Top-down leadership and bottom-up engagement. The strongest data-driven culture is shaped and energised from both the top down and the bottom up. Senior management clearly and visibly signals the importance of big data to improving business performance through funding decisions and by defining and promoting new metrics to evaluate business performance. Meanwhile, end-users, front-line managers, business analysts and others use big data in action to do their jobs every day. And they have the tools, training and incentives they need to do so.
New roles, new titles. The rise of Chief Data Officers and/or Chief Analytics Officers is evidence that more companies view data as a crucial asset. But such titles do not by themselves create a data-driven culture. Organisational structures must be aligned under senior leadership to unleash the full transformational potential of big data in action across the business.
Organised, accessible and high-quality data. A strong technology foundation entails multiple components, starting with an infrastructure capable of capturing, centralising and storing a wide range of data. Then there are analytical applications that enable people to track KPIs, visualise trends and ask questions of the data.
A change in culture may also change the attributes required of those in top jobs. As a result, financial business leaders, including CFOs and senior treasury executives, may need to retrain to become fluent in big data. At least this is the view of William Fuessler, Global Financial Strategy and Transformation Leader for IBM Global Business Services, who wrote in the Wall Street Journal that: “CFOs need to move quickly to bring their existing staffers into the realities of operating in a big data world… Big data is the foundation for 21st century financial skills, and finance professionals should lead the charge in their organisations to turn large amounts of data into better business insight.”
Asking the right questions
If a company has recognised the benefits of big data and resolved any issues, such as those mentioned above, that may prevent it from using it, what does it then need to do, on a practical level, to begin getting a handle on its data and turn it into actionable business intelligence?
In the view of UniCredit’s Richels, an important first step is defining the project scope. “Any project that uses big data needs to begin small and also to be based around some very specific use cases,” he says. “You cannot start by building a large framework, containing a significant amount of data points and then try to work something out from this. The business needs to ask a simple question: What challenges are we trying to solve?”
Yet Thomas Dolenga, MD, Global Head Cash Management, Product Development at UniCredit, points out that whilst a project that utilises big data should be focused on a specific question, it should also be able to answer future questions as they arise. “If a company is using big data to improve its payments process – asking, for instance, who it can move from cheque to ACH payments – it shouldn’t focus only on the data it holds on the companies it pays by cheque. It should take this opportunity to order all of its payments data so that it can then be reused for future queries. The key is to be as precise as possible with the question, but as broad as possible in respect of the data collection.”
Moreover, by understanding these requirements, it is possible to figure out not only which data should be found and analysed, but also what tools will be needed to do this. Business teams and IT can collaborate to reveal these needs. Only then will firms overcome barriers to leveraging their data, such as legacy systems that aren’t easily replaceable, lack of confidence in new technologies, and siloed functionality and data.
A technical approach
Whilst treasurers will likely only be interested in having this data to analyse, it is worth briefly exploring the various ways that companies may wish to store and aggregate big data. For many, centralisation is one of the key components for efficient use of big data. By centralising, companies are able to reduce the risk of having multiple versions of the truth.
For others, including Data Boiler’s To, centralisation is not necessarily the answer. “Putting everything into a central data warehouse and maintaining that every year is very expensive; an ‘Enterprise Service Bus’ architecture (a system of connecting and unifying multiple data inputs) is capable of addressing many non-standard data issues, but it too is not cheap.”
He suggests tiered data storage as a potential solution. High-tier, high-availability servers enable real-time delivery and instant backup for business-critical data, whilst the lower-tiered servers are used for non-urgent or archiving purposes. Tiered data storage has been made possible with the advent of new analysis tools. With older technologies, when data analysis was required, it was necessary to load all sets of data into a high-availability server to execute the process, To explains. “With modern in-memory analysis and tiered storage there is no need to do that; you can do the analysis wherever the data is.”
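The tiering To describes can be sketched as a simple routing layer. The Python below is a toy model (tier names, latencies and records all invented): business-critical data is served from a fast ‘hot’ tier, while archived data stays on cheaper storage and is analysed where it sits:

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    latency_ms: float                 # indicative access latency
    records: dict = field(default_factory=dict)

# Hypothetical two-tier layout: hot for live positions, cold for archives.
hot = Tier("hot", latency_ms=1)
cold = Tier("cold", latency_ms=250)
hot.records["cash_position_today"] = 4_200_000
cold.records["cash_position_2012"] = 3_100_000

def read(key):
    """Route a query to whichever tier holds the data, wherever it sits."""
    for tier in (hot, cold):
        if key in tier.records:
            print(f"'{key}' served from {tier.name} tier (~{tier.latency_ms}ms)")
            return tier.records[key]
    raise KeyError(key)

read("cash_position_today")   # fast path for business-critical data
read("cash_position_2012")    # analysed in place on the cheaper tier
```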
Tools of the trade
Whichever way data is organised behind the scenes, what will matter to treasurers is how they can get their hands on it and begin to make decisions informed by big data. To do this, they require tools that can turn raw data files into digestible information, and a number of such tools are available.
Many treasurers may, in fact, already have the tools needed at their disposal. “ERP systems have a broad range of analytical features,” explains UniCredit’s Richels. However, these are not necessarily intuitive and easy to use. “When using these solutions, you need qualified staff to perform data analysis, data synchronisation and data flow analysis,” he adds.
TMS providers are also beginning to upgrade their systems to include more robust analytical tools. These may be less complex to use than those included in ERP systems; however, their functionality may be more limited in comparison.
A third option is to use technology offered by a specialist big data solution provider, such as Qlik. These solutions are designed to be user friendly and to enable businesses of all shapes and sizes to search and explore vast amounts of data, creating highly visual charts and graphs that bring the data to life and almost instantly allow businesses to make more informed decisions. It is worth noting, however, that compared to an ERP or TMS, the functionality is likely to be limited and may not cover the specific needs of the treasury department.
Instead of investing in its own solutions, a treasury may wish to outsource its big data analytics to a third party, namely its banking partners. This is an increasingly attractive proposition, as a number of banks have invested heavily in big data analytics in recent years, using it to drive internal efficiencies and to offer insights to their clients.
“A lot of the value we offer to clients is through utilising the data that we have on them,” says Morgan McKenney, Head of Core Cash Management for Asia Pacific at Citi. “We are able to process this data through the numerous tools that we have developed and then offer insights and advice to our clients through various channels, including our online banking portal and tablet application.
“We have invested a great deal of effort in ensuring that these channels are as intuitive and useful for corporates as possible, leveraging our user interface (UI) specialists in our Citi Innovation Lab in Dublin,” she adds. “After all, it is only by visualising the data that it comes alive and enables corporates to make intelligent business decisions.”
Ultimately, whichever path the treasury chooses to take, what remains key is that the skills and expertise of a seasoned treasury professional are present to make decisions off the back of this analysis.
Data danger
We have explored the benefits that big data analysis can offer the business, and various ways that organisations can realise them. It is also important, however, to look at the darker side of big data and the risks it may pose to businesses.
Data overload
Companies that place too great an emphasis on big data without managing it properly are likely to end up data rich but insight poor. It is a logical argument; companies do, after all, already have more data than they are able to manage. And whilst we have already discussed some of the ways to ensure big data is used in a focused and efficient manner, sometimes the temptation to let the scope creep can be too great.
Part of the reason for this is that companies have much more data than they actually need. Many keep data locked away ‘just in case’ it will be useful in the future. Some data, however, has a shelf-life and will become outdated. Companies therefore need to give data a ‘sell-by’ date and delete it once this passes.
This is essential given that the risk of data overload is only set to increase. As Citi’s McKenney explains: “The last 30 years have been about connecting people, and an enormous amount of data has been collected in that time. The next five to ten years however, will be about connecting things. This means a significant increase in data, highlighting the importance for businesses to have a progressive data strategy.”
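A ‘sell-by’ date of the kind described above can be enforced with something as simple as a scheduled purge. The minimal Python sketch below (record layout and retention period invented for illustration) drops anything past its retention date instead of hoarding it:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=7 * 365)   # example policy: keep records seven years

# Hypothetical dataset: each record carries the date it was captured.
records = [
    {"id": 1, "captured": date(2008, 5, 1), "payload": "old survey responses"},
    {"id": 2, "captured": date(2016, 2, 1), "payload": "current AR ledger"},
]

def purge(records, today=None):
    """Keep only records still within their retention window."""
    today = today or date.today()
    return [r for r in records if today - r["captured"] <= RETENTION]

print(purge(records, today=date(2016, 6, 1)))  # record 1 is past its sell-by date
```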
Searching for the perfect answer
Linked to the risk of data overload, is the risk of analysis paralysis: waiting for the perfect answer. “It is correct that the quality of business intelligence derived through big data analytics is only as good as the source data itself,” says To. “However, a good decision, made now and pursued aggressively, is substantially superior to a perfect decision made too late. Thus, businesses do not need to wait until data management is completed to perform real-time analytics.”
Indeed, this is a point agreed upon by Citi’s Lee, who states: “Of course, the more data that a business has, the better this will be, enabling noise to be cancelled out and more comparisons to be made. However, at the same time the data doesn’t have to be 100% accurate; it just needs to be directionally accurate. Sometimes businesses can overlook this and hesitate over making decisions as a result.”
Cyber security
A topic that is unavoidable when discussing any use of technology is cyber-security. And indeed, big data is no exception to this rule. As Citi’s McKenney outlines: “Data security is important. The speed and effectiveness of cyber-attacks will only continue to increase, so clients need to ensure they have thought about this and have robust controls around their data.”
What’s next
It is likely that in the future, the companies that utilise big data to its fullest will be the most successful. Big data provides a big opportunity to drive internal efficiencies and greater commercial opportunities, as well as offering many other benefits. But, of course, challenges remain. Big data can be highly complex and span the entire business, which means the whole organisation has to be aligned in order to tackle it. But the positives far outweigh the negatives, and if your treasury is not already looking at this space, it should start soon.
Checklist – Steps to big data success
There are a number of steps which must be followed by companies setting up a new data collation and analytics project:
Decide the specific data sources that drive the process the company wants to analyse.
Assess the quality of data sources that will feed the analytics.
Create the IT architecture to deliver the model – which potentially includes buying the hardware and software to handle data integration, manage a data warehouse and host the analytics application(s).
Create the set of rules that will be used by the platform to pull source data and transform it into a usable format for the analytics application(s) – see the sketch after this list.
Implement the data analytics project and test the validity of the results.
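To make the rule-definition step more tangible, here is a minimal, hypothetical sketch in Python: a declarative rule set mapping each source system’s fields and formats onto the common schema the analytics application expects. The source names, fields and scaling are all invented:

```python
# Hypothetical rule set: one entry per source system, mapping raw field
# names and units onto the schema the analytics layer expects.
RULES = {
    "erp":  {"date": "posting_date", "amount": "amt_doc_curr", "scale": 1.0},
    "bank": {"date": "value_date",   "amount": "amt_cents",    "scale": 0.01},
}

def transform(source, row):
    """Apply a source's rule to turn one raw row into the common format."""
    rule = RULES[source]
    return {
        "date": row[rule["date"]],
        "amount": float(row[rule["amount"]]) * rule["scale"],
        "source": source,
    }

print(transform("bank", {"value_date": "2016-03-01", "amt_cents": 120050}))
# -> {'date': '2016-03-01', 'amount': 1200.5, 'source': 'bank'}
```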
If driven by treasury, clearly any such project will require both treasury and IT resources to prepare and implement. Innumerable other internal and external units could be involved as the relevant data is tracked and integrated – such as AP, AR, interest rate and foreign exchange management; CRM; pension management; data from the physical supply chain; shared service centres; payment factories; and data drawn from third-party systems.
It is critical to have such a project driven from above in order to ensure that all those involved in the project – or those whose functions touch on the project – are fully committed to seeing it through.