The Treasurer’s Guide to Digitisation 2015

Big Data and enhanced analytics

Published: Sep 2015

‘Big Data is not a problem, it is an opportunity.’ This may sound like the kind of empty mantra beloved of assertiveness counsellors or lifestyle gurus, but in this case the statement might just be true. Since the concept of Big Data first emerged in the early 2000s, the term has often been used to describe a threat, employed as shorthand for impending meltdown as businesses sink under the weight of uncountable bits and bytes. It is certainly true that the volume of data accrued by companies has increased exponentially over the past decade, and that this growth is not slowing down any time soon.

In fact, to try to quantify just how ‘big’ Big Data is, IBM published an ‘infographic’ in 2014 showing that an estimated 2.3 trillion gigabytes of data are created each day and that 40 zettabytes (43 trillion gigabytes) of data will be created by 2020. Data analysis and metadata (data about data) generate yet more data, ensuring that growth really is exponential.

But it is entirely possible to be positive about Big Data and to see it as an opportunity, not a threat. Of course, it has to be properly managed if raw data is to be turned into actionable information, but this can be achieved if measures are taken around its capture, storage, accessibility and analysis.

Understanding that Big Data is not just one function or business unit’s mass of data, but the enterprise-wide collection, is the key to unlocking its capacity to advance the company’s cause. Tapping into different cross-functional sets of information can deliver deeper insight than staying strictly within departmental boundaries.

Treasurers who have access to a variety of different data sets may, for example, be able to bring greater accuracy to their forecasting. Data from the broader finance function, and perhaps even from sales, marketing and the business units, could help paint a more precise view of the company’s positions at any given time and thus its funding needs. But whilst being able to look at a wider set of data can provide better intelligence, it is equally true that more information, if mismanaged, will just add to the confusion or even become a major source of risk. The issues surrounding data management at this level include duplication, inaccuracy, irrelevance and unavailability. In essence, if it takes too long to make a decision, it may be too late.

Since Big Data is, in part, a result of technological advances, it is essential that technology is also exploited to help capture, file, store, search, share, transfer, analyse and visualise that data in a way that benefits all. Leveraging Big Data to unlock more business value requires planning. Kelvin To, Founder and President of Data Boiler Technologies (and a guest speaker and lecturer at City University, Hong Kong), says the discovery of opportunities “often depends on the correct diagnosis of problems”.

For analysing historical bank transactions, for example, the typical reporting tool in a treasury management system (TMS) will probably limit analysis to a 12-month period at best – a real constraint if the business has an annual, repetitive pattern to its cash flows. The concept of Big Data in this context is about taking a larger data set, spanning a larger number of years, to build a better picture of those cyclical patterns. Expanding reach is not just about how far back in time the analysis goes but also about how broad a sweep is included – and this even covers macro-economic data. “Big Data is about being able to leverage all those data sets and being able to pick and choose how you want to use them,” says Bob Stark, VP Strategy, Kyriba. “For treasurers without the right tools it is not something they have been able to achieve in any meaningful way.”
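To make that concrete, the short sketch below is a minimal, assumed example in Python using the pandas library. It presumes a hypothetical transactions.csv file with ‘date’ and ‘amount’ columns (neither the file nor the column names come from any particular TMS) and compares monthly net cash flows across several years – the kind of multi-year seasonal view a 12-month reporting window cannot provide.

```python
# Minimal sketch: comparing monthly net cash flows across several years.
# Assumes a hypothetical transactions.csv with 'date' and 'amount' columns;
# the file and column names are illustrative, not from any specific TMS.
import pandas as pd

tx = pd.read_csv("transactions.csv", parse_dates=["date"])

# Aggregate to net cash flow per calendar month, per year.
tx["year"] = tx["date"].dt.year
tx["month"] = tx["date"].dt.month
monthly = tx.groupby(["year", "month"])["amount"].sum().unstack("year")

# Average seasonal profile across all available years -
# a pattern a single 12-month window cannot confirm.
seasonal_profile = monthly.mean(axis=1)
print(monthly)
print(seasonal_profile)
```

The output is simply a month-by-year grid of net flows plus an average seasonal profile; the point is that the same few lines work whether the history covers two years or ten.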

The financial crisis of 2008 certainly provided motivation for looking deeper into, and more widely at, data, with most forms of risk mitigation now higher on the agenda, notes Alex Robertson, Head of Treasury Analytics, Treasury Solutions, Capita Asset Services. The reliance on more data and deeper analysis to protect the business (counterparty credit risk might now take in CDS spreads, equity prices, tier one capital ratios, bond spreads and so on) naturally demands more time, but so too does the need to seek out different investment and funding options and, notes Robertson, “this is all increasing the scope of information that treasurers need to look for and understand”.
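To give a flavour of how such disparate indicators might be pulled together, the sketch below combines CDS spreads, equity price moves and tier one capital ratios into a single, naive weighted score. The weights, normalisation ranges and input figures are purely illustrative assumptions, not a methodology endorsed by anyone quoted here.

```python
# Illustrative only: a naive weighted score combining the kinds of
# counterparty indicators mentioned above. The weights, ranges and
# inputs are invented for the example and carry no real-world meaning.

def counterparty_score(cds_spread_bps, equity_move_pct, tier1_ratio_pct):
    """Higher score = higher perceived counterparty risk (0-100 scale)."""
    # Normalise each indicator to a rough 0-1 scale (assumed ranges).
    cds_component = min(cds_spread_bps / 500.0, 1.0)                    # 500bps treated as a ceiling
    equity_component = min(max(-equity_move_pct, 0) / 50.0, 1.0)        # large equity falls add risk
    capital_component = min(max(12.0 - tier1_ratio_pct, 0) / 12.0, 1.0) # thin capital adds risk

    weights = {"cds": 0.5, "equity": 0.2, "capital": 0.3}               # assumed weights
    score = (weights["cds"] * cds_component
             + weights["equity"] * equity_component
             + weights["capital"] * capital_component)
    return round(score * 100, 1)

print(counterparty_score(cds_spread_bps=180, equity_move_pct=-12, tier1_ratio_pct=10.5))
```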

If confusion reigns when too much information is available, Richard Childes, Director, Product Marketing, OpenLink Financial, says the problem is usually that people “do not know what they want, ask the wrong questions and end up being inundated with data”. In the Big Data world, knowing where to source data and then asking the right questions is critical.

Barriers

When it comes to seeking answers, barriers to meaningful business analytics may arise from existing technology, resources and processes. “Overlapping data sources and contradicting data versions often exist both within and outside the organisation,” notes Data Boiler’s To. This can be exacerbated by poor vendor performance and integration issues, constant battles with non-standardised technologies and frequent requirement changes. In ‘fire-fighting’ mode, many organisations are attempting to adapt by rolling out ‘retrofitting’ projects, “trying a hundred different things and hoping some of them will work” and/or using Business Process Outsourcing “to let someone else worry about it”.

For a treasurer, spending time identifying, collecting, cleansing and analysing additional data to the point where they can make meaningful decisions confidently means spending less time on core activities. Most businesses will have a clear corporate strategy, notes Robertson. “What is required here is a strategy for business analytics.”

Before any project commences, all companies need to ask this simple question: “What business challenge are we trying to solve?” By understanding what the requirements are, it is possible to figure out not only which data should be found and analysed but also what tools will be needed to do that. Business teams and IT should collaborate to reveal those needs. Only then will firms overcome barriers to leveraging their data, such as legacy systems that aren’t easily replaceable, lack of confidence in new technologies, and siloed functionality and data.

Big Data and business analytics issues can be tackled internally as long as the right plan and resources are in place. Alternatively, the project can be outsourced to a third party, typically using cloud technology to manage and process data offsite. Cloud technology still raises the issue of security (and may even be prohibited by corporate policy or cross-border regulations) but in reality the cloud is as secure as, or more secure than, in-house storage, a view now espoused by many IT professionals, who tend to be the drivers behind such moves. Be warned though: data management projects can be costly when IT and the stakeholders who will use the information are not aligned and the requirements for the project are unclear, says Leonardo Orlando, Manager in Finance and Risk Business Services at Accenture UK. “When the IT function and those who will ultimately use the data are in agreement about their final goal and have a mutual understanding of the project trigger, such as regulatory compliance, these projects tend to be more readily supported by those involved.”

Technology as the enabler

For Orlando, centralisation is one of the key components of efficient Big Data use. “Without using a unique source, there is the risk of having multiple versions of the truth,” he notes. Standardisation, however, is not always the answer when there are multiple requirements across an organisation. One of the most efficient Big Data methodologies therefore involves creating a common ‘golden source’ that can provide information at its lowest level of granularity, while still offering organisations the flexibility to use additional data sources to meet unique departmental needs.
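One way to picture that ‘golden source’ idea is a single transaction-level data set from which each function derives its own aggregation. The sketch below is a simplified, assumed illustration: the table, its columns and the two departmental views are invented for the example and do not represent any particular organisation’s model.

```python
# Sketch of a single granular 'golden source' with per-function views.
# The table and column names are illustrative assumptions.
import pandas as pd

golden = pd.DataFrame({
    "entity":   ["UK", "UK", "DE", "DE"],
    "customer": ["A", "B", "A", "C"],
    "currency": ["GBP", "GBP", "EUR", "EUR"],
    "amount":   [120.0, -40.0, 75.0, 30.0],
})

# Treasury view: net position per entity and currency.
treasury_view = golden.groupby(["entity", "currency"])["amount"].sum()

# Sales view: gross inflows per customer.
sales_view = golden[golden["amount"] > 0].groupby("customer")["amount"].sum()

print(treasury_view)
print(sales_view)
```

Both views are derived on demand from the same granular records, so there is one version of the truth even though each department works at its own level of detail.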

Current technology is capable of meeting Big Data requirements. What’s important, says Orlando, is to think through how your organisation wants to use Big Data and factor that in before selecting and implementing. “Technology is constantly evolving but there are multiple options that should be evaluated against business use cases to ascertain the most suitable – one size does not fit all.”

Notwithstanding his obvious bias as a technology vendor, Kyriba’s Stark believes companies need to centralise as much as possible. “It doesn’t mean that if you haven’t centralised it is impossible, but for a treasurer looking across 18 different systems to try to understand their cash forecast, rather than having it in one system, it’s easy to see which approach is simpler.”

But for Data Boiler’s To, centralisation is not necessarily the answer. “Putting everything into a central data warehouse and maintaining that every year is very expensive; an ‘Enterprise Service Bus’ architecture (a system of connecting and unifying multiple data inputs) is capable of addressing many non-standard data issues, but it too is not cheap.”

He suggests tiered data storage as a potential solution. High-tier, high-availability servers enable real-time delivery and instant backup for business-critical data, whilst lower-tier servers are used for non-urgent or archiving purposes. This approach has been made practical by the advent of new analysis tools: with older technologies, when data analysis was required, it was necessary to load all the data sets onto a high-availability server to execute the process. With modern in-memory analysis and tiered storage there is no need to do that; the analysis can be performed wherever the data sits.
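As a rough illustration of the tiered approach, the sketch below assumes two stores: a ‘hot’ set of recent, business-critical records held in memory, and a ‘cold’ archive file that is only read when an analysis needs deeper history. The tier boundary, file name and records are all invented for the example.

```python
# Sketch of tiered data access: recent data kept 'hot' in memory,
# older data read from a cheaper archive only when an analysis needs it.
# The tier boundary, file name and records are illustrative assumptions.
import csv
from datetime import date

HOT_TIER = [  # e.g. current-year, business-critical records held in memory
    {"date": date(2015, 8, 1), "amount": 250.0},
    {"date": date(2015, 9, 1), "amount": -90.0},
]

def load_cold_tier(path="archive_2010_2014.csv"):
    """Read archived records only when a long-horizon analysis requires them."""
    with open(path, newline="") as f:
        return [{"date": date.fromisoformat(r["date"]), "amount": float(r["amount"])}
                for r in csv.DictReader(f)]

def net_flow(since):
    records = list(HOT_TIER)
    if since.year < 2015:            # only touch the archive when we must
        records += load_cold_tier()
    return sum(r["amount"] for r in records if r["date"] >= since)

print(net_flow(since=date(2015, 1, 1)))   # served from the hot tier alone
```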

A single version of the truth

One of the persistent issues that affect the analysis process is cleansing and normalising data. “Unless you have common identifiers to integrate data you will have to spend a significant amount of time doing it manually,” Robertson explains. There are rules-based tools available to help find and normalise common linkages between data sets (or a one-off SQL-based migration script for the more IT-literate), and there are systems to ensure version control and document management (Cluster7 for spreadsheets, for example), but users must have confidence that their data has been properly integrated.
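The kind of rules-based normalisation Robertson describes can start with something as simple as agreeing a canonical form for a shared identifier before joining data sets. The snippet below is a minimal, assumed example: it standardises counterparty names from two invented sources so they can be matched on a common key; the cleaning rules are illustrative rather than exhaustive.

```python
# Minimal sketch of rules-based normalisation of a shared identifier
# (counterparty name) before joining two data sets. Rules and data
# are illustrative assumptions only.
import re

def normalise(name):
    name = name.strip().upper()
    name = re.sub(r"[.,]", "", name)                        # drop punctuation
    name = re.sub(r"\b(LTD|LIMITED|PLC|INC)\b", "", name)   # drop legal suffixes
    return re.sub(r"\s+", " ", name).strip()

bank_records = {"Acme Ltd.": 1_200_000, "Globex PLC": 450_000}
erp_records  = {"ACME LIMITED": "supplier-001", "GLOBEX": "supplier-002"}

bank_by_key = {normalise(k): v for k, v in bank_records.items()}
erp_by_key  = {normalise(k): v for k, v in erp_records.items()}

matched = {k: (bank_by_key[k], erp_by_key[k]) for k in bank_by_key if k in erp_by_key}
print(matched)   # {'ACME': (1200000, 'supplier-001'), 'GLOBEX': (450000, 'supplier-002')}
```

In practice the rule set grows with the data, which is exactly why users need confidence that the integration has been done consistently.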

Standardisation and centralisation clearly have a place in Big Data management, but companies should focus on making the result fast, resilient, scalable and flexible. In the real world, where funding is limited, this creates a genuine need to balance the cost and benefit of standardisation against the need to preserve IT agility for rapid response to business change.

In order to keep costs down, Orlando urges companies to prioritise goals, information and stakeholder needs. A company might create a roadmap for change that also factors in short-term compliance considerations, and then move gradually towards an integrated reporting platform that supports more efficient decision-making. Prioritisation requires cross-functional discussion, the aim being to find a common goal. It demands the buy-in of senior management, to drive fair and equitable discussion, and may even warrant the provision of a dedicated project manager to ensure focus. Everything should connect back to the business model.

One way of achieving unity when aggregating data from multiple sources, without recourse to a single central data warehouse, is to embrace ‘lean’ IT processes. The lean approach stems from the Japanese motor-manufacturing sector: it is a way of maximising efficiency through organisation and of removing the unnecessary through constant refinement. “If you look at all processes together in your system development, you will be able to prioritise the aspects that give the biggest ‘bang for your buck’,” says To, a keen advocate of the Lean Six Sigma approach. For him, in the context of Big Data, lean is about building a “flexible and agile” IT system that is capable of handling multiple standards.

Don’t forget the humans

Using technology to automate processes is a useful exercise in itself, but IT should be deployed as an enabler, helping treasurers take their processes to the next level. Automation can free up individuals to concentrate on more value-added activities, allowing the entire treasury team to contribute to data analysis and the search for new trends and opportunities within the world of Big Data.

In this respect, treasury experience and expertise are still essential. As Stark points out, “if you don’t know what you’re looking for and don’t have the experience to know which buttons to press, the technology won’t just magically do your job for you; you need to know what you are doing to learn from data.”

But then perhaps designing a system that will do everything is not the point. The real aim is to give treasurers transparent information drawn from a vast pool of data. This should enable them to make good decisions; the process of good decision-making itself should not change just because the mechanics have been automated. “Becoming over-reliant on any application is a bad thing,” states Robertson. “It should be a tool to make the best decision, not one to make the decision.”

Put into real-world terms, in the risk management space there is an increasing need for stress-testing, complex simulation and scenario analysis, using higher volumes of data at a more granular level. Whilst software can crunch the numbers and steer a decision in a certain direction, ultimately a call will have to be made based on professional judgement.
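As a toy illustration of scenario analysis in this vein, the sketch below runs a simple Monte Carlo simulation of the sterling value of a euro receivable under an assumed level of FX volatility. Every parameter (exposure, spot rate, volatility, horizon, number of scenarios) is invented; the point is that the software produces a distribution, while the judgement about how to act on it remains with the treasurer.

```python
# Toy scenario analysis: simulate a EUR receivable's GBP value under
# assumed FX volatility. All figures are illustrative assumptions.
import random
import statistics

random.seed(42)

eur_receivable = 10_000_000        # assumed exposure
spot = 0.73                        # assumed EUR/GBP spot rate
annual_vol = 0.10                  # assumed 10% annual volatility
horizon_years = 0.25               # three-month horizon
scenarios = 10_000

shocked_values = []
for _ in range(scenarios):
    shock = random.gauss(0, annual_vol * horizon_years ** 0.5)
    shocked_values.append(eur_receivable * spot * (1 + shock))

shocked_values.sort()
worst_5pct = shocked_values[int(0.05 * scenarios)]
print(f"Expected GBP value: {statistics.mean(shocked_values):,.0f}")
print(f"5% worst-case value: {worst_5pct:,.0f}")
```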

There is another reason why over-reliance is a bad thing. “People design computer systems and people are not infallible; their systems will do what they are designed to do but that does not mean they are right,” warns Childes. Data has to be interpreted correctly and anomalies spotted, even with the best technology. This means treasurers will always have to remain on the ball.

Whatever technology is used, the end user will still make their own analysis and evaluations. Experience, especially business experience, plays a part in data interpretation, but controls may be needed to tie decision-making closely enough to the information provided. In treasury, the risk management team will have defined a risk appetite and will monitor and control treasury actions to ensure they are compatible with it.

There will always be a need for treasury professionalism, but technology can offer a different and perhaps unexpected view when it comes to managing Big Data. What an intelligent Big Data programme enables the treasurer to do is look across the board, at places he or she would not normally think of as presenting opportunities. To believes that combining this broad view with a machine-learning approach to data analysis is the route to success. “If you don’t make that connection, you will be like everyone else because whatever you can think of, other intelligent individuals will be able to think of too!”

An interactive approach

A business is linked by many processes. Where data is generated, the individuals working with it day in, day out will have developed specific sensitivities to that data. This unique understanding, allied with the Big Data toolset, enables individual experience and insight to be drawn together to enhance the view of the business as a whole. Orlando refers to this as an “interactive and dynamic data flow”. Big Data is not just an IT issue, but one for the whole business to tackle. Everyone in the organisation who works with data can make better use of it, as long as they know how to access it, know what questions to ask and can use their professional experience to move towards the most appropriate answers.

Big Data checklist

Big Data is not a problem if a business has the right processes, the right teams and the right technology in place to manage it. This means ensuring the following:

  • There are clear functional requirements, which need to come from the information user.
  • There is a unique golden source (even if the level of data granularity may be different across the organisation).
  • The IT architecture is clear, simple and scalable.
  • The right teams are in place, with clear lines of responsibility for each task. Many organisations now have a dedicated Chief Data Officer.
  • There is strong data governance and data management. Centralisation is not necessary if strong data governance is in place.
