Technology

Thinking big

Published: Apr 2014
[Image: corridor of a working data centre]

Few in the business world can have failed to notice the concept of ‘Big Data’. Since it first emerged in the early 2000s, it has often been used as shorthand for ‘impending meltdown’ as businesses sink under the weight of uncountable bits and bytes. But the volume of data carried by a business as a whole could be a problem or an opportunity – it all depends on how data is managed and what is done with it to make it useful. Treasurers have access to a variety of different data sets that could, for example, help them improve their forecasts and so make better decisions. Being able to look at a wider set of data provides better intelligence; mismanaged information, however, is at best confusing and at worst a major source of risk.

To try to quantify just how ‘big’ Big Data is, IBM published an ‘infographic’ last year showing that an estimated 2.3 trillion gigabytes of data are created each day and that 40 zettabytes (43 trillion gigabytes) of data will be created by 2020. Data analysis and data-about-data (metadata) generate yet more data, ensuring that growth really is exponential. The problem with having so much data is that it can be difficult to capture, file, store, search, share, transfer, analyse and visualise without running into duplication, inaccuracy, irrelevance or unavailability. In essence, if it takes too long to make a decision, it may be too late.

Today, very few businesses will be unaware of the importance of their data. But leveraging Big Data to unlock more business value requires planning. Kelvin To, Founder and President of Data Boiler Technologies (and a guest speaker and lecturer at City University, Hong Kong), says the discovery of opportunities “often depends on the correct diagnosis of problems”.

For analysing historical bank transactions, for example, the typical reporting tool in a treasury management system (TMS) will probably limit analysis to a 12-month period at best – a real constraint when the business’s cash flows follow an annual, repetitive cycle, says Bob Stark, VP Strategy, Kyriba. “You end up narrowing down what you are analysing very quickly.” The concept of Big Data in this context, however, is about taking a larger data set, across a larger number of years, to give a better idea of what the cyclical pattern in the data is going to look like. For Stark, expanding reach means not just how far back in time the analysis goes but also how broad a sweep is included – and this even covers macro-economic data. “Big Data is about being able to leverage all those data sets and being able to pick and choose how you want to use them; for treasurers without the right tools it is not something they have been able to achieve in any meaningful way,” says Stark.
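
To make Stark’s point concrete, the minimal sketch below – not drawn from the article, and assuming a hypothetical export of bank transactions with ‘date’ and ‘amount’ columns – shows how widening the window from a single year to several years exposes a seasonal pattern that a 12-month report would hide.

```python
# Illustrative sketch only: widening cash flow analysis beyond a single
# 12-month window to expose multi-year seasonality.
# Assumes a hypothetical export of bank transactions with 'date' and 'amount'.
import pandas as pd

tx = pd.read_csv("bank_transactions.csv", parse_dates=["date"])

# Monthly net cash flow, then pivot so rows = month and columns = year.
monthly = (
    tx.set_index("date")["amount"]
      .resample("M").sum()
      .to_frame("net_flow")
)
monthly["year"] = monthly.index.year
monthly["month"] = monthly.index.month
seasonal = monthly.pivot_table(index="month", columns="year", values="net_flow")

# The average across years gives a simple seasonal baseline for forecasting;
# a single-year view cannot show whether a dip is seasonal or genuinely unusual.
seasonal["avg_across_years"] = seasonal.mean(axis=1)
print(seasonal.round(0))
```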

The financial crisis of 2008 certainly provided motivation for looking further into, and more widely at, data, with most forms of risk mitigation moving higher up the agenda, notes Alex Robertson, Head of Treasury Analytics, Treasury Solutions, Capita Asset Services. The reliance on more data – both internally and externally sourced – and on deeper analysis to protect the business (counterparty credit risk might now take in CDS spreads, equity prices, tier-one capital ratios, bond spreads and so on) naturally demands more time. So too does the need to seek out different investment and funding options and, notes Robertson, “this is all increasing the scope of information that treasurers need to look for and understand”.
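
As a purely illustrative sketch of the broader counterparty monitoring Robertson describes – the indicators are those he names, but the field names, thresholds and the two-breach rule below are invented for the example – a simple watch flag might combine several externally sourced measures:

```python
# Hypothetical sketch: combining several counterparty indicators into a simple
# watch flag. Thresholds and field names are illustrative, not a prescribed model.
from dataclasses import dataclass

@dataclass
class CounterpartySnapshot:
    name: str
    cds_spread_bps: float       # 5-year CDS spread, basis points
    equity_move_30d_pct: float  # 30-day equity price change, percent
    tier1_ratio_pct: float      # regulatory tier-one capital ratio, percent
    bond_spread_bps: float      # senior bond spread over benchmark, basis points

def breach_count(s: CounterpartySnapshot) -> int:
    """Count how many indicators breach their (illustrative) limits."""
    checks = [
        s.cds_spread_bps > 200,
        s.equity_move_30d_pct < -15,
        s.tier1_ratio_pct < 10,
        s.bond_spread_bps > 250,
    ]
    return sum(checks)

bank = CounterpartySnapshot("Bank A", cds_spread_bps=240,
                            equity_move_30d_pct=-18,
                            tier1_ratio_pct=11.2, bond_spread_bps=180)
if breach_count(bank) >= 2:
    print(f"Review exposure to {bank.name}: {breach_count(bank)} limit breaches")
```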

If confusion reigns when too much information is available, Richard Childes, Director, Product Marketing, OpenLink Financial, says the problem is usually that people “do not know what they want, they ask the wrong questions and end up being inundated with the data”. In the Big Data world, knowing where to source data and then asking the right questions is critical.

What’s stopping you?

When it comes to seeking answers, Robertson suggests barriers to meaningful business analytics may arise from existing technology, resources and processes. “Overlapping data sources and contradicting data versions often exist both within and outside the organisation,” adds Data Boiler’s To. “This can be exacerbated by poor vendor performance and integration issues, constant battles with non-standardised technologies and frequent requirement changes.” In ‘fire-fighting’ mode, he notes many organisations are attempting to adapt by rolling out “retrofitting” projects, “trying a hundred different things and hoping some of them will work” and/or using Business Process Outsourcing “to let someone else worry about it”.

For a treasurer, spending time identifying, collecting, cleansing and analysing additional data “to the point where they can make meaningful decisions confidently” means spending less time on core activities, comments Robertson. “Most businesses will have a clear corporate strategy; what is required here is a strategy for business analytics.”

Starting a project

Before any project commences, Childes urges all companies to ask this simple question: “What business challenge are you trying to solve?” By understanding what the requirements are it is possible to figure out not only which data should be found and analysed but also what tools will be needed to do that. Business teams and IT should collaborate to reveal those needs. Only then will firms overcome barriers to leveraging their data, such as legacy systems that aren’t easily replaceable, lack of confidence in new technologies, and siloed functionality and data.

Big Data and business analytics issues can be tackled internally as long as the right plan and resources are in place. Alternatively, as To states above, the project can be outsourced to a third party, typically using cloud technology to manage and process data offsite. Cloud technology still raises the issue of security (and may even be prohibited by corporate policy), but in reality the cloud is as secure as, or more secure than, in-house storage – a view now espoused by many IT professionals who, Stark notes, tend to be the drivers behind such moves. Be warned though: data management projects can be costly when IT and the final stakeholders who will use the information are not aligned and the requirements for the project are unclear, says Leonardo Orlando, Manager in Finance and Risk Business Services at Accenture UK. “When the IT function and those who will ultimately use the data are in agreement about their final goal and have a mutual understanding of the project trigger, such as regulatory compliance, these projects tend to be more readily supported by those involved.”

Technical solutions

Centralisation is one of the key components for efficient use of Big Data for Orlando. “Without using a unique golden source, there is the risk of having multiple versions of the truth,” he notes. “Standardisation, however, is not the answer when there are multiple requirements across an organisation.” One of the most efficient Big Data methodologies therefore involves creation of a common golden source, which can provide information at its lowest level of granularity. However, it also needs to offer organisations the flexibility to use additional data sources to meet unique departmental needs.
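
A minimal sketch of Orlando’s ‘golden source’ idea – with invented table and column names – is one granular record of transactions from which each function derives its own view, rather than each maintaining a separate copy:

```python
# Illustrative sketch only: one granular 'golden source' of transactions,
# with department-specific views derived from it. Column names are hypothetical.
import pandas as pd

golden = pd.DataFrame({
    "txn_id":   [1, 2, 3, 4],
    "entity":   ["UK", "UK", "DE", "DE"],
    "currency": ["GBP", "GBP", "EUR", "EUR"],
    "category": ["AR", "AP", "AR", "AP"],
    "amount":   [1200.0, -450.0, 900.0, -300.0],
})

# Treasury view: net position by entity and currency.
treasury_view = golden.groupby(["entity", "currency"])["amount"].sum()

# Accounting view: flows by category; same source, different granularity.
accounting_view = golden.groupby("category")["amount"].sum()

print(treasury_view, accounting_view, sep="\n\n")
```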

Current technology is capable of meeting Big Data requirements. What’s important, says Orlando, is to think through how your organisation wants to use Big Data and factor that in before selecting and implementing. “Technology is constantly evolving but there are multiple options that should be evaluated against business use cases to ascertain the most suitable – one size does not fit all.”

Notwithstanding his obvious bias as a technology vendor, Stark believes companies need to centralise as much as possible. “It doesn’t mean that if you haven’t centralised it is impossible, but for a treasurer looking across 18 different systems to try to understand their cash forecast, rather than having it in one system, it’s easy to see which approach is simpler.”

But for Data Boiler’s To, centralisation is not necessarily the answer. “Putting everything into a central data warehouse and maintaining that every year is very expensive; an ‘Enterprise Service Bus’ architecture [a system of connecting and unifying multiple data inputs] is capable of addressing many non-standard data issues, but it too is not cheap.”

He suggests tiered data storage as a potential solution. High-tier, high-availability servers provide real-time delivery and instant backup for business-critical data, whilst lower-tier servers are used for non-urgent data or archiving. Tiered storage has been made practical by the advent of new analysis tools: with older technologies, whenever analysis was required, all of the data had to be loaded onto a high-availability server to run the process, To explains. “With modern in-memory analysis and tiered storage there is no need to do that; you can do the analysis wherever the data is.”
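
The sketch below illustrates the general idea rather than any vendor’s implementation: current data sits on a fast ‘hot’ tier, while older years are queried selectively from an archive tier instead of being bulk-loaded onto a high-availability server. The file paths, column names and use of the Parquet format are assumptions made for the example.

```python
# Illustrative sketch: querying archived data in place instead of bulk-loading
# it onto a high-availability server. Paths and columns are hypothetical;
# assumes the archive tier is stored as Parquet files.
import pandas as pd

# Hot tier: current year, kept on fast storage for real-time reporting.
current = pd.read_parquet("hot_tier/transactions_2014.parquet")

# Cold tier: older years, read selectively - only the columns and rows needed
# for this analysis - rather than restoring the whole archive.
history = pd.read_parquet(
    "cold_tier/transactions_2009_2013.parquet",
    columns=["date", "amount", "counterparty"],
    filters=[("counterparty", "=", "Bank A")],  # push the filter down to storage
)

combined = pd.concat([current[["date", "amount", "counterparty"]], history])
print(combined.groupby(combined["date"].dt.year)["amount"].sum())
```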

One of the persistent issues, according to Robertson, is cleansing and normalising data. “Unless you have common identifiers to integrate data you will have to spend a significant amount of time doing it manually,” he explains. There are rules-based tools available to help look for and normalise common linkages between data (to quite a granular level), and there are systems to ensure version control and document management, but users must have confidence that their data has been properly integrated.
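
As a rough, hypothetical illustration of such rules-based normalisation – the name-mapping rules and data sets below are invented – a common identifier can be derived before two sources are merged:

```python
# Hypothetical sketch of rules-based normalisation: mapping inconsistent
# counterparty names from two sources onto a common identifier before merging.
import pandas as pd

NAME_RULES = {            # simple normalisation rules; real tools go much further
    "hsbc bank plc": "HSBC",
    "hsbc": "HSBC",
    "deutsche bank ag": "DEUTSCHE",
    "deutsche bk": "DEUTSCHE",
}

def normalise(name: str) -> str:
    """Lower-case, strip punctuation and whitespace, then apply the rule table."""
    key = " ".join(name.lower().replace(".", "").split())
    return NAME_RULES.get(key, key.upper())

bank_data = pd.DataFrame({"counterparty": ["HSBC Bank plc", "Deutsche Bk"],
                          "balance": [5_000_000, 2_500_000]})
erp_data = pd.DataFrame({"counterparty": ["hsbc", "Deutsche Bank AG"],
                         "invoices_due": [750_000, 320_000]})

for df in (bank_data, erp_data):
    df["counterparty_id"] = df["counterparty"].map(normalise)

merged = bank_data.merge(erp_data, on="counterparty_id", suffixes=("_bank", "_erp"))
print(merged[["counterparty_id", "balance", "invoices_due"]])
```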

Standardisation and centralisation clearly have a place in Big Data management, but for To, “firms should focus on optimising what ought to be fast, resilient, scalable and flexible”. In the real world, where funding is limited, this creates a genuine need to balance the cost and benefit of standardisation with the need to preserve IT agility for rapid response to business change.

In order to keep costs down, Orlando urges companies to prioritise goals, information and stakeholder needs. “A company might create a roadmap for change that also factors in short-term compliance considerations, and then slowly move towards an integrated reporting platform that enhances a more efficient decision-making process.” Prioritisation requires cross-functional discussion, the aim being to find a common goal. It demands the buy-in of senior management, to drive fair and equitable discussion, and may even see the provision of a dedicated project manager to ensure focus. Everything should connect back to the business model.

One way of achieving unity when aggregating data from multiple sources, without recourse to a single central data warehouse, is to embrace ‘Lean’ IT processes. The Lean process, To explains, stems from the Japanese motor-manufacturing sector. It is a way of maximising efficiency through organisation and removing the unnecessary through constant refinement. “If you look at all processes together in your system development, you will be able to prioritise the aspects that give the biggest ‘bang for your buck’.” In the context of Big Data, Lean is about building a “flexible and agile” IT system that is capable of handling multiple standards.

Human intelligence

Using technology to automate processes is a useful exercise in itself, but Stark believes IT should be deployed to enable treasuries to take their processes “to the next level”. Automation can free up individuals to concentrate on more value-added activities, allowing the entire treasury team to contribute to data analysis and the search for new trends and opportunities. In this respect, treasury experience and expertise are still essential. “If you don’t know what you’re looking for and don’t have the experience to know which buttons to press, the technology won’t just magically do your job for you; you need to know what you are doing to learn from data.”

From Robertson’s perspective, designing a system that will do everything is not the point. “What we are really trying to do is give treasurers transparent information to enable them to make good decisions; the process of good decision-making shouldn’t change,” he says. “Becoming over-reliant on any application is a bad thing; it should be a tool to make the best decision, not one to make the decision.” In the risk management space, for example, there is an increasing need for stress-testing, complex simulation and scenario analysis, using higher volumes of data and at a more granular level. Whilst software can steer a decision in a certain direction, ultimately there will have to be a call made based on professional judgement.
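
By way of illustration only – the figures, distributions and six-month horizon are invented, and this is not a model prescribed by any of the interviewees – a simple Monte Carlo stress test of a cash position might look like this:

```python
# Illustrative sketch of scenario analysis: a simple Monte Carlo stress test
# of a projected cash position. All figures and distributions are invented.
import numpy as np

rng = np.random.default_rng(seed=42)
n_scenarios = 10_000
months = 6

base_inflows = 12.0    # hypothetical monthly receipts, GBP millions
base_outflows = 10.5   # hypothetical monthly payments, GBP millions

# Shock receipts and payments independently each month.
inflows = rng.normal(base_inflows, 1.5, size=(n_scenarios, months))
outflows = rng.normal(base_outflows, 1.0, size=(n_scenarios, months))
closing_position = 5.0 + (inflows - outflows).sum(axis=1)  # opening cash of 5m

# The model only steers the decision: the treasurer still judges whether the
# shortfall probability warrants extra funding headroom.
shortfall_prob = (closing_position < 0).mean()
p5 = np.percentile(closing_position, 5)
print(f"P(shortfall) = {shortfall_prob:.1%}, 5th percentile position = {p5:.1f}m")
```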

“People design computer systems and people are not infallible; their systems will do what they are designed to do but that does not mean they are right,” notes Childes. Data has to be interpreted correctly and anomalies spotted so even with the best technology “treasurers will always have to remain on the ball”.

Whatever technology is used, the final user may still make their own analysis and evaluations, says Orlando. “Experience, especially when business-related, may be a factor in data interpretation, but some control may be needed to tie decision-making processes sufficiently to the information provided.” With regard to treasury, risk management will want to define a risk appetite and to monitor and control treasury actions so that they remain compatible with that appetite.

In To’s view, there will always be a need for treasury professionalism, but he suggests technology can offer a different and perhaps unexpected view. “What Big Data enables you to do is look across the board, where places you would never think of may present big opportunities.” He believes that combining the broad sweep of Big Data with a machine-learning approach to analysis is the route to success. “If you don’t make that connection, you will be like everyone else because whatever you can think of, other intelligent individuals will be able to think of too!”
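
As a loose, hypothetical illustration of pairing broad data with machine learning – the features and the use of an isolation forest are assumptions for the example, not To’s method – an anomaly detector might flag unusual payments for human review:

```python
# Hypothetical sketch: fitting a simple model to flag unusual payment patterns
# that a column-by-column review might miss. Feature values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per payment: amount, hour of day, days since last payment to payee.
normal = rng.normal([50_000, 11, 30], [15_000, 2, 5], size=(500, 3))
odd = np.array([[480_000, 3, 1], [250_000, 23, 2]])   # injected outliers
payments = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(payments)
flags = model.predict(payments)          # -1 marks suspected anomalies

print(f"{(flags == -1).sum()} payments flagged for human review")
```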

What next?

A business is linked by many processes. Where data is generated, the individuals working with it day in, day out will have developed specific sensitivities to that data. This unique understanding, allied with the Big Data toolset, enables every experience and insight to be drawn together to enhance the view of the business as a whole, through what Orlando refers to as an “interactive and dynamic data flow”. Big Data is thus not just an IT issue but one for the whole business to tackle, and everyone in the organisation who works with data can make better use of it. As Childes says, “you just have to know how to access it, what questions to ask and what tools to use to mine that data to get the answers you are looking for”. Instead of seeing Big Data as a threat, why not embrace it?

Big Data checklist

According to Accenture UK’s Orlando, Big Data does not become a “big problem” if a business has the right processes, the right teams and the right technology in place. He suggests the following:

  • There are clear functional requirements, which need to come from the information user.
  • There is a unique golden source (even if the level of data granularity may be different across the organisation).
  • The IT architecture is clear, simple and scalable. To help meet regulatory requirements, corporates and banks may want to adjust their processes without changing the information source, rather than making manual adjustments, which can affect data integrity.
  • Having the right teams with clear lines as to who is responsible for which tasks is key. Many organisations now have a dedicated Chief Data Officer.
  • Data governance and data management are essential. Not all data has to be physically stored in one big box as long as strong data governance is in place.
