The value of treasuring your data and the cost of neglect

Reports | Sponsored | Technology

Written by Jelle de Jong

Terms such as master data, data cleansing, data structure and data quality are enough to put most of us to sleep.

Most senior people love to be consumers of the data – analysis and insights made possible by master data have boosted many a career and are essential to many of a company’s core processes.  However, thinking long and hard about how to structure, maintain and store this data is usually something left to “other people”, or not done at all.

Unfortunately, this does come at a cost; inconsistent and unclean master data is detrimental to even basic analytics work, resulting in frustrating additional ad-hoc work or, even worse, wrong conclusions being drawn.  In the words of one of our clients: “Only 20% of our time is spent on conducting actual analyses; the remaining 80% is spent on cleansing and structuring the data.”  He then added insult to injury by saying that “as the amount of data increases, this ratio worsens!”

Secondly, poor master data leads to massive inefficiencies in business processes such as setting discounts, processing invoices, arranging payments, etc. Finally, IT implementation ends up being much costlier, because time is spent on understanding and cleaning the data that needs to be interfaced instead of on the change that needs to take place.

Reasons that lead to bad master data management

The most important reason for corrupted and unstructured master data is a lack of understanding of the crucial role of master data management. This results in two frequently seen bad practices: (1) master data management is made the IT team’s responsibility with limited to no involvement from the business, and (2) master data management is centralised at a regional/global level with insufficient flexibility for markets to add their own characteristics.

Both practices lead to a situation where master data stored in the central system does not serve its required local business purpose.  As a result, people start developing their own “master data”.  In our work we regularly see that when we ask for something as basic as the product or customer master data from three different departments, we get back three completely different datasets.
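A simple way to make this divergence visible is to compare the per-department copies field by field. The sketch below is a hypothetical illustration; the department names, customer IDs and fields are invented, not taken from any client system.

```python
# Hypothetical illustration: three departments each keep their own copy of
# "customer master data". A field-by-field comparison surfaces every place
# where the copies disagree. All names and records here are invented.

def find_mismatches(copies):
    """Compare per-department copies of master data keyed by customer ID.

    copies: dict mapping department name -> {customer_id: {field: value}}
    Returns {customer_id: {field: {department: value}}} for fields that differ.
    """
    mismatches = {}
    all_ids = set().union(*(c.keys() for c in copies.values()))
    for cid in all_ids:
        records = {dept: data.get(cid, {}) for dept, data in copies.items()}
        fields = set().union(*(r.keys() for r in records.values()))
        for fld in fields:
            values = {dept: r.get(fld) for dept, r in records.items()}
            if len(set(values.values())) > 1:  # more than one distinct value
                mismatches.setdefault(cid, {})[fld] = values
    return mismatches

copies = {
    "sales":   {"C001": {"name": "Acme Ltd",  "terms": "NET30"}},
    "finance": {"C001": {"name": "Acme Ltd.", "terms": "NET30"}},
    "ops":     {"C001": {"name": "Acme Ltd",  "terms": "NET60"}},
}
print(find_mismatches(copies))
```

Even a crude report like this can kick off the conversation about which copy, if any, is the source of truth.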

Research from Ventana discovered that 70% of organisations do not have one source of truth and rely mostly on spreadsheets for their data maintenance. When people are so used to using the “master data” from their own departments, the real master data in the systems and its maintenance process get even less attention from the business.

Demand Gen revealed that nearly 85% of businesses say they are operating databases with between 10% and 40% bad records. This creates a vicious cycle: the business refuses to use the master data from the systems due to lack of maintenance, and the master data in the systems doesn’t get maintained because no one uses it.  Eventually, the knowledge about master data – the definitions, the use cases and the maintenance processes – slowly disappears or becomes irrelevant.

Costs of bad master data management

Substandard master data can cause many issues, from daily operations to strategic decisions. According to Ovum research, the staggering cost of poor data is “at least 30% in revenue”.  IBM estimates that bad quality data costs the US economy US$3.1 trillion per year. The three main issues with corrupted and unstructured master data are (1) growing inefficiency in day-to-day work, data analysis and IT implementation projects, (2) a quickly fading corporate memory, increasing the audience for “urban myths” rather than facts, and (3) basic, avoidable legal exposure.

With regard to the inefficiencies, one can only feel for modern-day managers and executives who are expected to make data-driven decisions.  On the surface, simple questions thrown their way by senior management lead to midnight-oil-burning spreadsheet marathons, with results as ephemeral as the coffee that keeps them up.

According to IBM, knowledge workers on average waste up to 50% of their time hunting for data, identifying and correcting errors, and seeking confirmatory sources for data they do not trust. For data analysts this number is 60%. With inconsistent master data across departments, there is no common language in the business and it is hard to achieve any cross-functional synergy.

Process automation or optimisation is almost impossible when the “master data” of each department doesn’t align. At one of our clients, a single, simple discount set for one key account had resulted in sales operations inputting over 5,000 individual pricing conditions and maintaining them over many years.

Ironically, the poor-quality data people try to make sense of often originates from their own department. The most shocking experience for us as a business remains how a lack of comprehensive, well-structured master data leads to a form of corporate amnesia, where anything beyond the current financial year fades into a fog of stories rather than facts.  In one instance, one of our clients could not put together a list of promotions run, and volumes sold by customer, on their most important product for anything further back than 3 months.  And this for a product that had been in existence since the mid-19th century and which, by their own admission, they would continue promoting on a bimonthly basis.  In a world where data science tries to leverage historical patterns to extract small value pockets amid an ever more competitive marketplace, this is starting the fight with both hands tied behind your back.

Finally, a major issue that allows bad data management to perpetuate is the use of a forest of spreadsheets to maintain master data, often in many different versions.  The use of spreadsheets is detrimental in many respects because they lack version control and governance.  Usually, for any change in master data, the owner of the spreadsheet will simply overwrite the existing data, leaving no auditable trail.  Given the limited or almost nonexistent access control features in spreadsheets, it is also hard to protect the sensitive information in them. In this time of GDPR, with a slowly rising awareness of privacy and data protection, this can easily lead to costly and avoidable legal issues.
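The contrast with a governed system comes down to one pattern: record every change rather than overwrite in place. The sketch below is a minimal, assumed illustration of an append-only change log; the field names and in-memory storage are inventions for the example, not a real MDM product.

```python
# A minimal sketch of an append-only change log for master data edits,
# in contrast to silently overwriting cells in a spreadsheet.
# Field names and in-memory storage are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MasterRecord:
    data: dict
    history: list = field(default_factory=list)  # auditable trail of changes

    def update(self, field_name, new_value, changed_by):
        """Record who changed what, and when, before applying the change."""
        self.history.append({
            "field": field_name,
            "old": self.data.get(field_name),
            "new": new_value,
            "by": changed_by,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.data[field_name] = new_value

record = MasterRecord(data={"discount": "5%"})
record.update("discount", "7.5%", changed_by="j.smith")
print(record.data["discount"])   # current value
print(record.history[0]["old"])  # previous value remains recoverable
```

With a trail like this, an auditor can answer “who set this discount, and what was it before?” – exactly the question a spreadsheet cannot answer.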

How to set up master data management

It is best to be honest upfront – setting up a proper master data management process is neither fun nor easy.  It usually involves a substantial change in the way of working and some tedious one-off data cleansing.  Hence, it requires a strong belief in its importance and heavy buy-in from senior executives. Finally, it is important to realise that master data management is not a one-off investment but a continuous process that will need to be maintained, funded and cared for.  A standard master data process is outlined below.

When starting or expanding your master data management process, the best place to start is by consulting the business users of the master data. Arrange in-market visits, understand the business processes that need to be supported, and identify what master data is required to support those processes.

Next comes the even more difficult process of standardising and aligning the master data needs across the different stakeholders that will make use of them. In this step, you weigh the pros and cons of standardisation vs locality and mix global demands with local realities.

Enabling the business to use the master data consists of two activities. First, implement or improve a master data management solution. Once implemented, it is essential to frequently review the functionality it offers and ensure it meets the needs of the business. Second, the data itself needs to be cleaned, structured and merged (where another system is made redundant). One key component, often overlooked, is making sure the data is available. Remember, not everyone likes to browse through SAP master data windows or knows how to extract data using BW. A key objective in the design phase should be making the data as readily accessible as security constraints allow.
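The one-off cleansing step typically boils down to normalising free-text fields and deduplicating records before loading them into the new solution. The sketch below is a hedged illustration; the normalisation rules and sample records are assumptions for the example – real rules come out of the business consultation described above.

```python
# A sketch of the one-off cleansing step: normalising free-text fields
# and merging duplicate records before loading into the master data
# solution. The rules and sample data are illustrative assumptions.

import re

def normalise(name):
    """Canonicalise a customer name: trim, collapse spaces, unify suffixes."""
    name = re.sub(r"\s+", " ", name.strip())
    name = re.sub(r"\b(Ltd\.?|Limited)$", "Ltd", name, flags=re.IGNORECASE)
    return name.title()

def deduplicate(records):
    """Merge records whose normalised names match, keeping first-seen values."""
    merged = {}
    for rec in records:
        key = normalise(rec["name"])
        entry = merged.setdefault(key, {"name": key})
        for k, v in rec.items():
            if k != "name":
                entry.setdefault(k, v)  # keep the first-seen value on conflict
    return list(merged.values())

raw = [
    {"name": "acme  limited", "country": "NL"},
    {"name": "Acme Ltd.", "vat": "NL001"},
]
print(deduplicate(raw))
```

In practice the “keep first-seen value” rule is itself a business decision – which source wins on conflict is exactly the kind of question the standardisation step must settle.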

The process above can be used for a complete overhaul of your master data system, which is something many companies do every few years. But it is also essential to loop through the process more frequently (e.g. quarterly), making minor adjustments to the structure and/or the data inside. Even though this will be costly, think of the potential return: currently up to 50% of your G&A is likely being wasted.

Great data management – an investment that is non-negotiable

Data matters, and great commercial data matters a lot.  Often it doesn’t get much attention because it just isn’t exciting, and it belongs in the same area as HR systems, supply chain processes, etc.  It gets sidelined to the domain where stuff just gets done, and is overlooked because it doesn’t generate an immediate return.

However, as mentioned above, the failure to pay enough attention to data management comes at a cost, and this cost will only increase over time.  It is not only an immediate cost in terms of spending more on resources, discounts, etc., but also a cost in terms of missed opportunities for revenue growth.  Maintaining master data is like exercise: it is painful in the beginning, especially if one hasn’t done it for a while, but once the habit is established, doing it regularly becomes normal routine.  And once a company realises how crystal clear its view of its business and competitive landscape has become, there will be no way back.

