Part 1: What is driving the need for data quality metrics in retail banking?
A little under two weeks ago, the London Marathon presented a titanic challenge that has long dominated the human psyche: the will to finish, to get over the line, to win.
The official app contained a feature to help friends, family and supporters locate their chosen runner by name or vest number, and so track their progress at every stage of their torturous journey of 26.2 miles.
It’s said that many runners hit “the wall” at around 22 miles, so very often loved ones wait at that marker to help spur them on to the finish a little over four miles away. But imagine for a moment that there are no distance markers, no Fitbits, and no stopwatches. Over 38,000 runners crowd the starting line, with no way of knowing critical data dimensions such as where they are, the pace at which they’re running, or how long it will take to finish.
It seems absurd to think that anyone would choose that way of running such a race, yet this is the decision being made by any retail bank not currently using metrics to measure its customer and financial data quality. Understanding the condition, accuracy, quality and maturity of datasets across the vast array of products and service channels is impossible if these elements aren’t actually being measured.
At this point, retail banks can quite rightly point to four major issues confronting them, issues very often seen as barriers to that holy grail of data management: a timely, accurate and complete single view of a customer.
- Size and scale
For starters, the amount of data to be measured is usually vast. One data record for a current account customer might include first name, middle name, last name, current address (five lines), previous address (another five lines), phone number, mobile number, email, employer and their address (five more lines), National Insurance number, date of birth, dependants, spouse… and that’s before their transactional data, banking IDs and so on are included!
- IT infrastructure
Further complicating matters, many retail banks operate systems which are decades old and deeply entrenched[1]. New products and services demand features old systems can’t deliver; mergers and acquisitions bring in whole new datasets; and separate, siloed systems rarely categorise data in exactly the same way. Yet the IT department is frequently held responsible for owning data quality, without specialist knowledge of what the data is, what it represents, or what purposes the bank has for it[2]. It has no authority or budget to change or improve the data, so even when it does report on quality, the report can usually only confirm that the data is deteriorating.
- Operational processes
A single view of the customer is fine in theory, but continually compromised in reality. Even if a customer’s data is accurate today and entered into the best system money can buy, without regular measurement and reference there’s no way of telling whether it’s better or worse than the rest of the data in the warehouse.
- Regulation and the marketplace
On top of that, ‘big data’ remains big news and subject to never-ending scrutiny. Since the financial crisis, measuring data quality has swung from a nice-to-do to a must-do, with responsibility increasingly assigned to compliance and risk functions. In retail banking, teams managing risk and compliance have grown by thousands of percent[3] in an effort to meet regulatory expectations.
This is in stark contrast with the 1,700 bank branches closed by the five largest banks in the UK over the past five years[4], whilst challenger banks pursue the opposite strategy[5]. Put simply, if getting on top of big data is seen as too big and broad a problem, investment in data quality solutions quickly becomes a major, centralised IT infrastructure purchase with a correspondingly hefty price tag attached – and this multi-million, multi-year outlay makes it much harder to justify getting it done at all.
This is where targeted, tactical data quality metrics can provide genuine and demonstrable insight. Quick wins that utilise industry-standard definitions of data quality (such as the EDM Council’s Data Management Capability Assessment Model[6], DCAM™) work alongside enterprise data management routines to solve specific data problems, meet changing regulations and free up resources to help the bank develop market-leading customer propositions.
Instead of being hamstrung by an unwieldy data warehouse, banks can implement data quality metrics that visualise critical quality dimensions such as the conformity, completeness and integrity of datasets. Doing so not only enhances compliance with regulatory obligations, but also yields an accurate picture of the current landscape and its progression over time.
It moves the needle from simply understanding whether a data element is right or wrong, to intelligent analysis of how right or wrong it is, and whether its quality is improving or declining.
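To make dimensions like completeness and conformity concrete, here is a minimal sketch in Python. The customer records, field names and validation patterns are all hypothetical, invented purely for illustration; a production implementation would draw its rules from an agreed data dictionary rather than hard-coded patterns.

```python
import re

# Hypothetical customer records, modelled loosely on the fields
# described earlier (names, contact details, postcode).
customers = [
    {"first_name": "Ada", "last_name": "Lovelace",
     "email": "ada@example.com", "postcode": "SW1A 1AA"},
    {"first_name": "Alan", "last_name": "",
     "email": "not-an-email", "postcode": "CB2 1TN"},
]

# Illustrative conformity rules: a populated value must match its pattern.
RULES = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "postcode": re.compile(r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$"),
}

def completeness(records, field):
    """Share of records where the field is present and non-empty."""
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records)

def conformity(records, field):
    """Share of populated values that match the field's pattern."""
    values = [r[field] for r in records if r.get(field)]
    matches = sum(1 for v in values if RULES[field].match(v))
    return matches / len(values) if values else 0.0

print(f"last_name completeness: {completeness(customers, 'last_name'):.0%}")
print(f"email conformity:       {conformity(customers, 'email'):.0%}")
print(f"postcode conformity:    {conformity(customers, 'postcode'):.0%}")
```

Scores like these, tracked run over run, are what turn a binary right/wrong judgement into a trend that shows whether a dataset is getting better or worse.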
Thanks to ongoing data measurement, our marathon runners know where they are, how far they have to go, and how long it’s taken them. They know how they stack up against their competitors, updated to the second through constant data analysis and review. Being just as diligent with retail banking data, and demanding detailed metrics on the condition of vital datasets, may seem daunting, but it has to be the cornerstone of any rigorous approach to data quality management.
This blog is Part 1 of a series looking at how data quality in retail banking can be reviewed, monitored and remediated. Next week will cover how banks can utilise their SME knowledge to adopt a self-service approach to data quality improvement.
Matt Flenley is marketing manager at Datactics, a bespoke provider of data quality solutions throughout the financial services industry and further afield. Datactics prides itself on the agility and scalability of its in-memory data quality products including RegMetrics™ – an award-winning data quality analysis tool. For more on RegMetrics please see our White Paper.