What is the cost of bad data?

With an increased demand for data, and with technology such as 5G making more information available than ever before, the risks that bad data presents are becoming both more common and more costly.

Issues such as data duplication, missing data and, ultimately, the misuse of data cause a range of technology-driven problems. On the human side, poor data awareness leads to the wrong data being prioritised, and to data being poorly maintained or fundamentally misunderstood.

Alongside these risks, the market’s drive for more personalised, on-demand and enhanced statistics and analytics has not always been matched by the same passion for data quality.

At Datactics we believe every piece of business data gained or held, whether it is received directly from a customer or sourced internally or externally, is extremely valuable – and should be treated as such.

It should be treated as an asset, not a liability: clean data has the power to be hugely beneficial to a business, just as poor-quality data, if left unchecked, will hold that same business back.

In this post we delve into the cost of bad data in three specific areas: finance and risk, productivity and customer satisfaction, and, probably the most important of all in today’s tough economic climate, reputational risk.

Financial Implications

We’re not the first to say that bad data is extremely expensive when it comes to breaching regulations such as GDPR, MiFID and BCBS 239, to name just a few from the slew of acronyms across the globe. As recently as 2017, notable blogger Chris Skinner reported that financial regulations were changing as rapidly as every 12 minutes! Maintaining data quality for regulatory compliance, therefore, is not something that’s simply going to stop being important.

Additionally, poor data quality can lead to bad analysis and, even more seriously, to poor decisions. These decisions can have a negative impact on how a business performs which, in turn, can lead to financial losses. According to research conducted by Gartner, ‘the average financial impact of poor data quality on organisations is $9.7 million per year’. This statistic highlights the burden that poor data quality can place on your organisation if it is not well maintained. It’s not oversimplifying to say that failing to maintain data quality can undermine your business, undoing all the hard work it took to get it to that point.

On this point, Harvard Business Review reported the cost of poor data quality as over $3tn annually to the US economy, as of 2016. For a problem that can be solved, this seems an extraordinarily high price to pay.

Productivity

“Good quality data empowers business insights and starts new business models in every industry.” – Gartner, 2018

The reverse is also true: bad data quality hampers business insights and thwarts business models across every industry. Put simply, if your people can’t access the data they need, then they’re not going to get a lot done, and anything they do manage to achieve will be subject to the risk of poor data quality. Did your bank send you two identical letters? Is there one account on which, for some reason, they don’t have your most recent address? These are simple customer data errors that are easy to fix with a proper Single Customer View – something that’s within reach of every firm with the right mindset.

A sales team, for example, might become frustrated at out-of-date email addresses, old phone number lists and outdated customer details. If data is not cleaned, matched and deduplicated, it can lead to unsuccessful, misinformed campaigns. A marketer could spend a significant amount of time curating pieces that are never read by the right people, at the right time, because the underlying data is inaccurate, inappropriate or irrelevant. Campaigns can be duplicated, overrun on spend, and require corrections and fixes that steer budgets into operational losses. Errors in data can also steer us back to the regulatory risk of data breaches – and the associated cost.

It stands to reason, therefore, that cleansing data will help communication specialists create accurate marketing campaigns that target the right audience, at the right time and in the right way.
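To make matching and deduplication a little more concrete, the sketch below compares customer records using simple fuzzy matching. It is a minimal illustration only, using hypothetical field names and thresholds, and is not how Datactics’ own matching works.

```python
# A minimal, illustrative sketch of customer-record deduplication using fuzzy
# matching. Field names and the threshold are hypothetical examples, not the
# behaviour of any particular data quality product.
from difflib import SequenceMatcher

customers = [
    {"name": "Jane Smith",  "email": "jane.smith@example.com", "postcode": "BT1 1AA"},
    {"name": "Jane  Smyth", "email": "jane.smith@example.com", "postcode": "BT1 1AA"},
    {"name": "John Murphy", "email": "j.murphy@example.com",   "postcode": "BT2 2BB"},
]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score on whitespace- and case-normalised strings."""
    norm_a = " ".join(a.lower().split())
    norm_b = " ".join(b.lower().split())
    return SequenceMatcher(None, norm_a, norm_b).ratio()

def is_duplicate(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Treat records as duplicates if emails match exactly, or names are very
    similar within the same postcode. Real matching rules are far richer."""
    if rec_a["email"].lower() == rec_b["email"].lower():
        return True
    return (rec_a["postcode"] == rec_b["postcode"]
            and similarity(rec_a["name"], rec_b["name"]) >= threshold)

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first record of each duplicate group (a naive 'single customer view')."""
    unique: list[dict] = []
    for rec in records:
        if not any(is_duplicate(rec, kept) for kept in unique):
            unique.append(rec)
    return unique

print(deduplicate(customers))  # the two 'Jane' records collapse into one
```

In practice, matching spans many more fields and uses far more sophisticated similarity measures, but even this toy example collapses the two ‘Jane’ records into a single customer view.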

Reputational Damage

Reputational damage is arguably far worse than receiving a fine, because it is very hard, if not impossible, to recover from. Your reputation is the biggest way to instil trust and customer loyalty, and if people don’t trust you, they won’t buy from you. It follows from the first two points that if you’ve been fined for a data quality-driven problem, people will start to view you with mistrust. If data is incorrect and unclean, then time, money and reputations can be severely damaged and, in many cases, lost forever.

Reputation can be damaged long-term if a customer chooses to publicly express disappointment, and sharing negative experiences reduces the chances that a client will return to the service provider. Recently, online vendors selling on Amazon have claimed that a flood of fake one-star reviews has cost them genuine business, because people tend not to trust products rated lower than 4.5 stars out of 5. This is both a poor data issue – the apparent inability to detect fake reviews – and an illustration of how reputational damage costs real revenue.

It’s not just external reputational risk, either. Poor data can cause internal issues to arise, as team members within an organisation may lose trust in their employer if there is scepticism over the accuracy and validity of the underlying data. Glassdoor, the employee review site, gives people an opportunity to report honestly on their experiences as a current or former employee. If employees’ experiences are bad, it doesn’t take long for them to be reported in a public forum, which can permanently damage the employer’s reputation.

Conclusion

Data quality can no longer be seen as a niche specialism for people who “do data.” It’s in everything, right from financial reporting through to employee satisfaction and reputational risk. It’s the main reason we’ve majored on self-service solutions that seek to empower all people at an organisation to take responsibility for their data – measuring, improving, fixing and reporting it.

If you would like to see how self-service can transform your organisation’s approach to data, please get in touch.

Authored by Matt Flenley and Jamie Gordon

Get ADQ 1.4 today!

With Snowflake connectivity, SQL rule wizard and the ability to bulk assign data quality breaks, ADQ 1.4 makes end-to-end data quality management even easier.