With Statista.com reporting that 59 zettabytes of data have been captured, created, copied and consumed worldwide since 2010, it’s easy to see the problems that arise when even a fraction of it is incorrect.
The chaos that can – and does – arise can seem totally insurmountable, creating a problem that’s as unappealing to solve as it is difficult. It can also make it hard for data leaders to inspire their people about the art of the possible: finding out what’s wrong, defining what good looks like, and making a difference.
This post, then, is designed to help energise, excite and encourage your data people in three easy-to-implement ways that will help deliver your next data management programme and truly change your data culture.
Firstly: You don’t need to rip & replace your expensive tech stack
The major enterprise data management firms are already in situ at many of the world’s biggest companies, yet people are still complaining about the quality of the data they’re working with. Faced with this problem, C-suite executives can easily conclude that buying more software to replace what they have is far too costly and risky to attempt.
Selecting vendors who can work alongside the Informaticas and IBMs of this world is clearly a pragmatic opportunity to independently measure and improve the quality of data directly from the business teams. It puts the platform in your hands, so that you and your teams can play an active part in the data flow of the organisation without disrupting the stable enterprise technology stack.
(And what’s more, we’ve done this many times before).
Secondly, boil a kettle, not the ocean!
“Boiling the ocean” is a really evocative phrase when it comes to prioritisation and the approach to take – especially with something as central and fundamental as data quality. Everyone needs high-quality data, even those who are guilty of kidding themselves that they don’t! Heads of Innovation and Change are discovering that they won’t be able to innovate or change anything unless the data is right.
Picking something that will make a real difference for someone with access to the big purchasing levers is clearly a great strategy. If a general desire to improve data quality feels like “boiling the ocean”, then how about getting customer data right ahead of a new product launch instead? Fixed dimensions of success, a six-week delivery timeframe and a lower-than-you-think budget for a “time & materials” type licence can go a long way to getting that senior stakeholder buy-in for the bigger dreams you have in mind.
Lastly, now witness the power of this fully self-service system
The last thing your team wants to be doing is manually cleansing, standardising and matching data. There’s no quicker way to take the wind out of a data analyst’s sails than giving them manual “dirty data” work. And with up to 80% of their time currently spent doing exactly that, it’s clear this problem isn’t simply going to go away in the morning.

Automated routines and processes that do this work for the analyst are like starting the day with a double espresso with extra espresso: they’ll be flying at the data in a fit of pure delight. Solving one critical problem in a way that’s designed for business users to self-serve makes perfect sense. If the solution is also scalable, repeatable and easily accessible, using automations and pre-built logic to save time and effort, it will liberate your data analysts to attack data-driven problems all over the enterprise and truly transform the culture of your organisation.
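The post doesn’t specify what those automated routines look like, but as a minimal, hypothetical sketch, here is the kind of cleansing, standardising and matching work that an automated pipeline takes off an analyst’s plate (the record fields and rules are illustrative, not any particular product’s logic):

```python
import re


def standardise(record):
    """Normalise one customer record: collapse whitespace and title-case
    names, lower-case emails, and strip non-digits from phone numbers."""
    return {
        "name": " ".join(record["name"].split()).title(),
        "email": record["email"].strip().lower(),
        "phone": re.sub(r"\D", "", record["phone"]),
    }


def deduplicate(records):
    """Match records on their standardised email address, keeping the
    first occurrence of each — a deliberately simple matching rule."""
    seen, unique = set(), []
    for rec in map(standardise, records):
        if rec["email"] not in seen:
            seen.add(rec["email"])
            unique.append(rec)
    return unique


# Two messy entries for the same customer: different casing, stray
# whitespace, and differently formatted phone numbers.
raw = [
    {"name": "  jane  DOE ", "email": "Jane.Doe@Example.com ",
     "phone": "+44 (0)20 7946 0000"},
    {"name": "Jane Doe", "email": "jane.doe@example.com",
     "phone": "020 7946 0000"},
]

clean = deduplicate(raw)  # one record survives, matched on email
```

Real self-service tooling wraps this sort of logic in pre-built, configurable rules so business users never touch the code — but the cleanse–standardise–match shape of the work is the same.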
How could rethinking data quality make a difference to your organisation? Hit me up on LinkedIn and let’s continue the conversation!
To learn more about how Self-Service Data Quality is the best approach to developing a next-gen data management strategy, catch our webinar from 2020 with key input from our CTO, Alex Brown (or read a blog post version here).