Good Data Culture – Facing Down Practical Challenges of a Data Management Programme
“Nobody said it was easy” sang Chris Martin, in Coldplay’s love song from a scientist to the girl he was neglecting. The same could be said of data scientists embarking on a data management programme!
In his previous blog on Good Data Culture, our Head of Client Services, Luca Rovesti, discussed taking first steps on the road to data maturity. This time he’s taking a look at some of the biggest challenges that arise once those first steps have been made – and how to overcome them.
One benefit of being part of a fast-growing company is the sheer volume and type of projects that we get to be involved in, and the wide range of experiences – successful and less so – that we can witness in a short amount of time.
Without doubt, the most important challenge that rears its head on the data management journey is that of complexity: there are so many systems, processes and requirements of the data that it can be hard to see the wood for the trees.
Those who get out of the woods fastest are the ones who recognise that there is no magic shortcut around the work that must be done.
A good example would be the creation of data quality rule dictionaries.
Firstly, there is no way you will know what you need to measure until you have worked through what you already have.
Although technology can give us a helpful hand in the heavy lifting, discovery, and categorisation of content (data catalogues), the definition of domain-specific rules always requires a degree of human expertise and understanding of the exception management framework.
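To make the idea of a rule dictionary concrete, here is a minimal sketch of what one might look like in Python. This is purely illustrative: the names (`RULES`, `check_record`) and the rules themselves are hypothetical examples, not part of any Datactics product, and a real dictionary would carry far more domain-specific rules defined with subject matter experts.

```python
import re

# Hypothetical rule dictionary: each entry pairs a human-readable
# description (for business data owners) with a validation function
# (for the technical implementation), so both groups review the same
# artefact.
RULES = {
    "isin": {
        "description": "ISIN must be 12 characters: 2 letters, "
                       "9 alphanumerics, 1 check digit",
        "check": lambda v: bool(
            re.fullmatch(r"[A-Z]{2}[A-Z0-9]{9}[0-9]", v or "")
        ),
    },
    "notional": {
        "description": "Notional amount must be a positive number",
        "check": lambda v: isinstance(v, (int, float)) and v > 0,
    },
}

def check_record(record):
    """Return the names of the rules a record fails, feeding the
    exception management framework."""
    return [field for field, rule in RULES.items()
            if not rule["check"](record.get(field))]
```

A record such as `{"isin": "GB0002634946", "notional": -5}` would then fail only the `notional` rule, and the resulting exception list is what a remediation process would pick up.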
Subsequently, getting business data owners and technical people to contribute to a shared plan that takes into account how the data is used and where the technology will fit in is a crucial step in detailing the tasks, problems and activities that will deliver the programme.
The clients we speak to are invariably experts in their subject areas, but they often haven’t seen what “best of breed” software can deliver; they can sometimes find it hard to express what they want to achieve beyond a light-touch digitalisation of a human or semi-automated process.
The most important thing that we’ve learned along the way is that the best chance of success in delivering a data management programme involves using a technology framework that is both proven in its resilience and flexible in how it can fit into a complex deployment.
From the early days of “RegMetrics” – a version of our data quality software that was configured for regulatory rules and pushing breaks into a regulatory reporting platform – we could see how a repeatable, modularised framework provided huge advantages in speed of deployment and positive outcomes.
Drawing on our clients’ experiences and their demands of technology, we’ve developed a deployment framework that enables rapid delivery of data quality measurement and remediation processes, providing results to senior management that answer the most significant question in data quality management: what is the return on investing in my data?
This framework has equipped us to provide expertise on the technology that complements our clients’ business knowledge:
- Business user-focused low-code tooling that connects data subject matter experts with powerful capabilities to build rules and deploy projects
- Customisable automation that integrates with any data source, internal or external
- A remediation clinic, so that those who know the data can fix the data efficiently
- “Chief Data Officer” dashboards provided by integration with off-the-shelf visualisation tools such as Qlik, Tableau, and Power BI.
Being so close to our clients also means that they have a great deal of exposure to, and involvement in, our development journey.
We have them ‘at the table’ when it comes to feature enhancements, partnering with them rather than selling and moving on, and involving them in our regular Guest Summit events to foster a sense of the wider Datactics community.
That feels like a good point to leave this blog, as next time I’ll go into some of those developments, and the integration of our “self-service data quality” platform with our data discovery and matching capabilities.
Blog Categories: Good Data Culture.