The Road to Data Maturity – Many data leaders know that the utopia of having all their data perfect and ready to use can feel like the next high peak on a never-ending hike: always just that little bit out of reach. Luca Rovesti, Head of Client Services at Datactics, hears this all the time on calls and at data management events, and has taken some time to tie a few common threads together that might just make that hike more bearable, and the peak a little closer.
Without further ado: Luca Rovesti’s Healthy Data Management series, episode 1:
Key topics: Data Quality Journey; Exception Management; AI & DQ
2-3 minute read
Lots of the people we’re speaking with have spent the last eighteen months working out how to reliably measure their data, usually against a backdrop of one or more pressures coming from compliance, risk, analytics teams or board-level disquiet about the general state of their data.
Those who have been progressing well on the journey are usually people with a plan and the buy-in to get it done. They've now moved their data maturity further into the light and can see what's right and what's wrong. They've managed to get people to stand up and be counted as data owners, enabling business teams and users to say that this or that data belongs to them, and are now looking at how they can push the broken data back to be fixed by data users in a fully traceable, auditable way.
The big push we’re getting from our clients is to help them federate the effort to resolve exceptions. Lots of data quality improvement programmes, whether undertaken on their own or as part of a broader data governance plan, are throwing up a high number of data errors. The best way to make the exercise worthwhile is to create an environment for the resolution of broken data by end users – those who know what good looks like, what the data’s used for and how it should be handled.
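To make that idea concrete, here is a minimal sketch (not Datactics' actual product, and with all names and values invented) of what a traceable exception record might look like: each broken value is routed to a named business owner, and every fix is logged with who made it and when, so the remediation is auditable end to end.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataException:
    """A single data quality break, routed to a business owner for resolution."""
    record_id: str
    rule: str                 # the data quality rule that failed
    value: str                # the offending value
    owner: str                # business owner responsible for the fix
    audit: list = field(default_factory=list)

    def resolve(self, user: str, corrected: str) -> None:
        # Log who changed what and when, so the fix is fully traceable
        self.audit.append({
            "user": user,
            "old": self.value,
            "new": corrected,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.value = corrected

# Hypothetical example: a badly formatted country code, owned by a business team
exc = DataException("cust-001", "valid_country_code", "U.K", "customer-data-team")
exc.resolve("jane.doe", "GB")
```

The point of the sketch is the audit trail: because every resolution is recorded rather than emailed around in spreadsheets, the decisions themselves become data you can report on and learn from.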
“The best way to make the exercise worthwhile is to create an environment for the resolution of broken data by end users – those who know what good looks like, what the data’s used for and how it should be handled.”
As a result, we’ve been able to accelerate development of features for our clients around federated exception management by integrating our Data Quality Clinic with dashboarding layers such as Power BI, Qlik and Tableau. We’re starting to show how firms can use the decisions being made on data remediation as a vast set of training data for machine learning models, which can power predictions on how to fix data and cut the amount of time manual reviews take.
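To illustrate the principle (this is a toy stand-in, not the machine learning Datactics actually uses, and the data is invented), even the simplest possible "model" – a lookup of the most frequent historical fix for each rule-and-value pair – shows how past remediation decisions can power suggested fixes for reviewers:

```python
from collections import Counter, defaultdict

# Past remediation decisions: (failed_rule, bad_value, fix_applied).
# These example rows are purely illustrative.
decisions = [
    ("valid_country_code", "U.K", "GB"),
    ("valid_country_code", "U.K", "GB"),
    ("valid_country_code", "UK.", "GB"),
    ("valid_country_code", "U.K", "UK"),
]

# "Train": count how often each fix was chosen for each (rule, value) pair
history = defaultdict(Counter)
for rule, bad, fix in decisions:
    history[(rule, bad)][fix] += 1

def suggest_fix(rule, bad_value):
    """Suggest the historically most common fix, or None if unseen."""
    fixes = history.get((rule, bad_value))
    return fixes.most_common(1)[0][0] if fixes else None

print(suggest_fix("valid_country_code", "U.K"))  # → GB
```

In practice a real model would learn from far richer features than the raw value, but the mechanism is the same: every decision a reviewer makes teaches the system, so the next review starts with a prediction instead of a blank page.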
It’s a million light years away from compiling lists of data breaks into Excel files and emailing them around department heads, and it’s understandably in high demand. That said, a small number of firms we speak to are still coming to us because the demand is to do analytics and make their data work for their money, but they simply can’t get senior buy-in for programmes to improve the data quality. I feel for them, because a firm that can’t build a business case for data quality improvement is losing the opportunity to make optimal use of its data assets and is adopting an approach prone to inefficiency and non-compliance risk.
“…a firm that can’t build a business case for data quality improvement is losing the opportunity to make optimal use of its data assets and is adopting an approach prone to inefficiency and non-compliance risk.”
Alongside these requests are enquiries about how data teams can get from their current position – where they can’t access or use the heavy-duty programmer-focused tools in IT and so have built rules themselves in SQL – to the place where they have a framework for data quality improvement and an automated process to implement it. I’ll go into that in more detail in the next blog.