In this blog with Datactics’ Head of Sales, Kieran Seaward, we dive into market insights and the sometimes-thorny issue of where to start. It’s a problem data managers and users will fully understand, and Kieran’s approach to this is influenced by thousands of hours of conversation with people at all stages of the process, all unified in the desire to get the data right.
Following hot on the heels of banks, we are seeing a lot of buy-side and insurance firms on the road to data maturity and taking a more strategic approach to data quality and data governance, which is great.
From what I hear, the “data quality or governance first?” conundrum is commonly debated by most firms, regardless of what stage they are at in a data programme rollout.
A decision is typically made to either prioritise ‘top-down’ data governance activities such as creating a data dictionary and business glossary, or ‘bottom-up’ data quality activities such as measurement and remediation of data as it exists today.
In my opinion, these activities are not in conflict but complementary and can be tackled in any order, so long as the ultimate goal is a fully unified approach.
I may be biased, but the insights derived from data quality activities can help form the basis of the definitions and terms typically stored in governance systems:
Figure 1 – Data Quality first
However, the inverse is equally true: data quality systems benefit from having critical data elements and metadata definitions already in place to help shape the measurement rules that need to be applied:
Figure 2 – Data Governance first
The ideal complementary state is that of Data Governance + Data Quality working in perfect unison:
- A Data Governance system that contains all identified critical data elements, together with their definitions, which determine the Data Quality validation rules applied to each element;
- A Data Quality platform that validates data elements and connects to the governance catalogue to understand who the responsible data owner or data steward is, in order to push data to them for review and/or remediation.
The quality platform can then push data quality metrics back into the governance front-end, which acts as the central hub and visualisation layer, either rendering the data itself or through connectivity to third-party tools such as Microsoft Power BI, Tableau, or Qlik.
Figure 3 – The ideal, balanced state
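To make that unison concrete, here is a minimal, hypothetical sketch of the loop described above: a validation rule derived from a glossary definition runs against live records, and the resulting metrics carry the responsible steward so failures can be routed for remediation. All names and structures here are illustrative assumptions, not Datactics' actual platform or API.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical, simplified structures -- illustrative only.

@dataclass
class CriticalDataElement:
    name: str
    definition: str               # business-glossary definition
    steward: str                  # responsible data steward from the catalogue
    rule: Callable[[str], bool]   # validation rule shaped by the definition

# A toy "governance catalogue": one critical data element with its owner.
catalogue = [
    CriticalDataElement(
        name="lei",
        definition="20-character ISO 17442 Legal Entity Identifier",
        steward="alice@example.com",
        rule=lambda v: len(v) == 20 and v.isalnum(),
    ),
]

def run_quality_checks(records: list[dict]) -> dict:
    """Validate each critical data element and report pass rates per steward."""
    metrics = {}
    for cde in catalogue:
        values = [r.get(cde.name, "") for r in records]
        passed = sum(1 for v in values if cde.rule(v))
        metrics[cde.name] = {
            "steward": cde.steward,            # who failures are routed to
            "pass_rate": passed / len(values), # metric pushed to the governance hub
        }
    return metrics

records = [{"lei": "5493001KJTIIGC8Y1R12"}, {"lei": "BAD-LEI"}]
print(run_quality_checks(records))
```

In a real deployment the catalogue lookup and the metric push would be API calls between the two systems; the point is simply that the rule, the definition, and the owner all travel together.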
In the real world, this decision can't be made in isolation from what the business is doing right now with the information they rely on:
- Regulatory reporting teams have to build, update and reconfigure reports in increasingly tighter timeframes
- Data analytics teams are relying on smarter models for prediction and intelligence
- Risk committees are seeking access to data for client, investor, and board reporting.
If the quality of this information can’t be guaranteed, or breaks can’t be easily identified and fixed, all of these teams will keep coming back to IT asking for custom rules, sucking up much-needed programming resources.
Then, when an under-pressure IT team can't deliver in time, or the requests conflict with one another, those teams resort to building in SQL or trying to do it via everyone's favourite DIY tool, Excel.
Wherever firms are on their data maturity curve, and however far into a data governance programme they may be, data quality is of paramount importance and can easily run first, last, or in parallel. This is something we are used to helping clients and prospects with at various points along that journey, whether it's using our self-service data quality & matching platform to drive better data into a regulatory reporting requirement, or facilitating a broad vision to equip an internal "data quality as-a-service" function.
My colleague Luca Rovesti, who heads up our Client Services team, goes into this in more depth in Good Data Culture.
I’ll be back soon to talk about probably the number one question thrown in at the end of every demo of our software:
What are you doing about AI?