Best Practices For Creating a Data Quality Framework

Chief Technology Officer Alex Brown featured as a panellist in Data Management Insight’s webinar discussing best practices for creating a data quality framework within your organisation.


What is the problem?

A-Team Insight outlines that ‘bad data affects time, cost, customer service, cripples decision making and reduces firms’ ability to manage data and comply with regulations’.

With so much at stake, how can financial services organisations improve the accuracy, completeness and timeliness of their data in order to improve business processes?

What approaches and technologies are available to ensure data quality meets regulatory requirements as well as firms’ own data quality objectives?

This webinar discusses how to establish a data quality framework and how to develop metrics to measure data quality. It also explores experiences of rolling out data quality enterprise-wide and resolving data quality issues, examines how to fix data quality problems in real time, and looks at how dashboards and data quality remediation tools can help. Lastly, it explores new approaches to improving data quality using AI, Machine Learning, NLP and text analytics tools and techniques.

The webinar focused on:

  • Limitations associated with an ad-hoc approach
  • Where to start, the lessons learned and how to roll out a comprehensive data quality solution
  • How to establish a business focus on data quality and develop effective data quality metrics aligned with data quality dimensions (see the sketch after this list)
  • Using new and emerging technologies to improve data quality and automate data quality processes
  • Best practices for creating a Data Quality Framework 
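
To make that last point a little more concrete, here is a minimal, purely illustrative sketch of how per-column metrics might be aligned with common data quality dimensions such as completeness, uniqueness and validity. It is not taken from the webinar; the dataset, column names and rules are hypothetical, and pandas is assumed simply as a convenient way to express the checks.

```python
# Illustrative only: per-column metrics aligned to common data quality
# dimensions (completeness, uniqueness, validity). Dataset, column names
# and rules are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "customer_id": ["C001", "C002", "C002", None],
    "country_code": ["GB", "IE", "XX", "GB"],
    "email": ["a@example.com", "not-an-email", None, "b@example.com"],
})

VALID_COUNTRIES = {"GB", "IE", "US", "DE", "FR"}

metrics = {
    # Completeness: share of non-null values in the column.
    "completeness.customer_id": df["customer_id"].notna().mean(),
    # Uniqueness: share of rows whose key value is not duplicated.
    "uniqueness.customer_id": 1 - df["customer_id"].duplicated(keep=False).mean(),
    # Validity: share of values passing a business rule.
    "validity.country_code": df["country_code"].isin(VALID_COUNTRIES).mean(),
    "validity.email": df["email"].fillna("").str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$").mean(),
}

for name, score in metrics.items():
    print(f"{name}: {score:.0%}")
```

Scores like these can then be trended over time or surfaced on a dashboard, which is where the remediation tooling discussed in the webinar comes in.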

We caught up with Alex to ask him a few questions on how he thought the webinar had gone, whether it had changed or backed up his views, and where we can hear from him next…

Firstly, I thought the webinar was extremely well-run, with an audience of well over 300 tuning in on the day.

The biggest takeaway for me was that it confirmed a lot of the narrative we’re hearing about the middle way between two models of data quality management: a centralised, highly controlled but slow model of IT owning and running all data processes, and the “Wild West” where everyone does their own thing in an agile but disconnected way. Both sides have benefits and pitfalls, and the webinar really brought out a lot of those themes in a set of useful practical examples. It was well worth a listen as the session took a deep dive into establishing a data quality framework, looking at things like data profiling, data cleansing and data quality rules.
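
For readers who want a feel for what a data quality rule might look like in practice, the sketch below shows one hypothetical way to express rules as named predicates and route failing records to a remediation queue. It illustrates the general idea only, not the approach shown in the webinar, and every name in it is invented for the example.

```python
# Illustrative only: a hypothetical sketch of declarative data quality rules,
# where each rule is a named predicate and failing records are collected
# for remediation rather than silently dropped.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True when the record passes

RULES = [
    Rule("customer_id_present", lambda r: bool(r.get("customer_id"))),
    Rule("trade_amount_positive", lambda r: (r.get("trade_amount") or 0) > 0),
    Rule("currency_is_iso", lambda r: r.get("currency") in {"GBP", "EUR", "USD"}),
]

records = [
    {"customer_id": "C001", "trade_amount": 250.0, "currency": "GBP"},
    {"customer_id": None, "trade_amount": -10.0, "currency": "ZZZ"},
]

# Apply every rule to every record and queue up the failures for review.
remediation_queue = []
for record in records:
    failures = [rule.name for rule in RULES if not rule.check(record)]
    if failures:
        remediation_queue.append({"record": record, "failed_rules": failures})

for item in remediation_queue:
    print(item["failed_rules"], "->", item["record"])
```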

Next up from me will be a whitepaper on this subject, which we’ll be releasing really soon; there’ll be more blogs from me over at Datactics.com; and finally, I’m also looking forward to the Virtual Data Management Summit, as CEO Stuart Harvey’s got some interesting insight into DataOps to share.

Missed the webinar? Not to worry, you can listen to the full recording here.

Click here for more from Datactics, or find us on LinkedIn, Twitter or Facebook for the latest news.
