The data quality market for the calendar year 2014 was worth a fraction over $1 billion, of which software sales and maintenance accounted for around $850 million. The overall figure includes the professional services arms of data quality vendors, but excludes the (substantial) revenues of systems integrators and consultancies involved with data quality initiatives.
Data quality (DQ) is a fundamental problem for most enterprises. Who has not received a letter from a bank or utility with an incorrectly spelled name, or a duplicated marketing email? Minor data quality errors like these can be merely embarrassing, but others can cost millions. Clients have shared anecdotes of heavy costs arising from a single data quality error; in one instance an incorrect unit of measure caused a $25 million problem…
Data quality has been an issue in computing ever since people first started to store data on computers. Data may be incomplete, out of date, inconsistent, misspelt, unavailable or just plain wrong. When companies and governments started to maintain name and address lists of customers, citizens and prospects, it became clear that getting clean and accurate name and address data was a thorny problem; in the USA alone around 45 million people change address each year, so one-off data clean-up exercises are insufficient. An industry of software vendors has sprung up to address this problem, using algorithms designed to detect common misspellings and others to detect likely matches amongst multiple records that may, or may not, be duplicates. Despite this, a 2002 PwC study found that almost a quarter of mail is incorrectly addressed.
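To make the matching idea concrete, here is a minimal sketch of how such a duplicate-detection algorithm might work, using simple string similarity from the Python standard library. This is an illustrative toy, not any vendor's actual method; the record names, the 0.85 threshold and the helper functions are all assumptions for the example.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalised similarity between two strings (1.0 means identical
    after lower-casing and trimming whitespace)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def likely_duplicates(records, threshold=0.85):
    """Compare every pair of name records and return the pairs whose
    similarity meets the threshold -- probable duplicates for review."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similarity(records[i], records[j]) >= threshold:
                pairs.append((records[i], records[j]))
    return pairs

# Hypothetical customer list with likely misspellings of the same person.
customers = ["John Smyth", "Jon Smith", "John Smith", "Mary Jones"]
print(likely_duplicates(customers))
```

Real data quality tools go much further, combining phonetic algorithms, address standardisation and probabilistic record linkage, but the core task is the same: scoring candidate pairs and flagging those above a match threshold.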
Read the full report, including Information Difference’s Data Quality Landscape diagram, which recognises Datactics as a major data quality vendor with exceptional technology.