DQM Equilibrium

There’s no silver bullet for transforming data quality management (DQM) from a problem into a competitive advantage. But a structured approach that considers, among other things, the right combination of human judgment and automation will yield significant improvements that are sustainable over the long term.

The Data Explosion

Managing data has become a major struggle. The amount of data and number of vendors are exploding, and companies use data in complex ways in areas like advanced selling models, marketing and contracting analytics, and performance analysis.

These are difficult issues, and your company may be like many others: When you encounter data problems, you aren’t sure where to look for the root cause. Data stewards delve into the sources for anomalies. Management teams confer to diagnose the issue and deliver a resolution.

This approach leads to unhappy customers, overworked employees and upper management’s loss of confidence in the data operation. To cope, you hire extra data stewards and ask for more capacity. But the additional costs exceed the benefits, and you’re back at square one.

The DQM Equilibrium

The real solution is finding the right balance between the experience of human stewards and the efficiency of automation. DQM has relied upon data stewards catching errors that automated pass-fail checks cannot, but data stewards are not always cost-effective. Conversely, automated checks are cost-effective but cannot replace human judgment entirely.
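
To make the distinction concrete, here is a minimal sketch of the kind of rule-based pass-fail check that is easy to automate (in Python, with purely illustrative field names such as prescriber_id): it catches structural problems like missing fields or negative values, but not the subtler anomalies a steward would notice, such as a plausible-looking number that is simply wrong for that market or period.

    def passes_basic_checks(record: dict) -> bool:
        """Rule-based pass-fail check: cheap to automate, blind to subtle errors."""
        required = {"prescriber_id", "product_id", "units"}
        if not required.issubset(record):
            return False                    # a required field is missing
        if not isinstance(record["units"], (int, float)) or record["units"] < 0:
            return False                    # units are non-numeric or negative
        return True                         # structurally valid, yet could still be wrong

    print(passes_basic_checks({"prescriber_id": "P123", "product_id": "D456", "units": 30}))   # True
    print(passes_basic_checks({"prescriber_id": "P123", "units": -5}))                         # False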

Emerging tools and processes allow companies to automate what once was confined to the realm of human judgment. Combining these two inputs (as much automation as possible and the best use of human judgment) results in a superior approach. Using either one alone does not work well, as shown by the experiences of two pharmaceutical companies of similar size that tried to improve data reliability. One added data stewards, while the other tried fully automating data checking:

  • The company that hired data stewards cut rework time in half, and the percentage of error-free deliverables increased from 85% to 90%. However, the benefits of the extra stewards did not justify the additional cost.
  • The company that embraced full automation saw the opposite happen: Costs declined because of reduced head count, but data accuracy improved only slightly, remained static or, in some cases, declined.

Neither company enjoyed the transformative effect it had anticipated.

Like many companies, these two had to “do more with less.” They also had to capitalize on opportunities that did not exist a few years ago, including information from click-stream data, closed-loop promotion inferences, medical continuing education programs and mail campaigns. Ensuring data quality underlies all efforts to improve quality, reduce costs and take advantage of these new opportunities.

A successful DQM process should automate both data checking and the assessment of the results, raising the appropriate level of warning for data stewards. That assessment should be based on expected data variance: values within the expected range allow the process to continue, while outliers are flagged as exceptions for review. Over time, the system should “learn” to make and apply these judgments on its own.
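
As a concrete illustration, here is a minimal sketch of such a variance-based assessment (in Python; the check_metric function, the Alert type, the thresholds and the sample feed are assumptions for this example, not a reference to any particular DQM product): values within the expected band pass automatically, moderate outliers raise a warning for a data steward, and extreme outliers hold the deliverable.

    from dataclasses import dataclass
    from statistics import mean, stdev

    @dataclass
    class Alert:
        metric: str
        value: float
        severity: str                       # "pass", "warn" or "fail"

    def check_metric(name: str, value: float, history: list[float],
                     warn_z: float = 2.0, fail_z: float = 3.0) -> Alert:
        """Compare a new metric value against its expected (historical) variance."""
        mu, sigma = mean(history), stdev(history)
        z = abs(value - mu) / sigma if sigma else 0.0
        if z >= fail_z:
            severity = "fail"               # extreme outlier: hold the deliverable
        elif z >= warn_z:
            severity = "warn"               # unusual but plausible: route to a data steward
        else:
            severity = "pass"               # within expected variance: continue
        return Alert(name, value, severity)

    # Hypothetical example: weekly record counts for an incoming data feed
    history = [10120, 10340, 9980, 10210, 10450, 10050]
    print(check_metric("weekly_record_count", 13800, history))
    # Alert(metric='weekly_record_count', value=13800, severity='fail')

In practice, the per-metric history and thresholds would be updated as stewards confirm or dismiss warnings, which is where the “learning” comes in.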

How would you describe your company’s DQM equilibrium?
 

About the Author

Abhijit Nimgaonkar is a ZS Principal based in Princeton, N.J. Abhijit has helped clients improve operations in numerous areas, including business intelligence, market data warehousing, customer relationship management and sales force micro-targeting. His expertise includes developing the processes, people capabilities and systems in these areas.

 
