Organisations manage data that supports decision-making activities. As data storage costs continue to fall and organisational appetite for more, and more persistent, information expands, the problems created by poor or variable data become more pervasive. The financial, operational, social and legal consequences of poor or inappropriate decision making are extensively documented. However, many organisations fail to manage their data quality issues effectively, or even at all. Data quality management is costly, particularly when much of the effort is directed at non-critical data. This thesis reports on research that developed a method to better target data quality effort and built a software artefact to explore the validity of the method. The method identifies, ranks and sorts data elements using a mix of technical, user-based and business-based ranking points that reflect the usage and importance value of each data element. The software artefact and method were tested in an experimental setting in which different levels of random quality errors were introduced into a database and the impact of the errors assessed. The relative merits of quality assurance applied to ranked rather than unranked data elements were demonstrated. The research was based on an extensive review of academic literature, commercial literature, and current commercial data quality products and services. The thesis demonstrates the validity of a ranking approach to data quality. The method provides a means for organisations to improve their data quality assurance and thereby to improve their decision-making confidence. This research makes a significant contribution to the principles of managing data quality.
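The core of the method described above is the combination of technical, user-based and business-based ranking points into a single ordering that prioritises quality-assurance effort. The following is a minimal illustrative sketch of that idea; the element names, scoring dimensions and weights are assumptions for illustration, not the thesis's actual scoring scheme.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    """A data element scored on three hypothetical ranking dimensions."""
    name: str
    technical: float  # e.g. how often the element appears in queries/joins
    user: float       # e.g. user-assessed importance
    business: float   # e.g. criticality to business processes

def importance(e: DataElement, weights=(0.3, 0.3, 0.4)) -> float:
    """Combine the three ranking points into one score (weights are assumed)."""
    wt, wu, wb = weights
    return wt * e.technical + wu * e.user + wb * e.business

def rank_elements(elements):
    """Sort so quality-assurance effort targets the highest-ranked elements first."""
    return sorted(elements, key=importance, reverse=True)

elements = [
    DataElement("customer_id", technical=0.9, user=0.8, business=1.0),
    DataElement("fax_number", technical=0.1, user=0.1, business=0.1),
    DataElement("order_total", technical=0.7, user=0.9, business=0.9),
]
ranked = rank_elements(elements)
print([e.name for e in ranked])  # → ['customer_id', 'order_total', 'fax_number']
```

Under this kind of ordering, quality effort spent on `customer_id` is expected to pay off far more than the same effort spent on `fax_number`, which is the targeting effect the experimental error-injection study evaluated.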
|Date of Award|2011|
|Supervisors|Craig Mcdonald, John Campbell & Avon Richards-Smith|