Until fairly recently, most business decisions were based on historical trends, high-level data and gut instinct. The rise of data analytics has made it possible to replace much of that guesswork with decisions grounded in hard numbers. For example, at-risk students in higher education can now be identified based on a range of factors, not just academic performance.
Of course, data-driven decision-making is dependent on the quality of your data. You may have mountains of data, but if the data isn’t reliable, or you’re collecting the wrong data, analytics won’t be much help. Garbage in, garbage out, as the old saying goes. Also, the vast majority of data is no longer structured in perfect rows and columns. Email, images, video, social media messages and other forms of unstructured data are more difficult to process. The use of fragmented systems further complicates this issue.
Unfortunately, data quality is generally poor, according to research by Tadhg Nagle, Thomas Redman and David Sammon published in Harvard Business Review. On average, 47 percent of newly created data records contain at least one critical error. Just 3 percent of respondents said their organization's data meets the minimum acceptable threshold of at least 97 correct records out of 100.
A study from Experian identified some of the most common data issues. Data resides in silos, and manual data entry processes are slow, inefficient and error-prone. Data management roles aren't clearly defined, and there's a lack of communication between departments. All of these problems contribute to inaccurate data and an inability to properly monitor data quality. On average, U.S. respondents to the study estimate that inaccurate data wastes 27 percent of their organization's revenue, and just one in four reports a highly sophisticated, optimized approach to data quality.
Higher education faces similar data quality issues: incomplete and outdated information, often trapped in silos, and a lack of clear ownership of and responsibility for data. The result can be operational inefficiency, an inability to understand and improve the student experience, and failure to identify at-risk students and offer them the appropriate assistance.
An effective data quality strategy begins with collaboration between data creators and end users to pinpoint the root causes of quality issues, so that problems can be fixed when data is created rather than forcing people to correct or work around bad data downstream. Instead of focusing on cleaning up existing data, organizations should improve the quality of new data at the point of creation, which limits how often cleanups are needed. Finally, data quality problems shouldn't be dumped on IT's plate. IT sees little benefit from good data and little pain from bad data. Data should be the responsibility of its creators and end users, who benefit the most from access to quality data.
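To make the "fix it at the point of creation" idea concrete, here is a minimal sketch of validating a new record before it ever enters the system. The field names (`student_id`, `email`, `enrollment_date`) are hypothetical examples, not drawn from any particular product:

```python
# Minimal sketch: reject bad data at creation instead of cleaning it up later.
# Field names below are hypothetical illustrations.
import re

REQUIRED_FIELDS = {"student_id", "email", "enrollment_date"}
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record is accepted."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    email = record.get("email", "")
    if email and not EMAIL_RE.match(email):
        errors.append(f"malformed email: {email!r}")
    return errors

# A complete, well-formed record passes; a partial one is flagged on entry.
print(validate_record({"student_id": "S123", "email": "jane@example.edu",
                       "enrollment_date": "2023-09-01"}))  # []
print(validate_record({"student_id": "S124", "email": "not-an-email"}))
```

Wiring a check like this into every intake form or import job is what keeps the "97 correct records out of 100" threshold reachable without recurring cleanup projects.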
Axiom Elite helps higher education institutions overcome data quality and management issues by aggregating all data into a centralized system. Regardless of source, data is integrated and validated while sophisticated matching algorithms eliminate duplicate records. All data can then be managed through a simple, intuitive interface. Contact us to view a demo of Axiom Elite and start taking on your data quality issues.
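As a rough illustration of how duplicate matching can work, the sketch below merges records whose normalized name and date of birth agree. This is a deliberately simplified stand-in, not Axiom Elite's actual algorithm, and the field names are hypothetical; production matching typically adds fuzzy and probabilistic comparisons:

```python
# Simplified duplicate detection via normalized matching keys.
# Real matching systems use more sophisticated probabilistic techniques.
import unicodedata

def match_key(name: str, dob: str) -> tuple:
    """Build a key from an accent-stripped, case-folded name plus birth date."""
    folded = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return (" ".join(folded.lower().split()), dob)

def deduplicate(records: list) -> list:
    seen = {}
    for rec in records:
        key = match_key(rec["name"], rec["dob"])
        seen.setdefault(key, rec)  # keep the first record seen for each key
    return list(seen.values())

records = [
    {"name": "José García", "dob": "2001-04-12"},
    {"name": "jose  garcia", "dob": "2001-04-12"},  # same person, re-keyed entry
    {"name": "Ana Liu", "dob": "2000-11-30"},
]
print(len(deduplicate(records)))  # 2
```

Even this crude normalization catches the accent, casing and whitespace variants that manual data entry produces, which is why centralizing records before matching them matters: duplicates scattered across silos can't be compared at all.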