Achieving the 3 C’s of Data Quality Through Effective Data Governance

03 Jun 2020 | By PeerNova | Published in Articles


Data is one of the most valuable assets an enterprise can have. However, its existence alone does not guarantee business success. While having vast amounts of data can be beneficial, its quality and usability are even more important. Untrustworthy or low-quality data adversely affects every aspect of a business, quickly cascading throughout the enterprise landscape. Additionally, data decays as people leave their roles in the organization, companies go out of business, and mergers occur. Decayed data not only loses its value, but can also cause delayed or failed business decisions, undermining your credibility with clients. Therefore, improving data quality should be a top priority for any enterprise.

To ensure data quality, enterprises must implement an effective data governance framework. By incorporating data governance into the overall data management strategy, firms can create processes and rules to ensure that poor data quality is quickly identified and addressed appropriately. Having the right data to rely on results in increased operational efficiency, reduced costs and regulatory fines, and ultimately, better decision-making.

The 3 C’s of Data Quality

By now, you have realized that the age-old saying “quality over quantity” holds true. Data is only valuable if it satisfies the following quality dimensions:

Completeness
Front-to-back data completeness is a critical requirement for financial institutions. Incomplete data can result in incorrect reporting of critical data elements, as well as processing delays. Both of these challenges can have severe consequences from a regulatory compliance perspective. For example, CSDR requires European central securities depositories to impose penalties (in the form of financial fines) on institutions that fail to complete transactions by the contractual settlement date. Failure to settle trades in a timely manner can often be attributed to incomplete data, caused either by manual errors or system failures. Similarly, MiFID II, CCAR, and other regulatory reports rely on the completeness of the underlying data.
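
For illustration only (this is not PeerNova’s implementation), a minimal completeness check might flag records that are missing critical data elements before they can delay settlement or reporting. The field names below are hypothetical:

```python
from datetime import date

# Hypothetical critical data elements a trade record must carry before it
# can settle; the field names are illustrative, not a real schema.
REQUIRED_FIELDS = ["trade_id", "counterparty", "isin", "quantity", "settlement_date"]

def completeness_issues(trade: dict) -> list:
    """Return the critical fields that are missing or empty on a trade record."""
    return [field for field in REQUIRED_FIELDS if trade.get(field) in (None, "")]

trades = [
    {"trade_id": "T-1001", "counterparty": "ABC Bank", "isin": "US0378331005",
     "quantity": 500, "settlement_date": date(2020, 6, 5)},
    {"trade_id": "T-1002", "counterparty": "", "isin": "US0378331005",
     "quantity": 250, "settlement_date": None},
]

for trade in trades:
    missing = completeness_issues(trade)
    if missing:
        # Flag incomplete records before they delay settlement or reporting.
        print(f"{trade['trade_id']} is incomplete: missing {missing}")
```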

Correctness
Data correctness is just as important as data completeness. Data is correct if it conforms to the business rules that govern it at every point in its lifecycle, from the time it is created, through enrichment and transformation, to the time it is archived or deleted. Incorrect data can have the same adverse effect on settlement times and regulatory reporting that incomplete data has. For example, incorrect risk metrics may result in a firm exceeding its risk thresholds. Even if data inaccuracies are detected before they impact critical reports, fixing them may delay settlement, thereby incurring penalties.
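
As a simple sketch, correctness can be expressed as a set of business rules evaluated against each record; the rule names, fields, and thresholds below are hypothetical examples, not an actual rule set:

```python
# Illustrative business rules that "correct" data must satisfy; the rule
# names, fields, and thresholds are hypothetical examples.
RULES = {
    "quantity_is_positive": lambda rec: rec["quantity"] > 0,
    "exposure_within_risk_limit": lambda rec: rec["exposure"] <= rec["risk_limit"],
}

def correctness_violations(record: dict) -> list:
    """Return the names of the business rules a record violates."""
    return [name for name, rule in RULES.items() if not rule(record)]

record = {"trade_id": "T-1003", "quantity": 100,
          "exposure": 1_250_000, "risk_limit": 1_000_000}

# The exposure exceeds the hypothetical risk limit, so the record is flagged.
print(correctness_violations(record))  # ['exposure_within_risk_limit']
```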

Consistency
Consistent data is in a standardized format that is easily recognizable regardless of where it is held within the landscape. The same piece of data is synchronized across the enterprise, throughout its various systems, applications, and workflows. For example, if a trade date must be entered, does it match the MM/DD/YYYY format? Is the same piece of data identical in all systems? Even if the data formats differ across systems, do they represent the same underlying date? Checking data consistency across different representation formats is particularly challenging with existing solutions.
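
As a minimal sketch of such a consistency check, the snippet below normalizes a trade date recorded in different formats by different systems and verifies that they all represent the same underlying day; the system names and formats are assumptions for illustration:

```python
from datetime import datetime

# Hypothetical date formats used by three internal systems.
SYSTEM_FORMATS = {
    "trading": "%m/%d/%Y",     # MM/DD/YYYY
    "settlement": "%Y-%m-%d",  # ISO 8601
    "reporting": "%d %b %Y",   # e.g. 03 Jun 2020
}

def trade_dates_consistent(values: dict) -> bool:
    """Normalize each system's trade date and check they all name the same day."""
    normalized = {
        datetime.strptime(raw, SYSTEM_FORMATS[system]).date()
        for system, raw in values.items()
    }
    return len(normalized) == 1

# Three different formats, one underlying date, so the check passes.
print(trade_dates_consistent({
    "trading": "06/03/2020",
    "settlement": "2020-06-03",
    "reporting": "03 Jun 2020",
}))  # True
```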

Additional key dimensions of data quality include current, clean, compliant, timely, unique, and valid data. We will discuss these in future articles.


Current Data Quality Challenges

Currently, firms experience an array of challenges that prevent them from ensuring enterprise-wide data quality. A study published in the Harvard Business Review found that data quality is far worse than most enterprises realize, with only 3% of the data quality scores in the study rated as “acceptable.”

Fragmentation in Enterprise Landscape
One of the main data quality challenges is fragmentation across the enterprise landscape. Data is siloed within various systems, applications, and workflows. Each of these data sources consists of scattered, outdated, and duplicate records. To stay compliant, enterprises must be able to make specific quality assertions about their data—a significant challenge when data is stored in multiple systems.

Lack of End-to-End (E2E) Data Quality
Using existing metadata tools, enterprises are unable to monitor data quality throughout the data’s lifecycle. Instead, data quality is measured within individual siloed systems, applications, or workflows. As a result, enterprises struggle to measure timeliness across workflows, conduct efficient root-cause analyses, and unlock enterprise knowledge, all of which hinders better decision-making.

Static Data Quality Tools
Currently, most enterprises implement passive data governance with static data quality tools. These tools perform data quality checks as a last step in the pipeline, rather than continuously throughout the data’s lifecycle. This approach measures data quality at specific points for individual data sources; it does not provide data quality metrics across E2E systems, applications, and workflows.

Lack of Ongoing Data Quality Checks and Reliance on Bilateral Tools
Current data quality tools are also batch-oriented, typically performing data quality checks on a nightly basis. In addition, existing tools are strictly bilateral, which means that performing E2E data quality checks on an entire business flow requires multiple bilateral DQ checks. This results in a high number of “false positives”: the same upstream error can surface as duplicate issues in downstream DQ checks.
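
To illustrate why strictly bilateral checks inflate false positives, the sketch below (with hypothetical system names and values) shows a single upstream error surfacing as multiple bilateral issues, while an end-to-end view attributes it to one root cause:

```python
# A hypothetical business flow: a trade quantity from the book of record
# propagates to three downstream systems; names and values are illustrative.
book_of_record_qty = 500
downstream = {"trading": 400, "settlement": 400, "reporting": 400}
# One upstream error: "trading" captured 400 and passed it downstream.

# Bilateral checks compare each system to the book of record independently,
# so the single upstream error surfaces as three separate issues.
bilateral_issues = [sys for sys, qty in downstream.items()
                    if qty != book_of_record_qty]
print(bilateral_issues)  # ['trading', 'settlement', 'reporting']

# An E2E check over the ordered flow can attribute the mismatch to the first
# point of divergence and report a single issue instead.
flow_order = ["trading", "settlement", "reporting"]
first_divergence = next((sys for sys in flow_order
                         if downstream[sys] != book_of_record_qty), None)
print(first_divergence)  # 'trading'
```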

Benefits of Improved Data Quality

We have previously discussed three key benefits of data governance, and all three depend on improved data quality.

Better Business Decisions
With ongoing data quality checks, enterprises have cleaner, safer, and higher quality data, resulting in more accurate analytics, clearer insights, and predictive advantages. This allows for increased actionable opportunities and better business decisions.

Reduced Risk and Increased Regulatory Compliance
One of the most valuable benefits of high data quality for financial firms is reduced risk and increased regulatory compliance. Meeting the stringent timelines of regulatory bodies and submitting quality data is challenging for financial institutions. In addition, penalties have skyrocketed in recent years under privacy laws such as GDPR and CCPA. With improved data quality, firms can be confident in the data they submit to regulators.

Decreased Operational Cost
Without high-quality data, enterprises must manually comb through and verify large volumes of data, which is both resource-intensive and laborious. When data is incorrect or causes errors down the pipeline, enterprises must perform manual investigations. These efforts can lead to duplicate investigations and false positives, driving up operational costs. Improved data quality reduces the need for this manual intervention, lowering operational costs.

Effective Data Governance and Data Quality

Data quality improvement is a substantial piece of the data governance puzzle, which creates a framework and sets the parameters for data management. An effective data governance model implements processes and rules to ensure that poor data quality is identified and addressed appropriately on an ongoing basis. Instead of manually fixing each individual error, leaders can remediate the process, rules, or systems that caused the data issue in the first place.

Without cohesive monitoring of quality throughout the data’s lifecycle, the smallest error or discrepancy can trickle down the pipeline and significantly damage an enterprise’s operational efficiency, regulatory compliance, and overall bottom line.

PeerNova’s Cuneiform Platform: Active Data Governance

PeerNova’s Cuneiform Platform is an active data governance tool that perpetually runs data quality and timeliness rules on live data across the organization’s systems, applications, and workflows. The platform ensures data quality across the entire workflow, which reduces duplicate false positives significantly. Through automation and ongoing data quality checks, the solution not only remedies existing data issues but also continuously manages and monitors data throughout its entire lifecycle across siloed sources.

Using this dynamic approach to data quality and management, the platform creates E2E, integrated, and active lineages across disparate tools and systems, making the solution significantly more efficient than other data governance tools.

Data quality is an important issue that enterprises should address early on through active data governance. Having high-quality data ensures that enterprises have more accurate reports, clearer insights, and strategic advantages, resulting in reduced risk, increased regulatory compliance, and decreased operational cost. Ultimately, all of these factors lead to better business decisions and enterprise success.

As data volumes continue to grow, it is key that enterprises unlock and leverage their most powerful asset. Even so, the old saying “quality over quantity” still holds.

To learn more about how PeerNova’s Cuneiform Platform can help your enterprise ensure E2E data quality, be sure to get in touch with us and request a demo today.

Sources:
Nagle, Tadhg, et al. “Only 3% of Companies’ Data Meets Basic Quality Standards.” Harvard Business Review, 9 Mar. 2018.



