How to Architect Data Quality on Snowflake – a case for serverless, autonomous, in-situ data validation

Executive Summary

Without effective and comprehensive validation, a data warehouse becomes a data swamp.

With the accelerating adoption of Snowflake as the cloud data warehouse of choice, the need for autonomously validating data has become critical.

While existing data quality solutions can validate Snowflake data, they rely on a rule-based approach that does not scale to hundreds of data assets and is prone to gaps in rule coverage. More importantly, these solutions do not provide an easy way to access an audit trail of validation results.

Solution: Organizations must consider a scalable solution that can autonomously monitor hundreds of tables and detect data errors as soon as the data lands.

Current Approach and Challenges

The current focus in Snowflake data warehouse projects is on data ingestion: the process of moving data from multiple sources (often in different formats) into a single destination. Only after ingestion is the data used and analyzed by business stakeholders, which is when data errors and issues begin to surface. As a result, business confidence in the data hosted in Snowflake erodes. Our research estimates that, on average, 20-30% of any Snowflake analytics or reporting project is spent identifying and fixing data issues. In extreme cases, the project is abandoned entirely.

Current data validation tools are designed to establish data quality rules one table at a time, so implementing them across hundreds of tables carries significant cost. This table-by-table focus often leads to incomplete rule sets, or to certain tables getting no rules at all, leaving risks unmitigated.

In general, data engineering teams experience the following operational challenges when integrating current data validation solutions:

  • The time it takes to analyze the data and consult subject matter experts to determine which rules need to be implemented
  • Implementing rules specific to each table, so the effort grows linearly with the number of tables in Snowflake
  • Data must be moved from Snowflake to the data quality solution, introducing latency as well as significant security risks
  • Existing tools come with limited audit trail capability; generating an audit trail of rule execution results for compliance requirements takes time and effort from the data engineering team
  • Maintaining the implemented rules as the data evolves

Solution Framework

Organizations must consider data validation solutions that, at a minimum, meet the following criteria:

  1. Machine Learning Enabled: Solutions must leverage AI/ML to:
  • Identify and codify the data fingerprint used to detect data errors related to freshness, completeness, consistency, conformity, uniqueness, and drift
  • Keep the effort required to establish validation checks independent of the number of tables; ideally, a data engineer or data steward should be able to establish validation checks for hundreds of tables with a single click
  2. In-Situ: Solutions must validate data at the source, without moving it to another location, to avoid latency and security risks. Ideally, the solution should be powered by Snowflake itself for all data quality analysis (a minimal sketch of this approach appears at the end of this section).
  3. Autonomous: The solution must be able to:
    • Establish validation checks autonomously when a new table is created.
    • Update existing validation checks autonomously when the underlying data within a table changes.
    • Validate incremental data as soon as it arrives and alert the relevant teams when the number of errors becomes unacceptable.
  4. Scalability

The solution must offer the same level of scalability as the underlying Snowflake platform used for storage and computation.

  5. Serverless

Solutions must provide a serverless, scalable data validation engine. Ideally, the solution should build on Snowflake's own underlying compute capabilities.

  6. Part of the Data Pipeline

The solution must be easy to integrate into existing data pipeline jobs, as sketched below.
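
As an illustration, the sketch below shows how a validation step might be wired into a pipeline job so that a bad load fails fast. It is a minimal sketch, assuming the snowflake-connector-python driver, a hypothetical ORDERS table with CUSTOMER_ID and LOAD_DATE columns, and a 2% error threshold; none of these are prescribed by the criterion itself.

```python
# Minimal sketch: a validation step a pipeline job can call before publishing
# a table. The ORDERS table, its CUSTOMER_ID / LOAD_DATE columns, and the 2%
# threshold are illustrative assumptions.
import snowflake.connector


def validate_orders(conn, max_error_rate: float = 0.02) -> None:
    """Run a completeness check inside Snowflake and fail the step on breach."""
    cur = conn.cursor()
    cur.execute(
        """
        SELECT COUNT(*)                      AS total_rows,
               COUNT_IF(CUSTOMER_ID IS NULL) AS null_customer_ids
        FROM   ORDERS
        WHERE  LOAD_DATE = CURRENT_DATE()
        """
    )
    total_rows, null_ids = cur.fetchone()
    error_rate = (null_ids / total_rows) if total_rows else 0.0
    if error_rate > max_error_rate:
        # Raising fails the pipeline task, so bad data never reaches reports.
        raise RuntimeError(
            f"ORDERS completeness check failed: {error_rate:.1%} NULL CUSTOMER_IDs"
        )


if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account="my_account", user="dq_service", password="***",
        warehouse="DQ_WH", database="ANALYTICS", schema="PUBLIC",
    )
    validate_orders(conn)
```

Because the check runs as a Snowflake query, the validation step adds no data movement; the pipeline only waits for a single aggregate result.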

  7. Integration and Open API

Solutions must offer open APIs for easy integration with enterprise scheduling, workflow, and security systems.

  8. Audit Trail/Visibility of Results

Solutions must provide an easy-to-navigate audit trail of the validation test results, along the lines of the sketch below.
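
As a sketch of what such an audit trail can look like in practice, the snippet below records every check run in a results table and exposes a simple summary query. The DQ_AUDIT table, its columns, and the seven-day window are illustrative assumptions; the connection object is the same kind of snowflake-connector-python connection used in the earlier sketch.

```python
# Minimal sketch of an audit trail: each check run is appended to a results
# table, and compliance reviews become a simple query. The DQ_AUDIT table and
# its columns are illustrative assumptions.
def log_check_result(conn, table_name, check_name, passed, error_rate):
    conn.cursor().execute(
        """
        INSERT INTO DQ_AUDIT (RUN_TS, TABLE_NAME, CHECK_NAME, PASSED, ERROR_RATE)
        VALUES (CURRENT_TIMESTAMP(), %s, %s, %s, %s)
        """,
        (table_name, check_name, passed, error_rate),
    )


def audit_summary(conn, days: int = 7):
    """Pass/fail counts per table over the last `days` days."""
    cur = conn.cursor()
    cur.execute(
        """
        SELECT TABLE_NAME,
               COUNT_IF(PASSED)     AS checks_passed,
               COUNT_IF(NOT PASSED) AS checks_failed
        FROM   DQ_AUDIT
        WHERE  RUN_TS >= DATEADD(day, -%s, CURRENT_TIMESTAMP())
        GROUP  BY TABLE_NAME
        ORDER  BY checks_failed DESC
        """,
        (days,),
    )
    return cur.fetchall()
```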

  9. Business Stakeholder Control

Solutions must give business stakeholders full control of the auto-discovered and implemented rules. Business stakeholders should be able to add, modify, and deactivate rules without involving data engineers.
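
To close the section, here is a minimal sketch of the in-situ, profile-driven approach referenced in criteria 1 and 2: a table is profiled with aggregate queries that run entirely inside Snowflake, and the resulting fingerprint is turned into baseline checks. The use of INFORMATION_SCHEMA for column discovery, the specific statistics, and the thresholds are illustrative assumptions, not a prescribed algorithm.

```python
# Minimal sketch of in-situ profiling: all statistics are computed inside
# Snowflake; only the small per-column fingerprint is returned to the client.
# Column discovery, the chosen statistics, and the thresholds below are
# illustrative assumptions.
def profile_table(conn, schema: str, table: str) -> dict:
    """Return a per-column fingerprint (null fraction, distinct ratio)."""
    cur = conn.cursor()
    # Discover the table's columns from the current database's catalog.
    cur.execute(
        """
        SELECT COLUMN_NAME
        FROM   INFORMATION_SCHEMA.COLUMNS
        WHERE  TABLE_SCHEMA = %s AND TABLE_NAME = %s
        """,
        (schema, table),
    )
    columns = [row[0] for row in cur.fetchall()]

    fingerprint = {}
    for col in columns:
        # One aggregate query per column; identifiers are taken from the
        # catalog, not from user input.
        cur.execute(
            f"""
            SELECT COUNT(*),
                   COUNT_IF({col} IS NULL),
                   APPROX_COUNT_DISTINCT({col})
            FROM   {schema}.{table}
            """
        )
        total, nulls, distinct = cur.fetchone()
        fingerprint[col] = {
            "null_fraction": nulls / total if total else 0.0,
            "distinct_ratio": distinct / total if total else 0.0,
        }
    return fingerprint


def propose_checks(fingerprint: dict) -> list[str]:
    """Turn a fingerprint into baseline validation checks."""
    checks = []
    for col, stats in fingerprint.items():
        if stats["null_fraction"] == 0.0:
            checks.append(f"{col} must not contain NULLs")  # completeness
        if stats["distinct_ratio"] > 0.99:
            checks.append(f"{col} values must be unique")    # uniqueness
    return checks
```

Running such a profiler over every table listed by SHOW TABLES is what turns per-table rule writing into a single bulk operation, and because all heavy computation executes as Snowflake queries, the approach inherits the platform's scalability (criterion 4) and needs no separate validation server (criterion 5).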

Conclusion

Data is the most valuable asset for modern organizations. Current approaches to validating data, in Snowflake in particular, are riddled with operational challenges, leading to a trust deficit and to time-consuming, costly methods for fixing data errors. There is an urgent need to adopt a standardized, autonomous approach for validating Snowflake data to prevent the data warehouse from becoming a data swamp.