Data Quality Engineer (Hybrid), Cincinnati, OH, USA (No H1B / No CPT)
Email: [email protected] |
From: Ankit Upadhyay, Pivotal Technologies ([email protected])
Reply to: [email protected]
Role: Data Quality Engineer
PV: Eliassen Group
Client: Kroger, Division: Supply Chain
RTR: Yes, with last 4 of SSN
Location: Cincinnati, OH (candidates must be local)

Notes (from manager or AE insight):
- Prescreen consists of 3 unique questions that must be answered; previously completed prescreens will not be accepted.
- Top skills needed: testing, SQL, Azure Databricks, and Spark/PySpark.

Overview:
We are seeking a skilled Data Quality Engineer to join our team and play a critical role in ensuring the reliability, accuracy, and consistency of data within our warehouse and manufacturing data platform in the Kroger supply chain space. While the title may suggest a focus on quality assurance, this role requires a blend of data engineering and testing expertise. The Data Quality Engineer will be responsible for implementing robust test frameworks, writing automation test cases, and collaborating closely with data engineers to validate and verify the functionality of data pipelines and processes. This is a key role in maintaining the integrity of our data platform and driving continuous improvement in data quality standards.

Technical Skills:
- Proficiency in data engineering technologies such as Azure Databricks, Synapse Analytics, and SQL Hyperscale.
- Strong programming skills in Python and PySpark for developing automation test scripts.
- Experience building and implementing test frameworks for data-centric applications in Azure cloud environments.
- Knowledge of data quality assurance methodologies and best practices.
- Familiarity with version control systems and CI/CD pipelines for automated testing and deployment.
- Understanding of data modeling, ETL processes, and data warehousing concepts.

Key Responsibilities:
- Collaborate with data engineers to understand data pipelines, transformations, and business rules, and develop comprehensive test strategies.
- Design, implement, and maintain test frameworks for validating data quality, completeness, and accuracy across all stages of the data lifecycle.
- Develop automation test scripts in Python and PySpark to execute test cases against data pipelines and processes.
- Conduct regression testing to ensure the stability and reliability of data pipelines after code changes or updates.
- Implement monitoring and alerting mechanisms to proactively identify and address data quality issues in production environments.
- Work closely with cross-functional teams to define acceptance criteria and ensure that data quality requirements are met.
- Document test cases, test results, and defects to facilitate collaboration and knowledge sharing within the team.
- Continuously evaluate and improve testing processes and methodologies to enhance data quality assurance practices.
- Stay current on emerging technologies and trends in data quality engineering and incorporate them into our testing practices as appropriate.

Thanks & Regards,
Ankit Upadhyay
Technical Recruiter
Office: +1 (703) 570-8775 (Ext. 217)
LinkedIn: linkedin.com/in/ankit-upadhyay-a689a1232
Pivotal Technologies, Inc. - Your Vision, Our Process
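To give candidates a feel for the kind of automation test cases this role involves, here is a minimal sketch of row-level completeness and uniqueness checks. It uses plain Python dictionaries in place of a Spark DataFrame so it is self-contained; the record fields, rule names, and sample data are hypothetical, and in practice such rules would run as PySpark jobs against Databricks tables.

```python
# Data-quality check sketch: completeness and uniqueness rules.
# Plain dicts stand in for DataFrame rows; the shipment schema below
# is a hypothetical example, not the actual Kroger data model.

def check_completeness(rows, required_fields):
    """Return (row_index, field) pairs where a required field is missing or empty."""
    failures = []
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                failures.append((i, field))
    return failures

def check_uniqueness(rows, key_field):
    """Return key values that appear in more than one row."""
    seen, dupes = set(), set()
    for row in rows:
        key = row.get(key_field)
        if key in seen:
            dupes.add(key)
        seen.add(key)
    return sorted(dupes)

# Hypothetical warehouse shipment records with two seeded defects:
# row 1 has an empty warehouse, and "S1" appears twice.
shipments = [
    {"shipment_id": "S1", "warehouse": "CIN-01", "qty": 40},
    {"shipment_id": "S2", "warehouse": "",       "qty": 15},
    {"shipment_id": "S1", "warehouse": "CIN-02", "qty": 12},
]

missing = check_completeness(shipments, ["shipment_id", "warehouse", "qty"])
dupes = check_uniqueness(shipments, "shipment_id")
print(missing)  # [(1, 'warehouse')]
print(dupes)    # ['S1']
```

In a real pipeline these rule functions would typically be wrapped in a test framework (e.g. pytest) and wired into the CI/CD pipeline so that every data pipeline change is regression-tested automatically.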
[email protected] View all |
Wed Mar 20 23:58:00 UTC 2024