Data Engineer (Python/PySpark/AWS) - Remote, USA
Email: [email protected]
From: Abhishek Singh, Vizon Inc <[email protected]>
Reply to: [email protected]

Job Description - Data Engineer (Python/PySpark/AWS)
Location: Sunnyvale, California
Duration: 9-month project
Onsite: Tues-Thurs
Interview: Skype; Glider tests must be completed

Role Summary:
The Data Engineer will be responsible for building and maintaining the data pipelines that process data from CSV files through various transformation stages, following the Architect's data model designs. This role requires hands-on experience with data transformations, working with large datasets, and implementing robust ETL processes within AWS.

Key Responsibilities:
- Develop ETL pipelines to process and transform data across bronze, silver, and gold layers.
- Implement data cleansing, validation, and transformation processes to support analytics-ready datasets.
- Collaborate with the Sr. Data Engineer and Data Architect to ensure alignment with data models.
- Optimize data workflows to improve processing times and resource utilization.

Skills and Qualifications:
- Bachelor's degree in Data Engineering, Computer Science, or a related field.
- 3-5 years of experience in data engineering, preferably within AWS environments.
- Proficiency in PySpark, SQL, AWS Glue, and S3.
- Strong problem-solving skills and attention to detail.
Thu Nov 14 05:22:00 UTC 2024