Requirement: Big Data Engineer with PySpark and Hive - 2 positions - Phoenix, AZ - Onsite from Day 1
Email: [email protected]
From: SAPNA, ITECS [email protected]
Reply to: [email protected]

Position Title: Big Data Engineer with PySpark and Hive (2 roles)
Start Date: ASAP
Location: Phoenix, AZ (onsite from day 1)

Required Technical Skills:
Big data engineering with PySpark, Hive, ETL automation, and data pipeline optimization. Candidates must have 7+ years (preferably more) of hands-on experience with these technologies and a detailed understanding of how PySpark and Hive are used to optimize SQL queries and data pipelines. Entry-level experience will not be considered for these roles.

Other Key Skills:
Must be able to work autonomously: gather requirements from business stakeholders and drive solution design and implementation. May potentially lead other team members.

IMPORTANT: Please include the pre-screened assessment record for the respective skillset along with the submission when sharing the profile.

Keywords: Arizona
Posted: Tue Jul 18 13:06:00 UTC 2023