Data Engineer, Columbus, OH (local candidates only; GC or USC) at Columbus, Ohio, USA
Email: [email protected]
From: Himanshu Pandey, WISE EQUATION SOLUTION INC [email protected]
Reply to: [email protected]

Job Title: Data Engineer (Python/Spark/PySpark/AWS)
Location: Columbus, OH (local)

Job Overview:
We are seeking experienced Data Engineers to join a critical AWS migration project. The role involves building data pipelines with PySpark, Python, and Spark while migrating data from a legacy platform to a modern AWS-based platform. The project includes working with a data lake, AWS services, and potentially Databricks, depending on the candidate's experience. This is a contract-to-hire opportunity, with interviews conducted in 1-2 rounds. We are hiring for both lead and developer positions at two locations; lead engineers need additional experience in Java.

Key Responsibilities:

Lead Engineer (1 per location):
- Lead a team of data engineers in developing data pipelines using PySpark and Java (roughly 70% PySpark, 30% Java).
- Provide technical direction and mentorship to other team members.
- Design and implement scalable data solutions, including migrating data from a legacy platform to AWS.
- Collaborate with cross-functional teams to ensure seamless integration and delivery of data projects.

Data Engineers (3 per location):
- Develop and maintain data pipelines using Python, Spark, and PySpark.
- Participate in the migration of data from legacy platforms to AWS.
- Optimize and enhance data processing pipelines and ensure efficient data flow.
- Collaborate with lead engineers to ensure project deliverables meet business requirements.

Qualifications:

Lead Engineer:
- 7-10 years of enterprise-level data engineering experience.
- Strong expertise in both Java and PySpark (roughly 70% PySpark, 30% Java).
- Strong experience with AWS services and data migration projects.
- Experience with Databricks is a plus; candidates must be able to speak to it if it is listed on their resume.

Data Engineers:
- 3-5+ years of enterprise-level data engineering experience.
- Proficiency in Python, Spark, and PySpark.
- Strong knowledge of SQL and experience creating pipelines in PySpark.
- Experience with AWS and data migration is required.
- Knowledge of data lakes and Databricks is a plus.

Required Skills:
- Proficiency in Python, Spark, and PySpark.
- Strong knowledge of AWS and migration processes.
- Strong SQL skills for data manipulation and querying.
- Experience with data lake architectures.
- Ability to work collaboratively in a fast-paced environment.
- Strong problem-solving and analytical skills.

Keywords: information technology, Ohio, Data Engineer, Columbus OH, local only, GC, USC
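For illustration, a minimal PySpark sketch of the kind of pipeline work described in the overview: reading a staged legacy extract, applying a light cleanup transformation, and writing Parquet to an S3-backed data lake. All bucket names, paths, and column names below are hypothetical, not taken from the project itself.

```python
# Illustrative PySpark pipeline: legacy extract -> cleanup -> S3 data lake.
# Paths, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("legacy-to-aws-migration").getOrCreate()

# Hypothetical legacy extract staged as CSV; in practice this might be a JDBC read.
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://legacy-staging-bucket/orders/")  # hypothetical bucket
)

# Basic cleanup and enrichment, representative of the transformation step.
curated = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .withColumn("ingest_ts", F.current_timestamp())
)

# Write to the curated zone of the data lake, partitioned for downstream queries.
(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://datalake-curated-bucket/orders/")  # hypothetical bucket
)

spark.stop()
```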
Posted: Thu Oct 17 00:06:00 UTC 2024