
Job opening || AWS Data Engineer || Onsite || Contract at Remote, Remote, USA
Email: [email protected]
From:

iyyappan,

Smartitframe

[email protected]

Reply to:   [email protected]

Dear,

Greetings from Smart IT Frame! We hope you are doing well.

Smart IT Frame specializes in providing your most critical line of resources. Whether it's permanent staffing, contract staffing, contract-to-hire, or executive search, we understand the importance of delivering the most suitable talent on time and within budget. With our core focus on emerging technologies, we have provided global technology workforce solutions in North America, Canada, and India. We take pride in delivering specialized talent, superior performance, and seamless execution to meet the challenging business needs of customers worldwide.

Job Title:

Lead AWS Data Engineer

Location:

Onsite - preferably in Chicago, IL; Austin, TX; or Santa Monica, CA

Duration:

6+ Months

Skills Required:

AWS, SQL, PySpark and Databricks

Responsibilities:

Collaborate with product managers, data scientists, data analysts, and engineers to define features needed for a Data Platform

Provide mentorship and technical leadership for a team

Work closely with other engineers to scale infrastructure and improve reliability and efficiency

Improve developer tooling with a focus on reliability and efficiency

Write good technical documentation

Perform large system upgrades and migrations

Maintain and improve multiple CI/CD pipelines

Act as an in-house data expert who makes recommendations regarding standards for code quality and pipeline architecture

Develop, deploy, and maintain data processing pipelines using cloud technologies such as AWS, Kubernetes, Lambda, Kafka, Airflow, Redshift, S3, Glue, and EMR

Make smart engineering and infrastructure decisions based on data auditing and collaboration

Lead and architect cloud-based data infrastructure solutions to meet stakeholder needs.

Skills & Qualifications:

8+ years of professional experience with at least one of the major cloud providers (AWS, Azure, or GCP)

8+ years of experience engineering data pipelines on large-scale data sets using data technologies (Python, Databricks, PySpark, Kafka)

Experience building or maintaining a Data Platform that supports multiple engineering teams and processes big data

Ability to quickly learn complex domains and new technologies.

Innately curious and organized, with the drive to analyze data to identify deliverables, anomalies, and gaps, and to propose solutions to address these findings

Experience designing data models that have been implemented in production

Strong experience in writing complex SQL and ETL development with experience processing large data sets

Familiarity with AWS Services (Redshift, RDS, EKS, S3, EMR, Glue, Lambda)

Experience using GitHub, Docker, Terraform, CodeFresh, Jira

Experience contributing to full lifecycle deployments with a focus on quality and scalability.

Good to Have:

Experience with customer data platform tools such as Segment

Experience contributing to full lifecycle deployments with a focus on testing and quality

Experience with data quality processes, data quality checks, validations, data quality metrics definition and measurement

AWS/Kafka/Databricks or similar certifications.

Wed Jul 19 20:13:00 UTC 2023





