
Data Engineer at Remote, Remote, USA
Email: [email protected]
From:

Satyajit Nayak,

TEK Inspirations

[email protected]

Reply to: [email protected]

Hello, 

Hope you are doing well!

Please review the JD and share resumes.

AWS Data Engineer

Remote - Madison, WI

9+ Month Contract

Need a candidate with 15 to 20 years of experience

State of WI

Any visa

Open rate

Required Skills

15-20+ years of experience, with extensive hands-on experience designing, developing, and maintaining data pipelines and ETL processes on AWS Redshift, including data lakes and data warehouses.

Proficiency in SQL programming and Redshift stored procedures for efficient data manipulation and transformation.

Hands-on experience with AWS services such as AWS DMS, Amazon S3, AWS Glue, Redshift, Airflow, and other pertinent data technologies.

Strong understanding of ETL best practices, data integration, data modeling, and data transformation.

Experience with complex ETL scenarios, such as CDC and SCD logic, and integrating data from multiple source systems.

Demonstrated expertise in AWS DMS for seamless ingestion from on-prem databases to the AWS cloud.

Proficiency in Python programming, with a focus on developing efficient Airflow DAGs and operators (see the sketch after this list).

Experience converting Oracle scripts and stored procedures to Redshift equivalents.

Familiarity with version control systems, particularly Git, for maintaining a structured code repository.

Proficiency in identifying and resolving performance bottlenecks and fine-tuning Redshift queries.

Strong coding and problem-solving skills, and attention to detail in data quality and accuracy.

Ability to work collaboratively in a fast-paced, agile environment and effectively communicate technical concepts to non-technical stakeholders.

Proven track record of delivering high-quality data solutions within designated timelines.

Experience working with large-scale, high-volume data environments.

Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

AWS certifications related to data engineering or databases are a plus.

The ideal candidate possesses several years of hands-on experience working with Redshift and other AWS services and a proven track record of delivering high-performing, scalable data platforms and solutions within the AWS cloud.
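To illustrate the Airflow item above, here is a minimal sketch of a DAG that bulk-loads a daily S3 extract into Redshift. All names (DAG id, schema, table, bucket, connection id) are hypothetical, and it assumes the apache-airflow-providers-amazon package is installed.

from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.s3_to_redshift import (
    S3ToRedshiftOperator,
)

with DAG(
    dag_id="daily_s3_to_redshift",      # hypothetical pipeline name
    start_date=datetime(2023, 9, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_orders = S3ToRedshiftOperator(
        task_id="load_orders",
        schema="staging",                # hypothetical target schema
        table="orders",                  # hypothetical target table
        s3_bucket="example-data-lake",   # hypothetical bucket
        s3_key="orders/{{ ds }}/",       # partition keyed by the run date
        redshift_conn_id="redshift_default",
        copy_options=["FORMAT AS PARQUET"],
    )

S3ToRedshiftOperator issues a Redshift COPY under the hood, which is the bulk-load path Redshift is optimized for, and the templated s3_key ({{ ds }}) picks up the matching partition on each scheduled run.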

Job Summary:

Collaborate with data engineering and development teams to design, develop, test, and maintain robust and scalable ELT/ETL pipelines using SQL scripts, Redshift stored procedures, and other AWS tools and services.

Collaborate with our engineering and data teams to understand business requirements and data integration needs, and translate them into effective data solutions that yield top-quality outcomes.

Architect, implement, and manage end-to-end data pipelines, ensuring data accuracy, reliability, quality, performance, and timeliness.

Employ AWS DMS and other services for efficient data ingestion from on-premises databases into Redshift.

Design and implement ETL processes, encompassing Change Data Capture (CDC) and Slowly Changing Dimension (SCD) logic, to seamlessly integrate data from diverse source systems (a minimal SCD sketch follows below).
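As a concrete reference for the SCD item above, here is a minimal sketch of Type 2 dimension handling in Redshift, driven from Python with AWS's redshift_connector driver. The cluster endpoint, credentials, and the dim_customer / stg_customer tables and columns are all hypothetical.

import redshift_connector

# Close out the current version of any customer whose tracked attribute
# changed in the staged extract (hypothetical tables and columns).
CLOSE_OUT_SQL = """
UPDATE dim_customer
SET valid_to = GETDATE(), is_current = FALSE
FROM stg_customer s
WHERE dim_customer.customer_id = s.customer_id
  AND dim_customer.is_current
  AND dim_customer.customer_name <> s.customer_name;
"""

# Insert a fresh current version for changed customers (just closed out
# above) and for customers seen for the first time.
INSERT_SQL = """
INSERT INTO dim_customer (customer_id, customer_name, valid_from, valid_to, is_current)
SELECT s.customer_id, s.customer_name, GETDATE(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current
WHERE d.customer_id IS NULL;
"""

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical endpoint
    database="dev",
    user="etl_user",
    password="...",  # elided; pull from Secrets Manager in practice
)
cursor = conn.cursor()
cursor.execute(CLOSE_OUT_SQL)
cursor.execute(INSERT_SQL)
conn.commit()  # autocommit is off by default, so both statements commit as one transaction

Running both statements in a single transaction keeps the dimension consistent: a changed customer never appears to readers with zero or two current rows.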

Regards,

Satyajit Nayak

Sr. Technical Recruiter

TEK Inspirations LLC | 13573 Tabasco Cat Trail, Frisco, TX 75035

E: [email protected]

LinkedIn: linkedin.com/in/satyajeet-nayak-85751625b


