
Locals Only: AWS Data Engineer @ Baltimore, MD (Hybrid) at Baltimore, Maryland, USA
Email: [email protected]
Hello,

Job Title: AWS Data Engineer

Job Location: Baltimore, MD (Hybrid)

Job Duration: 24 months

Need Consultants from MD/DC/VA Only

Required Skills:

At least Nine (9) years of experience working on AWS
cloud-based batch and streaming data pipelines.

Strong proficiency in AWS cloud services, including
Kinesis, S3, Lake Formation, Glue, and Step Functions.

In-depth knowledge of SQL databases, such as Aurora
and SQL Server, and data lakes in S3, as well as enterprise data in
Redshift/Snowflake.

Hands-on experience with ETL tools, data
transformation, and data integration techniques.

Familiarity with data governance, data privacy, and
security best practices in AWS environments.

Strong problem-solving skills and the ability to
troubleshoot complex data pipeline issues.

Excellent communication and teamwork skills to
collaborate effectively with cross-functional teams.

AWS certifications, such as AWS Certified Data
Analytics - Specialty or AWS Certified Big Data - Specialty, are advantageous.

Responsibilities:

Design, implement, and maintain batch and streaming
data pipelines between various SQL sources, including Aurora and SQL
Server, and a target data lake in S3, as well as enterprise data stored in
Redshift/Snowflake.

Apply expertise in AWS cloud services, such as Kinesis,
S3, Lake Formation, Glue, and Step Functions, to build scalable, reliable,
and high-performance data pipelines that enable seamless data integration
and data-driven insights.

Design end-to-end data pipelines that efficiently
extract, transform, and load data from SQL sources (Aurora, SQL Server) to the
target data lake in S3 and the enterprise data in Redshift/Snowflake.

Implement both batch and real-time streaming data
integration solutions using AWS Kinesis and other relevant technologies.

Develop data transformation processes using AWS Glue
or other ETL tools to harmonize, cleanse, and enrich data for analytical use.
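As a purely illustrative sketch (not part of the posting, and independent of any specific ETL tool), a harmonize/cleanse/enrich step of the kind described above might look like the following in plain Python; the field names (`order_id`, `amount`) are hypothetical:

```python
def cleanse_records(records):
    """Harmonize, cleanse, and enrich raw rows before loading.

    - cleanse: drops rows missing the (hypothetical) key ``order_id``
    - harmonize: normalizes ``amount`` to a float, defaulting blanks to 0.0
    - enrich: adds a derived ``amount_cents`` field
    """
    cleaned = []
    for row in records:
        if not row.get("order_id"):
            continue  # cleanse: skip rows without a primary key
        amount = float(row.get("amount") or 0.0)  # harmonize types
        cleaned.append({
            "order_id": str(row["order_id"]).strip(),
            "amount": amount,
            "amount_cents": int(round(amount * 100)),  # enrich
        })
    return cleaned
```

In a real AWS Glue job the same logic would typically be expressed as PySpark transformations; this sketch only shows the shape of the cleansing rules.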

Oversee the setup and configuration of the data lake
in S3, applying AWS Lake Formation best practices for data organization,
cataloging, and access control.

Ensure adherence to data governance and security
standards across the data pipelines, guaranteeing data privacy and compliance.

Continuously monitor and optimize the performance of
the data pipelines, addressing bottlenecks and ensuring efficient data
processing and delivery.

Implement error handling mechanisms and robust data
monitoring to identify and resolve data pipeline issues proactively.

Establish and maintain data cataloging and lineage
information using AWS Glue Data Catalog to enable data discoverability and
traceability.

Create comprehensive technical documentation,
including design specifications, data flow diagrams, and operational guides.

Collaborate with data analysts, data scientists, and
other stakeholders to understand data requirements and deliver reliable data
solutions.

Ensure data governance principles are implemented
throughout the data pipelines to maintain data quality and integrity.

Thanks & Regards

Raj Kumar

Gtalk: [email protected]

Posted: Tue Sep 05 21:11:00 UTC 2023
