
Onsite Contract Position for AWS Data Engineer with Spark and Python in Bloomfield, CT, USA
Email: [email protected]
From:
Gaurav,
USG
[email protected]
Reply to: [email protected]

Hi,

Hope you are doing great.

Please go through the job description below. If you are interested, please share an updated Word copy of your resume and the best time to reach you by phone.

Position: AWS Data Engineer

Location: Bloomfield, CT

Duration: Contract

Job Description:

Take part in analysis, requirements gathering, and design.
Understand requirements from the functional/technical spec.
Maintain current, in-depth knowledge of data-related IaaS, PaaS, and SaaS technologies.
Lead and contribute to data implementation projects and/or project-independent workstreams.
Work independently, or as part of a team, to design and develop enterprise data solutions.
Define client data strategy, including designing multi-phased implementation roadmaps.
Understanding of distributed systems and architecture design trade-offs.
Prior experience in data modeling and data architect roles for enterprise data modeling across multiple subject areas.
Hands-on development experience using and migrating data to cloud platforms (e.g., lift-and-shift migrations).
Research, analyze, recommend, and select technical approaches for solving difficult and challenging development and integration problems.
Be responsible for small to large projects, automation, task tracking, process execution, and metrics for Enterprise Data Lake workloads.
Collaborate with field sales, pre-sales, training, and support teams to help partners and customers learn and use AWS services such as AWS Lake Formation, Amazon Aurora, AWS Data Pipeline, Amazon Athena, AWS Glue, Amazon S3, Amazon DynamoDB, Amazon Relational Database Service (RDS), Amazon Elastic MapReduce (EMR), Amazon Kinesis, and Amazon Redshift. Experience in architecture, software design, and operations in hybrid environments as well as projects at scale.
Demonstrated consulting skills, ideally through previous roles and delivery of on-site technical engagements with partners and customers. This includes participating in pre-sales on-site visits, understanding customer requirements, creating consulting proposals, and creating packaged data solution offerings.
Engagements will include on-site projects proving the use of database, data, and analytics solutions across on-premises and cloud environments. Customers are looking to migrate existing solutions as well as to develop new architectures and applications in the cloud. Strong verbal and written communication skills, with the ability to work effectively across internal and external organizations.
Work with engineering and support teams to convey partner and customer needs and feedback as input to technology roadmaps. Share real-world challenges and recommend new capabilities that would simplify adoption and drive greater value from the use of AWS services. Experience in, or a desire to work specifically in, Healthcare and Life Sciences.
Imagine bold possibilities and work with our clients and partners to find innovative new ways to satisfy business needs through database innovation, big data, and business intelligence. Demonstrated ability to think strategically about business, product, and technical challenges.
Provide technical assistance to the offshore team and review all code.
Manage work and apply various performance-tuning techniques.
Understand the existing framework and develop/fine-tune the jobs accordingly.
Coordinate with the QA team and resolve issues found during testing.
Schedule and plan work items based on business priority.
Take complete ownership of the SDLC.
Strong skills in Oracle or a comparable RDBMS; sound knowledge of writing SQL queries with joins and subqueries.
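The joins-and-subqueries requirement above can be illustrated with a minimal sketch. The customers/orders schema is hypothetical, and SQLite (via Python's standard library) stands in for Oracle purely so the example is self-contained and runnable:

```python
import sqlite3

# In-memory database with an illustrative two-table schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# A join combined with a subquery: customers whose total order amount
# exceeds the average per-customer total.
rows = cur.execute("""
SELECT c.name, SUM(o.amount) AS total
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.id
HAVING total > (SELECT AVG(t)
                FROM (SELECT SUM(amount) AS t
                      FROM orders GROUP BY customer_id))
""").fetchall()
print(rows)  # → [('Acme', 350.0)]
```

The same join/subquery shape carries over to Oracle SQL; only the DDL and connection boilerplate differ.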

Technical / Functional Skills:

We are looking for a strong hands-on AWS expert on the data side of AWS Cloud with the following skills:

Experience in Linux shell scripting and AWS CLI commands; well versed in AWS services.
Experienced in writing Lambda functions and unit test scripts for Lambda using Python; Python is a must.
Experience writing PySpark code for Glue jobs; Spark experience is required.
Work mainly with the AWS services S3, Lambda, Athena, and Glue. Experience with Glue/EMR is required (Spark experience transfers well, as the two are similar services), as is Athena table creation (Hive/HQL knowledge transfers well).
Basic understanding of IAM roles, as they are used in Glue jobs, Lambda functions, etc. Work with the CloudOps team to get IAM roles created and automated; providing them with the requirements calls for a good understanding of IAM.
Work with other teams such as CloudOps and DevOps for code deployments, roles, etc. Understanding of Jenkins and CI/CD automation is a must.
Experienced working with Redshift and DMS. Experience with AWS scheduling mechanisms, preferably including the ESP scheduler.
Strong SQL/PL-SQL knowledge.
Experienced in handling application integration using AWS-native services.
Experience working in agile methodology; should be a good team player and contribute to design and best practices.
Should have worked in an onsite/offshore model.
Prior Java-based experience is a plus.
AWS certification is a big plus.
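As a minimal sketch of the Lambda-plus-unit-test expectation above: the handler name follows the Lambda convention, but the event shape (an S3 put notification) and the test case are illustrative. A real handler would typically use boto3, which is omitted here so the example stays self-contained:

```python
import json
import unittest


def lambda_handler(event, context):
    """Hypothetical handler: extract the bucket and key from an S3 event."""
    record = event["Records"][0]["s3"]
    return {
        "statusCode": 200,
        "body": json.dumps({
            "bucket": record["bucket"]["name"],
            "key": record["object"]["key"],
        }),
    }


class TestLambdaHandler(unittest.TestCase):
    """Unit test exercising the handler with a hand-built S3 event."""

    def test_extracts_bucket_and_key(self):
        event = {"Records": [{"s3": {
            "bucket": {"name": "my-bucket"},
            "object": {"key": "data/file.csv"},
        }}]}
        result = lambda_handler(event, None)
        self.assertEqual(result["statusCode"], 200)
        body = json.loads(result["body"])
        self.assertEqual(body["bucket"], "my-bucket")
        self.assertEqual(body["key"], "data/file.csv")
```

Because the handler is a plain function, it can be unit-tested locally with a synthetic event dict, with no AWS calls involved.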

With Regards,

Gaurav Sharma | Sr. Technical Recruiter

Phone: 614-495-9222 Ext.- 854

Direct/Cell: 251 744-7164

Email: [email protected]
[email protected]
Wed Oct 12 02:05:00 UTC 2022



