
Need: Data Engineer - AWS + Snowflake, Hive, Spark position in Beaverton, OR (on-site), USA
Email: [email protected]
Hi,

Please go through the job description below and share suitable resumes with me at: [email protected]

Data Engineer - AWS+ Snowflake, Hive, Spark

Beaverton, OR (On-site)

Long Term Contract

Need: resumes with 10+ years of experience, including a LinkedIn ID.

Job Description:

Responsibilities: (The primary tasks, functions, and deliverables of the role)

Design and build reusable components, frameworks, and libraries at scale to support analytics products.

Design and implement product features in collaboration with business and technology stakeholders.

Identify and solve issues concerning data management to improve data quality.

Clean, prepare and optimize data for ingestion and consumption.

Collaborate on the implementation of new data management projects and the restructuring of the current data architecture.

Implement automated workflows and routines using workflow scheduling tools.

Build continuous integration, test-driven development, and production deployment frameworks.

Analyze and profile data for designing scalable solutions.

Troubleshoot data issues and perform root cause analysis to proactively resolve product and operational issues.

Requirements:

Strong understanding of data structures and algorithms

Strong understanding of solution and technical design

Has a strong problem-solving and analytical mindset

Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders

Able to quickly pick up new programming languages, technologies, and frameworks.

Experience building scalable, real-time, high-performance data lake solutions in the cloud.

Fair understanding of developing complex data solutions

Experience working on end-to-end solution design.

Willing to learn new skills and technologies.

Has a passion for data solutions.

Required and Preferred Skill Sets:

Hands-on experience with AWS - EMR [Hive, PySpark], S3, and Athena, or an equivalent cloud platform

Familiarity with Spark Structured Streaming

Working experience with the Hadoop stack, dealing with huge volumes of data in a scalable fashion.

Hands-on experience with SQL, ETL, data transformation, and analytics functions

Hands-on Python experience, including batch scripting, data manipulation, and distributable packages.

Experience working with batch orchestration tools such as Apache Airflow or equivalent, preferably Airflow.

Experience working with code versioning tools such as GitHub or Bitbucket; expert-level understanding of repo design and best practices

Familiarity with deployment automation tools such as Jenkins

Hands-on experience designing and building ETL pipelines; expert with data ingestion, change data capture, and data quality; hands-on experience with API development.

Experience designing and developing relational database objects; knowledgeable about logical and physical data modeling concepts; some experience with Snowflake

Familiarity with Tableau or Cognos use cases

Familiarity with Agile; working experience preferred.

Thanks & Regards

Venkat

E-Mail: [email protected]
Website: http://sourcemantra.com/

A 295, Durham Avenue, Suite #201, South Plainfield, New Jersey 07080, USA
