
Hot C2C opening for Sr. Data Engineer (Remote) role, USA
From: Gobi, PiplNow
Email: [email protected]
Reply to: [email protected]

Hi,

Hope you are doing well.

I have an urgent C2C opening for a Sr. Data Engineer (Remote) role.

Our client is looking to fill this role immediately.

Please share your updated resume, completed consultant details, completed skill matrix, visa copy, and DL copy ASAP.

Skill Matrix (Skills / Years of experience):

- Overall experience
- Total years of work experience in the US
- As Data Engineer
- Python
- SQL (expert level)
- Spark
- Hadoop
- Airflow
- Scala
- Kafka
- Distributed storage systems (e.g., HDFS, S3)
- Luigi, Oozie, AWS Glue
- Relational databases (e.g., PostgreSQL, MySQL)
- Columnar databases (e.g., Redshift, BigQuery, HBase, ClickHouse)

Consultant Details (Criteria / Consultant's Data):

- Full Name
- Primary Phone
- Primary Email
- Education Details: Graduation
- Education Details: Masters
- Certifications, if any
- Passport Number
- LinkedIn Profile
- US work authorization and expiration
- Expected pay rate on C2C
- Current Company Name
- Current location (City/State)
- Willing to relocate (Yes/No)
- Availability to join new project / notice period
- Have you ever worked for or interviewed with this client in the past?
- If yes, as a consultant or as an employee?
- Last 5 digits of Social Security Number
- Birth month and day (NOT year)

Senior Data Engineer

Remote

Skill Sets:
- Python, SQL (expert level), Spark and Scala, Airflow

Expertise:

- 5-9+ years of relevant industry experience with a BS/Master's, or 2+ years with a PhD

- Experience with distributed processing technologies and frameworks, such as Hadoop, Spark, Kafka, and distributed storage systems (e.g., HDFS, S3)

- Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions

- Expertise with ETL schedulers such as Apache Airflow, Luigi, Oozie, AWS Glue, or similar frameworks

- Solid understanding of data warehousing concepts and hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and columnar databases (e.g., Redshift, BigQuery, HBase, ClickHouse)

- Excellent written and verbal communication skills
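For candidates brushing up on the ETL-scheduler skills listed above: tools like Airflow, Luigi, and Oozie all reduce to running tasks in dependency order. A minimal sketch of that idea in plain Python follows; the task names are hypothetical, and a real Airflow DAG would declare these dependencies with operators rather than a dict.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ETL tasks mapped to their upstream dependencies,
# mirroring how a scheduler DAG wires tasks together.
dag = {
    "extract_orders": set(),
    "extract_users": set(),
    "transform_join": {"extract_orders", "extract_users"},
    "load_warehouse": {"transform_join"},
}

def run_pipeline(dag):
    """Execute tasks in a valid dependency order, as a scheduler would."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        print(f"running {task}")  # a real scheduler dispatches to a worker
    return order

order = run_pipeline(dag)
```

The dict maps each task to its predecessors, so the join only runs after both extracts complete and the load runs last, which is the core guarantee every ETL scheduler provides.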

A Typical Day:

- Design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, financial details, and external data feeds.

- Develop data models that enable efficient analysis and manipulation of data for merchandising optimization; ensure data quality, consistency, and accuracy.

- Build scalable data pipelines (Spark SQL & Scala) leveraging the Airflow scheduler/executor framework.

- Collaborate with cross-functional teams, including Data Scientists, Product Managers, and Software Engineers, to define data requirements and deliver data solutions that drive merchandising and sales improvements.

- Contribute to the broader Data Engineering community to influence tooling and standards that improve culture and productivity.

- Improve code and data quality by leveraging and contributing to internal tools that automatically detect and mitigate issues.
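The pipeline and data-quality duties above can be sketched in a few lines of Python, with stdlib sqlite3 standing in for a real warehouse; the table and column names here are made up for illustration, not taken from the client's stack.

```python
import sqlite3

# In-memory SQLite stands in for a warehouse table; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?)",
    [(1, 19.99), (2, None), (3, 5.00), (2, None)],  # one null, one duplicate
)

# Transform step: keep only clean, deduplicated rows.
conn.execute(
    """
    CREATE TABLE clean_orders AS
    SELECT DISTINCT order_id, amount
    FROM raw_orders
    WHERE amount IS NOT NULL
    """
)

# Data-quality check: fail the run if any nulls slipped through.
nulls = conn.execute(
    "SELECT COUNT(*) FROM clean_orders WHERE amount IS NULL"
).fetchone()[0]
assert nulls == 0, "data-quality check failed"

rows = conn.execute("SELECT COUNT(*) FROM clean_orders").fetchone()[0]
```

The shape is the same at scale: an extract table, a SQL transform that enforces the data contract, and an automated check that aborts the run instead of loading bad rows downstream.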

Thu Feb 22 01:15:00 UTC 2024