
100% Interview || Data Engineer - Databricks, Spark || USA, PA at Remote, Remote, USA
Email: [email protected]
From:
Aryan kashyap,
Applab Systems
[email protected]
Reply to:   [email protected]

Hi there,

I hope you are doing well.

My name is Aryan, and I'm a Sr. IT Recruiter at AppLab Systems. We have a very urgent opening with one of our clients. If you have a consultant who fits the requirement below, please share their profile with me.

Position:   Data Engineer - Databricks, Spark

Location:   USA, PA (initially remote)
Duration:   Contract to Hire

Experience: 10+ years (minimum)

Mandatory Skills: Databricks, Spark, Snowflake Cloud Data Platform, Data Vault 2.0 model, Enterprise Data Integrations, AWS Cloud architecture, event and messaging patterns, streaming data, AWS, Kafka.

Role & Responsibilities: 

Data Engineers will be responsible for designing, building, and maintaining data pipelines, ensuring data quality, efficient processing, and timely delivery of accurate and trusted data.
The ability to design, implement and optimize large-scale data and analytics solutions on Databricks, Spark, Snowflake Cloud Data Warehouse is essential.
Ensure performance, security, and availability of the data warehouse.
Establish ongoing end-to-end monitoring for the data pipelines.
Strong understanding of the full CI/CD lifecycle.

Must Haves:

2+ years of recent experience with Databricks / Spark / Snowflake and a total of 6+ years in data engineering roles.
Designing and implementing highly performant data ingestion pipelines from multiple sources using Spark and Databricks.
Extensive working knowledge of Spark and Databricks
Demonstrable experience designing and implementing modern data warehouse/data lake solutions with an understanding of best practices.
Hands-on development experience with the Snowflake data platform, including Snowpipe, SnowSQL, tasks, stored procedures, streams, resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, cloning, time travel, and data sharing, with an understanding of how to use these features.
Advanced proficiency in writing complex SQL statements and manipulating large structured and semi-structured datasets.
Data loading/unloading and data sharing.
Strong hands-on experience with SnowSQL queries, script preparation, stored procedures, and performance tuning.
Knowledge of Snowpipe implementation.
Create Spark jobs for data transformation and aggregation
Produce unit tests for Spark transformations and helper methods
Security design and implementation on Databricks
Build processes supporting data transformation, data structures, metadata, dependency and workload management.
A successful history of manipulating, processing and extracting value from large disconnected datasets.
Working knowledge of message queuing, stream processing, and scalable 'big data' data stores.
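To illustrate the "unit tests for Spark transformations and helper methods" requirement above, here is a minimal sketch (all function names and fields are hypothetical, not from the job description). In a real pipeline the helper would be applied inside a Spark job, e.g. via a UDF or a DataFrame transformation; factoring the logic into a pure-Python function keeps it easy to unit test without a cluster:

```python
# Hypothetical helper of the kind a Spark job might apply per record
# (e.g. wrapped in a UDF), plus plain unit-test-style assertions.

def normalize_record(record: dict) -> dict:
    """Trim string fields and coerce the 'amount' field to float."""
    out = {}
    for key, value in record.items():
        out[key] = value.strip() if isinstance(value, str) else value
    out["amount"] = float(out.get("amount", 0) or 0)
    return out

def total_amount(records: list) -> float:
    """Aggregation helper: sum of normalized amounts."""
    return sum(normalize_record(r)["amount"] for r in records)

# Unit tests for the helper methods
assert normalize_record({"name": " alice ", "amount": "10.5"}) == {"name": "alice", "amount": 10.5}
assert total_amount([{"amount": "1"}, {"amount": 2}, {"amount": None}]) == 3.0
```

The same assertions can live in a pytest suite, while an integration test runs the wrapped transformation against a local SparkSession.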

Good to Have:

Valid professional certification
Experience in Python/PySpark/Scala/Hive programming.
Confidence and agility in challenging times
Ability to work collaboratively with cross-functional teams in a fast-paced, team environment.

Thanks & Regards

Aryan Kashyap

Sr. Technical Recruiter

Office:   (609) 629-2043

[email protected]

4365 Route 1 South, Suite 105

Princeton, NJ 08540

LinkedIn: https://www.linkedin.com/in/aryan-kashyap-015241157/

www.applabsystems.com

Tue Nov 29 00:11:00 UTC 2022
