
Azure Databricks Architect with PySpark and Snowflake exp at Iselin, New Jersey, USA
Email: [email protected]
From:

Sakchi Kumari,

Nitya Software Solutions

[email protected]

Reply to:   [email protected]

Hi,

I hope you are doing well!

This is Sakchi from Nitya Software Solutions, and we have an urgent opening for the role below. If you are interested, please send your updated resume to [email protected].

Role: Azure Databricks Architect with PySpark and Snowflake experience

Location: Iselin NJ / New York 10020 (100% onsite from Day 1)

Duration: Long-term contract

Job Description:

Minimum experience level: 15+ years

Skills:

10+ years - Enterprise Data Management

10+ years - SQL Server-based development of large datasets

5+ years with Data Warehouse architecture and hands-on experience with the Databricks platform; extensive experience in PySpark coding and Snowflake

3+ years Python (NumPy, Pandas) coding experience

Experience in Data warehousing - OLTP, OLAP, Dimensions, Facts, and Data modeling

Good knowledge of Azure Cloud and services such as ADF, Active Directory, App Services, ADLS, etc.

Hands-on experience with CI/CD pipeline implementations

Previous experience leading an enterprise-wide Cloud Data Platform migration with strong architectural and design skills

Experience with Snowflake utilities such as SnowSQL and Snowpipe (good to have)

Capable of discussing enterprise-level services independent of technology stack

Experience with cloud-based data architectures, messaging, and analytics

Superior communication skills

Cloud certification(s)

Any experience with reporting is a plus

Excellent written and verbal communication, intellectual curiosity, a passion for understanding and solving problems, and a consulting and customer-service orientation

Structured and conceptual mindset coupled with strong quantitative and analytical problem-solving aptitude

Exceptional interpersonal and collaboration skills within a team environment   

Responsibilities:

Design, develop, and deploy Databricks jobs to process and analyze large volumes of data.             

Collaborate with data engineers and data scientists to understand data requirements and implement appropriate data processing pipelines.    

Optimize Databricks jobs for performance and scalability to handle big data workloads.   

Implement best practices for data management, security, and governance within the Databricks environment.

Experience designing and developing Enterprise Data Warehouse solutions.

Demonstrated proficiency with Data Analytics, Data Insights

Proficient in writing SQL queries and programming, including stored procedures and reverse engineering existing processes

Leverage SQL, a programming language (Python or similar), and/or ETL tools (Azure Data Factory, Databricks, Talend, and SnowSQL) to develop data pipeline solutions to ingest and exploit new and existing data sources.

Perform code reviews to ensure fit to requirements, optimal execution patterns and adherence to established standards.


Education:

Minimum of a Bachelor's degree in an Engineering and/or Computer Science discipline

Master's degree preferred

Keywords: continuous integration continuous deployment New Jersey
[email protected]
View all
Mon Feb 05 21:31:00 UTC 2024



