
Position: Sr. Azure Data Engineer (Remote, USA)
Email: [email protected]
Hello Associate,

Hope you are doing well.

We have the below requirement open. Please send your genuine candidate's profile to my email ID: [email protected]

Position: Sr. Azure Data Engineer

Location: Remote

Duration: Long Term

Visa: EAD, GC, USC

Note: Candidates with past UHG experience will be preferred.

Job Description:

Primary Responsibilities:

Create and maintain data pipelines using Azure and Snowflake as the primary tools

Create SQL stored procedures and macros to perform complex transformations

Create logical and physical data models to ensure data integrity is maintained

Create and automate CI/CD pipelines using Git and GitHub Actions

Tune and optimize data processes

Design and build best-in-class processes to clean and standardize data

Deploy code to the production environment and troubleshoot production data issues

Model high-volume datasets to maximize performance for our BI and Data Science teams

Create Docker images for various applications and deploy them on Kubernetes

Required Qualifications:

Bachelor's degree in Computer Science or similar

Minimum 3-6 years of industry experience as a hands-on data engineer

Excellent communication skills

Excellent knowledge of SQL and Python

Excellent knowledge of Azure services such as Blobs, Functions, Azure Data Factory, Service Principal, Containers, Key Vault, etc.

Excellent knowledge of Snowflake: architecture and best practices

Excellent knowledge of data warehousing and BI solutions

Excellent knowledge of change data capture (CDC), ETL, ELT, SCD, etc.

Knowledge of CI/CD pipelines using Git and GitHub Actions

Knowledge of different data modelling techniques such as star schema, dimensional models, and Data Vault

Hands-on experience with the following technologies:

o Developing data pipelines in Azure and Snowflake

o Writing complex SQL queries

o Building ETL/ELT data pipelines using SCD logic

o Kubernetes and Linux containers (e.g., Docker)

o Related/complementary open-source software platforms and languages (e.g., Scala, Python, Java, Linux)

Previous experience with relational (RDBMS) and non-relational databases

Analytical and problem-solving experience applied to big data datasets

Good understanding of access control and data masking

Experience working on projects with agile/scrum methodologies and high-performing teams

Exposure to DevOps methodology

Data warehousing principles, architecture, and their implementation in large environments

Very good understanding of integration with Tableau

Preferred Qualifications:

Design and build data pipelines (in Spark) to process terabytes of data

Very good understanding of Snowflake integration with data visualization tools such as Tableau

Orchestrate data tasks in Airflow to run on Kubernetes/Hadoop for the ingestion, processing, and cleaning of data

Terraform knowledge and automation

Create real-time analytics pipelines using Kafka / Spark Streaming

Work on proofs of concept for Big Data and Data Science

Understanding of United States healthcare data

--

Wed Mar 13 20:06:00 UTC 2024
