
Job opening for Senior Data Engineer (Remote, USA)
Email: [email protected]

From: Sushmita Soni, SoniTalent ([email protected])

Reply to: [email protected]

Hi,

I hope you are doing well.

We are looking for a Senior Data Engineer. Please let me know if you are interested in this role, and send me your updated resume.

Job Description

Job Title: Senior Data Engineer

Work Location: Remote

Duration: Contract

Note: Need LinkedIn ID.

Description:

Do you want to impact decisions that help people live healthier lives? Are you an innovative Data Engineer who thrives on developing new solutions to solve tough challenges?

We want to achieve more in our mission of health care, and so we have to be really smart about the business of health care. At Optum, we're changing the landscape of our industry. You will put your Data Engineering skills to work as you empower business partners and team members to improve healthcare delivery. You will research cutting-edge big data tools and design innovative solutions to solve business problems that only a Data Engineer can. You'll be in the driver's seat on vital projects that have strategic importance to our mission of helping people live healthier lives. Yes, we share a mission that inspires. And we need your organizational talents and business discipline to help fuel that mission.

You will be part of a team focused on building a cutting-edge data analytics platform to support reporting requirements for the business. As a Senior Data Engineer, you will be responsible for developing complex data sources and pipelines into our data platform (i.e., Snowflake) along with other data applications (e.g., Azure, Airflow) and automation.

This is a fully remote role based in the United States. Your counterpart team is located in our Dublin, Ireland office. While there is no requirement to work shift hours, occasional calls with the Dublin team may require some flexibility in working hours.

Primary Responsibilities:

Create & maintain data pipelines using Azure & Snowflake as primary tools

Create SQL stored procedures and macros to perform complex transformations

Create logical & physical data models to ensure data integrity is maintained

CI/CD pipeline creation & automation using Git & GitHub Actions

Tuning and optimizing data processes

Design and build best-in-class processes to clean and standardize data

Deploy code to the production environment and troubleshoot production data issues

Model high-volume datasets to maximize performance for our BI & Data Science teams

Create Docker images for various applications and deploy them on Kubernetes

Qualifications:

Bachelor's degree in Computer Science or a related field

Minimum 3-6 years of industry experience as a hands-on Data Engineer

Excellent communication skills

Excellent knowledge of SQL, Python

Excellent knowledge of Azure Services such as Blobs, Functions, Azure Data Factory, Service Principal, Containers, Key Vault etc.

Excellent knowledge of Snowflake - Architecture, best practices

Excellent knowledge of Data warehousing & BI Solutions

Excellent knowledge of change data capture (CDC), ETL, ELT, SCD, etc.

Knowledge of CI/CD pipelines using Git & GitHub Actions

Knowledge of different data modelling techniques such as Star Schema, Dimensional models, and Data Vault

Hands-on experience with the following technologies:

Developing data pipelines in Azure & Snowflake

Writing complex SQL queries

Building ETL/ELT/data pipelines using SCD logic

Exposure to Kubernetes and Linux containers (e.g., Docker)

Related/complementary open-source software platforms and languages (e.g. Scala, Python, Java, Linux)

Previous experience with relational (RDBMS) and non-relational databases

Analytical and problem-solving experience applied to Big Data datasets

Good understanding of Access control and Data masking

Experience working in projects with agile/scrum methodologies and high performing team(s)

Exposure to DevOps methodology

Data warehousing principles, architecture and its implementation in large environments

Very good understanding of integration with Tableau

Preferred Qualifications:

Design and build data pipelines (in Spark) to process terabytes of data

Very good understanding of Snowflake integration with data visualization tools such as Tableau

Orchestrate data tasks in Airflow to run on Kubernetes/Hadoop for the ingestion, processing, and cleaning of data

Terraform knowledge and automation

Create real-time analytics pipelines using Kafka / Spark Streaming

Work on Proof of Concepts for Big Data and Data Science

Understanding of United States Healthcare data

Thanks & Regards

Sushmita Soni

Sr. Technical Recruiter | SoniTalent Corp.

Desk: 859-659-1004, ext. 201

[email protected]

Address

5404 Merribrook Lane, Prospect, KY, USA
