Urgent opening for Software Engineer (Mason, Hybrid) at Mason, Wisconsin, USA
Email: [email protected]
Hi Team,

Greetings!!

We have an urgent opening for Software Engineer (Mason, Hybrid).

Job Title: Software Engineer

Client: UST/Elevance Health

Location: Mason (Hybrid)

Experience: 15 Years

Job Responsibilities

The Software Delivery Data Engineering team is focused on improving build data transparency, enabling data-driven decisions across Software Engineering. These cross-functional collaborations drive actions to ensure software releases ship on time with quality features. We collaborate with cross-functional teams to build large-scale data pipelines, drive both descriptive and predictive analysis, and visualize the data via various tools, frameworks, and services. We're looking for a senior data engineer with extensive working experience delivering big data platforms, large-scale data pipelines, or streaming systems in the public cloud. You'll be working on a unique and challenging big data ecosystem, building scalable data pipelines that process, clean, and validate the integrity of the data from raw sources based on engineering specifications and business intelligence for analytics use.

Key Qualifications

- 6+ years of experience architecting, designing, and developing large-scale data solutions.
- Deep understanding of, and strong development experience with, distributed data processing frameworks such as Hadoop, Spark, and others.
- 6+ years of experience building and maintaining large-scale ETL/ELT pipelines (batch and/or streaming) that are optimized for performance and can handle data from various sources, structured or unstructured.
- Proficiency in various data modeling techniques, such as ER, Hierarchical, Relational, or NoSQL modeling.
- Excellent design and development experience with SQL and NoSQL databases, OLTP and OLAP databases.
- Expertise in Python, Unix shell scripting, and dependency-driven job schedulers.

Description

In this role, you will collaborate with data scientists, data analysts, software developers, data engineers, and project managers to understand requirements and translate them into scalable, reliable, and efficient data pipelines, data processing workflows, and machine learning pipelines. You will be responsible for architecting and implementing large-scale systems and data pipelines with a focus on agility, interoperability, simplicity, and reusability. You should have deep knowledge of infrastructure, warehousing, data protection, security, data collection, processing, modeling, and metadata management, and be able to build end-to-end solutions that also support metadata logging, anomaly detection, data cleaning, transformation, etc. The ideal candidate is a highly motivated, collaborative, and proactive individual who communicates effectively and can adapt and learn quickly.

Education & Experience

Bachelor's or Master's degree in Computer Science, Information Systems, Software Engineering, Data Science, or a related field.

Additional Requirements

- Capacity to translate business requirements into technical solutions.
- Experience writing and maintaining high-quality code using standard methodologies such as code reviews, unit testing, and continuous integration.
- Staying up to date with the latest trends and technologies in data infrastructure, architecture, and big data analytics, and applying them to improve the system.
- Familiarity with related fields, such as data science, machine learning, and artificial intelligence, to design solutions that can accommodate advanced analytics.
- Ability to identify and address issues in data design or integration.
- Collaborative mindset to work with various teams, including data engineers, data analysts, and cross-functional partner teams.
- Good time management skills and the ability to deliver incrementally to tight schedules.


Ruchi Verma
Professional Recruiter
Email: [email protected]
Direct: 248-473-3018
Desk: 248-473-0720 Ext: 176
LinkedIn: linkedin.com/in/ruchi-verma-a28b4921a
24543 Indoplex Circle, Suite 220,
Farmington Hills, MI 48335

Tue Jun 18 00:55:00 UTC 2024
