
Data System Engineer, Alpharetta, GA - Hybrid. Final interview is onsite. Must be local to GA; no H1B or CPT.
Email: [email protected]
From:

Sapna Thakur,

SibiTalent Corp

[email protected]

Reply to: [email protected]

Data System Engineer
Final interview is onsite

Alpharetta, GA - Hybrid

Must be local to GA; no H1B/CPT

3 days onsite / 2 days remote per week

Duration :: 12 months

Experience :: 9 + years

Note
Final interview is onsite

JD:

The Data System Engineer will be responsible for tasks such as data engineering, data modeling, ETL processes, data warehousing, and data analytics & science. Our platforms run both on premises and in the cloud (AWS/Azure).

Knowledge/Skills:

Able to establish, modify or maintain data structures and associated components according to design

Understands and documents business data requirements

Able to develop conceptual and logical data models at the enterprise and business unit/domain levels

Understands XML/JSON and schema development/reuse, database concepts, database designs, Open Source and NoSQL concepts

Partners with Sr. Data Engineers and Sr. Data architects to create platform level data models and database designs

Takes part in reviews of own work and reviews of colleagues' work

Has working knowledge of the core tools used in the planning, analyzing, designing, building, testing, configuring and maintaining of assigned application(s)

Able to participate in the assigned team's software delivery methodology (Agile, Scrum, Test-Driven Development, Waterfall, etc.) in support of data engineering pipeline development

Understands infrastructure technologies and components like servers, databases, and networking concepts

Writes code to develop, maintain, and optimize batch and event-driven pipelines for storing, managing, and analyzing large volumes of both structured and unstructured data

Metadata integration in data pipelines

Automate build and deployment processes using Jenkins across all environments to enable faster, high-quality releases

Qualification:

Up to 4 years of software development experience in a professional environment and/or comparable experience such as:

Understanding of Agile or other rapid application development methods

Exposure to design and development across one or more database management systems (DB2, SybaseIQ, Snowflake) as appropriate

Exposure to methods relating to application and database design, development, and automated testing

Understanding of big data technology and NOSQL design and development with variety of data stores (document, column family, graph, etc.)

General knowledge of distributed (multi-tiered) systems, algorithms, and relational & non-relational databases

Experience with Linux and Python scripting as well as large-scale data processing technology such as Spark

Exposure to Big data technology and NOSQL design and coding with variety of data stores (document, column family, graph, etc.)

Experience with cloud technologies such as AWS and Azure, including deployment, management, and optimization of data analytics & science pipelines

Nice to have: Collibra, Terraform, Java, Golang, Ruby, Machine Learning Operations (MLOps) deployment

Bachelor's degree in computer science, computer science engineering, or a related field required

MANAGER NOTES

Streaming data, batch data; manages the framework for machine learning for ETS

Hiring for a Data Systems Engineer; will work on the DevOps/Cloud side

Making sure that the pipelines and the code they have are correct

They will work on data movement, which could be batch or streaming, on the cloud


Exposure to design and development. When they do data movement or data hydration, they work with high-volume data: DB2, Sybase, Snowflake

What's hydration? Moving data from a source system to a data lake; it'll be used to move terabytes of data
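The hydration step described above (moving tables from a relational source into a partitioned data lake layout) can be sketched in plain Python. This is a minimal illustration, not the team's actual stack: it uses an in-memory sqlite3 table as the source and a local directory of CSV partitions standing in for S3/Glue, and the `hydrate` helper, its parameters, and the `orders` table are all hypothetical.

```python
import csv
import sqlite3
import tempfile
from pathlib import Path

def hydrate(source, lake_root, table, partition_col, chunk_size=1000):
    """Copy one table from a relational source into a partitioned file
    layout (a stand-in for a data lake). Hypothetical helper for
    illustration only; a real job would use Spark or AWS Glue."""
    cur = source.execute(f"SELECT * FROM {table}")
    cols = [d[0] for d in cur.description]
    part_idx = cols.index(partition_col)
    written = 0
    while True:
        rows = cur.fetchmany(chunk_size)  # read in batches, not all at once
        if not rows:
            break
        for row in rows:
            # one directory per partition value, e.g. orders/region=east/
            part_dir = lake_root / table / f"{partition_col}={row[part_idx]}"
            part_dir.mkdir(parents=True, exist_ok=True)
            out = part_dir / "part-000.csv"
            is_new = not out.exists()
            with out.open("a", newline="") as f:
                writer = csv.writer(f)
                if is_new:
                    writer.writerow(cols)  # header once per partition file
                writer.writerow(row)
            written += 1
    return written

# Usage: hydrate a small in-memory "orders" table into a temp "lake".
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "east", 10.0), (2, "west", 20.0), (3, "east", 5.0)])
lake = Path(tempfile.mkdtemp())
n = hydrate(src, lake, "orders", "region")
print(n)  # 3 rows written
print(sorted(p.name for p in (lake / "orders").iterdir()))
# ['region=east', 'region=west']
```

At terabyte scale the same chunked-read, partitioned-write pattern would be done with Spark or Glue rather than row-at-a-time CSV appends.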

Which cloud do you prefer most? Right now the platform is on AWS, but they will be moving to Azure

What would be the 3 top skills/forte? 1) Python, Spark, shell scripting; 2) platform in Kafka and ELK/Elasticsearch; 3) on-prem data lake, using Glue; the machine learning part uses Glue to move data

Previous experience / what would be an appealing resource? Data engineering moving large amounts of data using Python, with DevOps experience working with Jenkins, creating pipelines and moving the data

NoSQL required? They would prefer it; it's a concept that can be taught

ETL tools, Java, Golang, etc.? Good to have; this team doesn't do Golang or Ruby

Metadata experience will be enough

DB systems (DB2, SybaseIQ, Snowflake)? Snowflake will be helpful since it's their destination database

DevOps (CI/CD, etc.)? Yes; their team is a liaison to another team, either a Jenkins-related team or another team

Certifications on AWS or Azure? An Azure certification would be preferred; certifications are a plus

Thanks & Regards

Sapna Thakur | Sr. Technical Recruiter

Mon Oct 14 22:01:00 UTC 2024



