Data System Engineer, Final interview is onsite, Alpharetta, GA - Hybrid at Alpharetta, Georgia, USA
Email: [email protected]
Hi,

Please find the job description below and share a suitable profile with me for this role. Genuine visa only: GC, USC, H4 EAD, GC-EAD, or L2 EAD, and the candidate must be local to Georgia.

Role: Data System Engineer
Location: Alpharetta, GA - Hybrid (3 days onsite / 2 days remote per week) (NO RELOCATION)
Duration: 12 months
Experience: 9+ years
Note: Final interview is onsite.

JD:
The Data System Engineer will be responsible for tasks such as data engineering, data modeling, ETL processes, data warehousing, and data analytics & science. Our platforms run both on premises and in the cloud (AWS/Azure).

Knowledge/Skills:
- Able to establish, modify, or maintain data structures and associated components according to design
- Understands and documents business data requirements
- Able to produce conceptual and logical data models at the enterprise and business unit/domain level
- Understands XML/JSON and schema development/reuse, database concepts, database design, open source, and NoSQL concepts
- Partners with Sr. Data Engineers and Sr. Data Architects to create platform-level data models and database designs
- Takes part in reviews of own work and reviews of colleagues' work
- Has working knowledge of the core tools used in planning, analyzing, designing, building, testing, configuring, and maintaining the assigned application(s)
- Able to participate in the assigned team's software delivery methodology (Agile, Scrum, Test-Driven Development, Waterfall, etc.) in support of data engineering pipeline development
- Understands infrastructure technologies and components such as servers, databases, and networking concepts
- Writes code to develop, maintain, and optimize batch and event-driven pipelines for storing, managing, and analyzing large volumes of structured and unstructured data, including metadata integration in data pipelines
- Automates build and deployment processes using Jenkins across all environments to enable faster, high-quality releases

Qualification:
Up to 4 years of software development experience in a professional environment and/or comparable experience, such as:
- Understanding of Agile or other rapid application development methods
- Exposure to design and development across one or more database management systems (DB2, SybaseIQ, Snowflake) as appropriate
- Exposure to methods relating to application and database design, development, and automated testing
- Understanding of big data technology and NoSQL design and development with a variety of data stores (document, column family, graph, etc.)
- General knowledge of distributed (multi-tiered) systems, algorithms, and relational & non-relational databases
- Experience with Linux and Python scripting, as well as large-scale data processing technology such as Spark
- Experience with cloud technologies such as AWS and Azure, including deployment, management, and optimization of data analytics & science pipelines

Nice to have: Collibra, Terraform, Java, Golang, Ruby, Machine Learning Operations deployment

Bachelor's degree in computer science, computer engineering, or a related field required.

MANAGER NOTES:
- Stream data, batch data; manages the framework for machine learning for ETS.
- Hiring for a Data Systems Engineer who will work on the DevOps/cloud side, making sure the pipelines and the code they have are correct.
- They will work on data movement, which could be batch or streaming, on the cloud; exposure to design and development.
- When they do data movement or data hydration, they work with high-volume data in DB2, Sybase, and Snowflake.
- What's hydration? Moving data from a source system to a data lake; it will be used to move terabytes of data.
- Which cloud do you prefer most? Right now the platform is in AWS, but they will be moving to Azure.
- What would be the top 3 skills/forte? 1) Python, Spark, shell scripting; 2) platform in Kafka and ELK/Elasticsearch; 3) on-prem data lake, using Glue for the machine learning part and to move data.
- Previous experience / what would be an appealing resource? Data engineering moving large amounts of data using Python, plus DevOps experience working with Jenkins, creating pipelines and moving the data.
- NoSQL required? They would prefer it; it's a concept that can be taught.
- ETL tools, Java, Golang, etc.? Good to have; this team doesn't do Golang and Ruby. Metadata experience will be enough.
- DB systems (DB2, SybaseIQ, Snowflake)? Snowflake will be helpful since it's their destination database.
- DevOps (CI/CD, etc.)? Yes, their team is a liaison to another team, either a Jenkins-related team or another team.
- Certifications on AWS or Azure? An Azure certification would be preferred; certifications are a plus.

Thanks & Regards,
Sapna Thakur | Sr. Technical Recruiter
E-Mail: [email protected]
Direct: +1 936-6361-013, EXT - 013
SibiTalent Corp. | 101 E. Park Blvd., Suite 600, Plano, TX 75074
Website: www.sibitalent.com
[email protected] View all |
Mon Oct 14 20:17:00 UTC 2024