Tomorrow Interview (Need local only) || Big Data Engineer with GCP, Spark, Apache Beam exp (MUST) || Alpharetta, GA (Hybrid)
Email: [email protected]
From: Rahul Kumar, SPAR Information Systems [email protected]
Reply to: [email protected]

Hello Folks,

(Must have GCP, Apache Spark, Apache Beam) AND need candidates local to Georgia only.

Hope you are all doing well. Please go through the job description below and let me know your interest.

Title: Big Data Engineer
Work Location: Alpharetta, GA (Day 1 onsite/Hybrid)
Duration: Long-Term Contract
Mandatory Skills: Big Data, GCP, Apache Spark, Apache Beam

Job Description:
- Extract, transform, and load data from multiple sources and multiple formats using Big Data technologies.
- Develop, enhance, and support data ingestion jobs from various source systems, following existing design patterns and using GCP services such as Apache Spark, Dataproc, Dataflow, BigQuery, and Airflow.
- Work across teams and with senior engineers to make data more accessible to others within the organization.
- Refactor data extraction pipelines into standardized approaches that are repeatable and reusable with minimal supervision from senior engineers.
- Automate manual processes, optimize data delivery, and re-design infrastructure for greater scalability.
- Work closely with senior engineers to optimize query and data access techniques.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure as code, etc.).
- Participate in a tight-knit engineering team employing agile software development practices.

What experience you need:
- Bachelor's degree in Computer Science, Systems Engineering, or equivalent experience.
- 5+ years of work experience as a Big Data Engineer.
- 3+ years of experience with technologies such as Apache Spark, Hive, HDFS, and Beam (optional).
- 3+ years of experience in SQL and Scala or Python.
- 2+ years of experience with software build management tools such as Maven or Gradle.
- 2+ years of experience working with cloud technologies such as GCP, AWS, or Azure.
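For candidates gauging fit, the extract/transform/load duties above can be reduced to a toy Python sketch. This is purely illustrative and uses only the standard library (the role itself uses Spark/Beam on GCP); all function and field names here are hypothetical, not from the job posting.

```python
# Toy ETL sketch (illustrative only): the extract -> transform -> load
# shape mirrors the pipeline work described in the job posting, with
# stdlib stand-ins for the GCP/Spark/Beam pieces.
import csv
import io
import json


def extract(jsonl_text):
    """Extract: parse one JSON record per line (stand-in for a source read)."""
    return [json.loads(line) for line in jsonl_text.strip().splitlines()]


def transform(records):
    """Transform: keep complete rows and normalize the amount field."""
    out = []
    for r in records:
        if "id" in r and "amount" in r:
            out.append({"id": r["id"], "amount": float(r["amount"])})
    return out


def load(records):
    """Load: serialize to CSV (stand-in for a BigQuery/warehouse write)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "amount"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()


raw = '{"id": 1, "amount": "10.5"}\n{"id": 2}\n{"id": 3, "amount": "7"}'
csv_out = load(transform(extract(raw)))
```

In a real Beam or Spark job each stage would be a pipeline step (e.g. a PTransform or DataFrame operation) rather than a plain function, but the staged structure is the same.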
What could set you apart:
- Data engineering using GCP technologies (BigQuery, Dataproc, Dataflow, Composer, Datastream, etc.).
- Experience writing data pipelines.
- Self-starter who identifies and responds to priority shifts with minimal supervision.
- Source code control management systems (e.g. SVN, Git, GitHub) and build tools such as Maven and Gradle.
- Agile environments (e.g. Scrum, XP).
- Relational databases (e.g. SQL Server, Oracle, MySQL).
- Atlassian tooling (e.g. JIRA, Confluence) and GitHub.

Thanks & Regards,
Rahul Kumar
Sr. Technical Recruiter
SPAR Information Systems (an E-Verify Company)
Email: [email protected]
Mon Apr 22 18:26:00 UTC 2024