Local to Atlanta, GA: Big Data Engineer, On-site Hybrid, at Atlanta, Georgia, USA
Email: [email protected]
Big Data Engineer (Databricks)

Location: Atlanta, GA Metropolitan Area (On-site)

Mode of Interview (MOI): Face-to-face interview

Visa: USC, GC, H1B, GC EAD

Need DL as well

Approximate Duration: 24+ Months Contract

Position requires on-site hybrid work

******LOCAL ONLY, NO RELOCATION AT ALL******
All candidates must be physically located in Atlanta, GA to be considered.

Our client is currently seeking an experienced Big Data Engineer for their Midtown office in Atlanta, GA. The successful candidate must have Big Data engineering experience and must demonstrate an affinity for working with others to create successful solutions. You will join a smart, highly skilled team with a passion for technology and work on the client's state-of-the-art Big Data platforms. The candidate must be a very good communicator, both written and verbal, and have experience working with business areas to translate their data needs and data questions into project requirements. The candidate will participate in all phases of the Data Engineering life cycle and will, both independently and collaboratively, write project requirements, architect solutions, perform data ingestion development, and provide support.

Skills and Experience:

1. Must have hands-on experience with Databricks

2. Must have hands-on experience with high-velocity, high-volume stream processing: Apache Kafka and Spark Streaming
   a. Experience with real-time data processing and streaming techniques using Spark Structured Streaming and Kafka
   b. Deep knowledge of troubleshooting and tuning Spark applications

3. Must have hands-on experience with Python and/or Scala, i.e., PySpark/Scala-Spark

4. Experience with traditional ETL tools and data modeling

5. Strong knowledge of messaging platforms like Kafka, Amazon MSK, and TIBCO EMS or IBM MQ Series

6. Experience with Databricks UI, Managing Databricks Notebooks, Delta Lake with Python, Delta Lake with Spark SQL, Delta Live Tables, Unity Catalog

7. Experience with data ingestion of different file formats such as JSON, XML, and CSV

8. Knowledge of Unix/Linux platform and shell scripting

9. Experience with cloud platforms, e.g., AWS, GCP

10. Experience with database solutions like Kudu/Impala, Delta Lake, Snowflake, or BigQuery

        6+ years of overall IT experience

        3+ years of experience with high-velocity, high-volume stream processing: Apache Kafka and Spark Streaming

        Experience with real-time data processing and streaming techniques using Spark structured streaming and Kafka

        Deep knowledge of troubleshooting and tuning Spark applications

        3+ years of experience with data ingestion from message queues (TIBCO, IBM, etc.) and different file formats, such as JSON, XML, and CSV, across different platforms

        3+ years of experience with Big Data tools/technologies like Hadoop, Spark, Spark SQL, Kafka, Sqoop, Hive, S3, or HDFS

        3+ years of experience building, testing, and optimizing Big Data data ingestion pipelines, architectures, and data sets

        2+ years of experience with Python (and/or Scala) and PySpark/Scala-Spark

        3+ years of experience with Cloud platforms e.g. AWS, GCP, etc.

        3+ years of experience with database solutions like Kudu/Impala, Delta Lake, Snowflake, or BigQuery

        2+ years of experience with NoSQL databases, including HBase and/or Cassandra

Experience successfully building and deploying a new data platform on Azure/AWS

Experience with Azure/AWS serverless technologies such as S3, Kinesis/MSK, Lambda, and Glue

Knowledge of Unix/Linux platform and shell scripting is a must

Strong analytical and problem-solving skills

Preferred (Not Required):

        Strong SQL skills with the ability to write queries of intermediate complexity

        Strong understanding of relational and dimensional modeling

        Experience with Git version control software

        Experience with REST API and Web Services

        Good business analysis and requirements gathering/writing skills

Education:
Bachelor's degree required, preferably in Information Systems, Computer Science, Computer Information Systems, or a related field

Regards,

Steve Williams

Technical Recruiter

Address:

25 Oak Tavern Cir, Branchburg, New Jersey 08876

Email Disclaimer:

This email and any attachments are confidential and intended solely for the recipient. If you received it by mistake, please notify the sender and delete it. The views expressed are solely those of the sender and not necessarily those of the company. We do not accept responsibility for any viruses transmitted. Email communication may be monitored.


Tue Jul 23 22:52:00 UTC 2024

