
Big Data Engineer (Databricks) : Atlanta, GA (95% remote) : Need local candidate in Atlanta, Georgia, USA
Email: [email protected]
From:

Deepali Jha,

SibiTalent

[email protected]

Reply to:   [email protected]

Hello,

Hope you are doing well!

My name is Deepali Jha and I am a Staffing Specialist at SibiTalent. I am reaching out to you about an exciting job opportunity with one of our clients.

Job Title: Big Data Engineer (Databricks)

Visa: No H1B/CPT/OPT

Location: Atlanta, GA. This position is 95% remote, but candidates must still come onsite for mandatory meetings and events.

Duration: 6+ months

Experience: 10+ years

Note: Local candidates only.

Job Description:

Notes:

Databricks, Apache Kafka, and Spark Streaming experience MUST appear on the resume.

Local candidates are always preferred over non-local candidates.

Big Data Engineer (Databricks) - various levels (Principal, Lead, Senior, Intermediate, Junior)

Our client is a Fortune 300 transportation company specializing in freight railroading. It operates approximately 21,000 route miles in 22 states and the District of Columbia, serves every major container port in the eastern United States, and provides efficient connections to other rail carriers. The client has the most extensive intermodal network in the East and is a major transporter of coal and industrial products.

Job Description

The client is currently seeking an experienced Big Data Engineer for its Midtown office in Atlanta, GA. The successful candidate must have Big Data engineering experience and must demonstrate an affinity for working with others to create successful solutions. You will join a smart, highly skilled team with a passion for technology and work on state-of-the-art Big Data platforms. The candidate must be a very good communicator, both written and verbal, and have experience working with business areas to translate their business data needs and data questions into project requirements. The candidate will participate in all phases of the Data Engineering life cycle and will, both independently and collaboratively, write project requirements, architect solutions, and perform data ingestion development and support duties.
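As a rough illustration of the kind of pipeline this role centers on (a minimal sketch, assuming a PySpark environment with the Kafka and Delta Lake connectors installed; the broker address, topic, schema, and storage paths below are hypothetical placeholders, not client specifics):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

# Hypothetical application name; a real deployment would configure the
# Kafka and Delta packages on the cluster.
spark = SparkSession.builder.appName("freight-events-ingest").getOrCreate()

# Illustrative event schema -- field names are assumptions.
schema = StructType([
    StructField("train_id", StringType()),
    StructField("status", StringType()),
])

# Read a high-velocity event stream from Kafka (broker/topic are placeholders).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "freight-events")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append micro-batches to a Delta Lake table, with checkpointing for recovery.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/freight-events")
    .outputMode("append")
    .start("/tmp/delta/freight_events")
)
query.awaitTermination()
```

Running this sketch requires a live Kafka broker and a Spark cluster with the Delta Lake package; it is shown only to indicate the shape of the structured-streaming work described below.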

Skills and Experience:

Required:

             6+ years of overall IT experience

             3+ years of experience with high-velocity high-volume stream processing: Apache Kafka and Spark Streaming

             Experience with real-time data processing and streaming techniques using Spark structured streaming and Kafka

             Deep knowledge of troubleshooting and tuning Spark applications

             3+ years of experience with data ingestion from message queues (TIBCO, IBM, etc.) and with different file formats such as JSON, XML, and CSV across different platforms

             3+ years of experience with Big Data tools/technologies like Hadoop, Spark, Spark SQL, Kafka, Sqoop, Hive, S3, or HDFS

             3+ years of experience building, testing, and optimizing Big Data data ingestion pipelines, architectures, and data sets

             2+ years of experience with Python (and/or Scala) and PySpark/Scala-Spark

             3+ years of experience with Cloud platforms e.g. AWS, GCP, etc.

             3+ years of experience with database solutions like Kudu/Impala, Delta Lake, Snowflake, or BigQuery

             2+ years of experience with NoSQL databases, including HBase and/or Cassandra

             Experience in successfully building and deploying a new data platform on Azure/AWS

             Experience with Azure/AWS serverless technologies, such as S3, Kinesis/MSK, Lambda, and Glue

             Strong knowledge of messaging platforms like Kafka, Amazon MSK, TIBCO EMS, or IBM MQ Series

             Experience with Databricks UI, Managing Databricks Notebooks, Delta Lake with Python, Delta Lake with Spark SQL, Delta Live Tables, Unity Catalog

             Knowledge of Unix/Linux platform and shell scripting is a must

             Strong analytical and problem-solving skills
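The ingestion requirements above span several file formats. As a minimal, format-agnostic sketch in plain Python (stdlib only, no Spark; the function and field names are illustrative, not part of the client's stack):

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def parse_records(payload: str, fmt: str) -> list[dict]:
    """Normalize a raw payload in JSON, CSV, or XML into a list of dicts."""
    if fmt == "json":
        data = json.loads(payload)
        # A single JSON object becomes a one-element list.
        return data if isinstance(data, list) else [data]
    if fmt == "csv":
        # First row is treated as the header.
        return list(csv.DictReader(io.StringIO(payload)))
    if fmt == "xml":
        # Each child of the root is one record; its children are fields.
        root = ET.fromstring(payload)
        return [{child.tag: child.text for child in rec} for rec in root]
    raise ValueError(f"unsupported format: {fmt}")
```

For example, `parse_records("id,name\n1,coal\n", "csv")` yields the same record shape as `parse_records('[{"id": "1", "name": "coal"}]', "json")`, which is the point: downstream pipeline stages see one uniform structure regardless of source format.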

Preferred (Not Required): 

             Strong SQL skills with the ability to write intermediate-complexity queries

             Strong understanding of Relational & Dimensional modeling 

             Experience with GIT code versioning software

             Experience with REST API and Web Services

             Good business analyst and requirements gathering/writing skills
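To illustrate the SQL requirement, here is an intermediate-complexity query (an aggregate with a HAVING filter) run against a hypothetical shipments table via Python's built-in sqlite3 module; the schema and data are invented for the example:

```python
import sqlite3

# Hypothetical schema and sample rows -- for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipments (id INTEGER PRIMARY KEY, origin TEXT, tons REAL);
    INSERT INTO shipments (origin, tons) VALUES
        ('Atlanta', 120.0), ('Atlanta', 80.0), ('Savannah', 50.0);
""")

# Aggregate tonnage per origin, keeping only origins above 100 tons.
rows = conn.execute("""
    SELECT origin, SUM(tons) AS total_tons
    FROM shipments
    GROUP BY origin
    HAVING SUM(tons) > 100
    ORDER BY total_tons DESC
""").fetchall()
print(rows)  # [('Atlanta', 200.0)]
```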

Education

             Bachelor's degree required, preferably in Information Systems, Computer Science, Computer Information Systems, or a related field

Thanks & Regards

Deepali Jha

Sr. Technical Recruiter

E-Mail:[email protected]

Website: www.sibitalent.com

Wed Jul 24 00:01:00 UTC 2024



