
Python Spark AWS ---- Columbus, OH ---- Mphasis
Email: [email protected]
Job Title: Python Spark AWS

Location: Columbus, OH

Job Description:

Develop and maintain data platforms using Python, Spark, and PySpark.

Migrate existing data workloads to PySpark on AWS.

Design and implement data pipelines.

Work with AWS services and big data technologies.

Produce unit tests for Spark transformations and helper methods.

Create Scala/Spark jobs for data transformation and aggregation.

Write Scaladoc-style documentation for code.

Optimize Spark queries for performance.

Integrate with SQL databases (e.g., Microsoft SQL Server, Oracle, PostgreSQL, MySQL).

Understand distributed systems concepts (CAP theorem, partitioning, replication, consistency, and consensus).

Skills:

Proficiency in Python, Scala (with a focus on functional programming), and Spark.

Familiarity with Spark APIs, including RDD, DataFrame, MLlib, GraphX, and Streaming.

Experience working with HDFS, S3, Cassandra, and/or DynamoDB.

Deep understanding of distributed systems.

Experience building or maintaining cloud-native applications.

Familiarity with serverless approaches using AWS Lambda is a plus.

Best Regards

Shivani Garg

Percient Technologies

M: (646) 978-5220
