
Data Engineer Databricks with PySpark, Atlanta, GA (Day One Onsite)
From: Isaac, Centraprise
Email: [email protected]
Reply to: [email protected]

Hi Professional,

I am writing to let you know about a job opportunity as a Data Engineer (Databricks with PySpark) with Cognizant. The job description is included below for your review.

Job Title: Data Engineer (Databricks with PySpark)

Location: Atlanta, GA (Day One Onsite)

Job Type: Contract

Job Description:

As a Senior Data Engineer, you will be responsible for designing, developing, and maintaining data pipelines using PySpark.

You will work closely with data scientists, analysts, and other stakeholders to ensure the efficient processing and analysis of large datasets, while handling complex transformations and aggregations.
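
For context, below is a minimal PySpark sketch of the kind of transformation and aggregation work described above; the dataset paths, column names, and logic are illustrative assumptions only, not details taken from this posting.

```python
# Minimal, illustrative PySpark sketch: read raw events, apply a transformation,
# and aggregate per customer. Paths, columns, and schema are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw order events (hypothetical path).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transformation: keep completed orders and derive an order-value column.
completed = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_value", F.col("quantity") * F.col("unit_price"))
)

# Aggregation: total value and order count per customer.
summary = (
    completed
    .groupBy("customer_id")
    .agg(
        F.sum("order_value").alias("total_value"),
        F.count("*").alias("order_count"),
    )
)

# Write the curated output (hypothetical location).
summary.write.mode("overwrite").parquet("s3://example-bucket/curated/customer_summary/")
```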

**Required Skills and Experience:**

- Bachelor's degree in Computer Science, Engineering, or a related field.
- 10 years of experience in data engineering, with a focus on PySpark and graph databases such as Neo4j, Amazon Neptune, or another graph DB.
- Strong understanding of data modeling, database architecture, and schema design.
- Proficiency in Python and Spark, with strong coding and debugging skills.
- Experience with big data technologies such as Hadoop, Hive, and Kafka.
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Experience with data warehousing solutions such as Redshift, Snowflake, Databricks, or Google BigQuery.
- Familiarity with data lake architectures and data storage solutions.
- Knowledge of CI/CD pipelines and version control systems (e.g., Git).
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

**Preferred Skills:**

- Knowledge of machine learning workflows and experience working with data scientists.
- Understanding of data security and governance best practices.
- Experience with containerization technologies such as Docker and Kubernetes.
- Experience with orchestration tools like Apache Airflow or AWS Step Functions.
- Familiarity with streaming data platforms and real-time data processing.

**Key Responsibilities:**

- Design, develop, and maintain scalable and efficient ETL pipelines using PySpark.
- Collaborate across functional areas to translate business processes and problems into optimal data modeling and analytical solutions that drive business value.
- Design data models in collaboration with multiple business teams.
- Manage the data collection process, providing interpretation and recommendations to management.
- Build and optimize graph database solutions to support data-driven decision making and advanced analytics, and integrate them into data pipelines.
- Optimize and tune PySpark applications for performance and scalability.
- Collaborate with data scientists and analysts to understand data requirements, review Business Requirement documents, and deliver high-quality datasets.
- Implement data quality checks and ensure data integrity.
- Monitor and troubleshoot data pipeline issues and ensure timely resolution.
- Stay up to date with the latest trends and technologies in big data and distributed computing.

If interested, please fill in the details below and reply with your updated resume.

Candidate Full Name:
Current Location:
Phone:
Email ID:
Skype ID:
LinkedIn ID:
Passport Number:
Total Years of Experience:
Education:
Visa Status:
Willing to Relocate:
Rate:
Last 4 Digits of SSN#:
Availability to Start on the Project:
Previously Worked with Cognizant (Y/N):
If Yes, Provide the CTS Emp ID:
Interview Availability with Time Zone:

Thanks & Regards

Isaac

[email protected]

Direct: 469-923-8111

Centraprise
