PySpark Developer - New York, NY & Wilmington, DE - Day 1 Onsite at Wilmington, Delaware, USA
Email: [email protected]
From: dhana, xforia [email protected]
Reply to: [email protected]

Requirements:
- Experience building Spark Streaming processes.
- Proficient understanding of distributed computing principles.
- Experience managing a Hadoop cluster with all of its services.
- Experience with NoSQL databases and messaging systems such as Kafka.
- Designing, building, installing, configuring, and supporting Hadoop.
- Perform analysis of vast data stores.
- Good understanding of cloud technology.
- Strong technical experience in design and mapping specifications, HLD, and LLD.
- Ability to relate to both business and technical members of the team, with excellent communication skills.

Responsibilities:
- Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with the internal Archival Service Platform for efficient data purging (a minimal sketch follows this list).
- Lead integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management.
- Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs.
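As a rough illustration of the Spark Streaming, Kafka, and S3 items above, the sketch below shows a minimal PySpark Structured Streaming job that reads JSON events from a Kafka topic and lands them in S3 as Parquet. The broker address, topic name, event schema, and bucket paths are hypothetical placeholders, and the job assumes the spark-sql-kafka connector package is available on the cluster; this is not a description of the employer's actual pipeline.

    # Minimal PySpark Structured Streaming sketch: Kafka -> S3 (Parquet).
    # Broker, topic, schema, and S3 paths are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = (
        SparkSession.builder
        .appName("kafka-to-s3-stream")  # hypothetical app name
        .getOrCreate()
    )

    # Assumed schema of the incoming JSON events (for illustration only).
    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_type", StringType()),
        StructField("event_time", TimestampType()),
    ])

    # Read a stream of records from Kafka; the value column arrives as binary.
    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
        .option("subscribe", "events")                       # placeholder topic
        .option("startingOffsets", "latest")
        .load()
    )

    # Parse the JSON payload into typed columns.
    events = (
        raw.selectExpr("CAST(value AS STRING) AS json")
        .select(from_json(col("json"), event_schema).alias("e"))
        .select("e.*")
    )

    # Append the parsed events to S3 as Parquet, with a checkpoint for recovery.
    query = (
        events.writeStream
        .format("parquet")
        .option("path", "s3a://example-bucket/events/")                       # placeholder bucket
        .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
        .outputMode("append")
        .start()
    )

    query.awaitTermination()

Downstream, a table such as this could be cataloged in Glue and queried from Athena, with expired partitions handed off to an archival or purge process; those integration details would depend on the internal Archival Service Platform mentioned above.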
Fri Jul 12 04:20:00 UTC 2024