Need 10+ Java Spark with AWS (Snowflake exposure) Plano TX / Wilmington DE (onsite, in-person interview required) at Wilmington, Delaware, USA |
Email: [email protected] |
Job Description: Design, develop, and maintain scalable data pipelines using Apache Spark and Java. Implement data processing workflows and ETL processes to ingest, transform, and store large volumes of data. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions. Optimize and tune data processing jobs for performance and cost-efficiency. Ensure data quality, integrity, and security across all data pipelines and storage solutions. Develop and maintain data models, schemas, and documentation. Monitor and troubleshoot data pipeline issues, ensuring high availability and reliability. Hands-on experience with AWS services, including S3, EMR, Lambda, and Glue. Snowflake exposure. Experience with SQL and NoSQL databases. CI/CD tooling: Jules, Spinnaker. |
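For candidates gauging fit, a minimal sketch of the kind of pipeline described above: a Java Spark job that reads raw CSV files from S3, applies a simple transformation, and writes the result to Snowflake via the Spark Snowflake connector. The bucket, table, warehouse, and credential names are hypothetical placeholders, not part of this posting.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

import java.util.HashMap;
import java.util.Map;

public class OrdersEtlJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("orders-etl")   // could run on EMR or locally
                .getOrCreate();

        // Extract: raw order events landed in S3 (hypothetical bucket/prefix)
        Dataset<Row> raw = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("s3a://example-data-lake/raw/orders/");

        // Transform: keep completed orders and select only the columns needed downstream
        Dataset<Row> cleaned = raw
                .filter(col("status").equalTo("COMPLETED"))
                .select(col("order_id"), col("customer_id"), col("amount"), col("order_ts"));

        // Load: write to Snowflake using the Spark Snowflake connector
        Map<String, String> sfOptions = new HashMap<>();
        sfOptions.put("sfURL", "example_account.snowflakecomputing.com"); // placeholder account
        sfOptions.put("sfUser", System.getenv("SNOWFLAKE_USER"));
        sfOptions.put("sfPassword", System.getenv("SNOWFLAKE_PASSWORD"));
        sfOptions.put("sfDatabase", "ANALYTICS");
        sfOptions.put("sfSchema", "PUBLIC");
        sfOptions.put("sfWarehouse", "ETL_WH");

        cleaned.write()
                .format("net.snowflake.spark.snowflake")
                .options(sfOptions)
                .option("dbtable", "ORDERS_CLEAN")
                .mode("append")
                .save();

        spark.stop();
    }
}
```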
Tue Oct 22 21:04:00 UTC 2024 |