Phoenix, AZ (Locals Only) || Big Data Engineer (Java or Python) at Phoenix, Arizona, USA
Email: [email protected]
From: Anwar Husain, Synkriom ([email protected])
Reply to: [email protected]

Please find the job description below.

Required Skills:
- 6+ years of software development experience, including leading teams of engineers and scrum teams.
- 3+ years of hands-on experience working with MapReduce, Hive, and Spark (core, SQL, and PySpark).
- Solid data warehousing concepts.
- Knowledge of the financial reporting ecosystem is a plus.
- Expertise in distributed ecosystems.
- Hands-on programming experience with Core Java or Python/Scala.
- Expert knowledge of Hadoop and Spark architecture and their working principles.
- Hands-on experience writing and understanding complex SQL (Hive/PySpark dataframes), including optimizing joins while processing huge amounts of data.
- Experience in UNIX shell scripting.
- Ability to design and develop optimized data pipelines for batch and real-time data processing.
- Experience in analysis, design, development, testing, and implementation of system applications.
- Demonstrated ability to develop and document technical and functional specifications and analyze software and system processing flows.

Preferred Qualifications:
- Knowledge of cloud platforms such as GCP/AWS and of building microservices and scalable solutions is an advantage.
- 1+ years of experience designing and building solutions using Kafka streams or queues.
- Experience with GitHub/Bitbucket and leveraging CI/CD pipelines.
- Experience with NoSQL databases (e.g., HBase, Couchbase, MongoDB) is good to have.
- Excellent technical and analytical aptitude.
- Good communication skills and excellent project management.

Keywords: continuous integration, continuous deployment
Tue Jun 04 02:38:00 UTC 2024