Job Opening || Bigdata Engineer (Onsite), 10 to 15+ Years Experience || Phoenix, Arizona, USA
Email: [email protected]
From: Vivek, Smartitframe [email protected]
Reply to: [email protected]

Role: Bigdata Engineer
Location: Phoenix, AZ (Onsite)
Experience: 10 to 15 Years

Must Have Qualifications:
- Bachelor's degree in Engineering or Computer Science (or equivalent), or Master's in Computer Applications (or equivalent)
- 5+ years of software development experience, including leading teams of engineers and scrum teams
- 3+ years of hands-on experience with MapReduce, Hive, and Spark (core, SQL, and PySpark)
- Hands-on experience writing and understanding complex SQL (Hive/PySpark DataFrames) and optimizing joins while processing huge amounts of data (a minimal PySpark sketch follows the requirement lists below)
- Experience in UNIX shell scripting

Responsibilities:
- Design system solutions, develop custom applications, and modify existing applications to meet distinct and changing business requirements
- Handle coding, debugging, and documentation, working closely with the SRE team
- Provide post-implementation and ongoing production support
- Develop and design software applications, translating user needs into system architecture
- Assess and validate application performance and the integration of component systems, and provide process flow diagrams
- Test the engineering resilience of software and automation tools

You will be challenged with identifying innovative ideas and proofs of concept to deliver against the existing and future needs of our customers. Software Engineers who join our Loyalty Technology team will be assigned to one of several exciting teams developing a new, nimble, and modern loyalty platform that supports the key element of connecting with our customers where they are and how they choose to interact with American Express. Be part of an enthusiastic, high-performing technology team developing solutions to drive engagement and loyalty within our existing cardmember base and attract new customers to the Amex brand. The position will also play a critical role in partnering with other development teams, testing and quality, and production support to meet implementation dates and allow a smooth transition throughout the development life cycle. The successful candidate will focus on building and executing against a strategy and roadmap for moving from monolithic, tightly coupled, batch-based legacy platforms to a loosely coupled, event-driven, microservices-based architecture to meet our long-term business goals.

Additional Good to Have Requirements:
- Solid data warehousing concepts
- Knowledge of the financial reporting ecosystem is a plus
- Experience with data visualization tools like Tableau, Sisense, Looker
- Expertise in distributed ecosystems
- Hands-on programming experience with Python/Scala
- Expert knowledge of Hadoop and Spark architecture and their working principles
- Ability to design and develop optimized data pipelines for batch and real-time data processing
- Experience in analysis, design, development, testing, and implementation of system applications
- Demonstrated ability to develop and document technical and functional specifications and to analyze software and system processing flows
- Aptitude for learning and applying programming concepts
- Ability to communicate effectively with internal and external business partners
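As an editorial illustration of the join-optimization requirement above (this sketch is not part of the original posting): in PySpark, joining a very large fact table against a small dimension table is commonly optimized by broadcasting the small table, which replaces a shuffle-heavy sort-merge join with a map-side hash join. All table and column names below are hypothetical, and Spark 3.x with Hive table access is assumed.

# Illustrative sketch only: broadcast-join optimization in PySpark.
# The "transactions" and "merchants" tables and "merchant_id" column
# are hypothetical stand-ins for a large fact table and a small
# dimension table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-optimization-sketch").getOrCreate()

transactions = spark.table("transactions")   # large fact table
merchants = spark.table("merchants")         # small dimension table

# broadcast() ships the small table to every executor, so each
# partition of the large table joins locally without a full shuffle.
enriched = transactions.join(broadcast(merchants), on="merchant_id", how="left")

enriched.write.mode("overwrite").saveAsTable("transactions_enriched")

Note that Spark's optimizer applies this automatically for tables below spark.sql.autoBroadcastJoinThreshold; the explicit broadcast() hint matters when table statistics are missing or the threshold is set conservatively.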
Preferred Qualifications:
- Knowledge of cloud platforms like GCP/AWS and of building microservices and scalable solutions
- 2+ years of experience designing and building solutions using Kafka streams or queues (a minimal consumer sketch follows below)
- Experience with GitHub and leveraging CI/CD pipelines
- Experience with NoSQL databases, e.g., HBase, Couchbase, MongoDB

--
Keywords: continuous integration, continuous deployment, Arizona
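As a similar editorial illustration of the Kafka requirement (not part of the original posting): a minimal Python consumer using the kafka-python client. The topic name, broker address, and group id are hypothetical.

# Illustrative sketch only: consuming JSON events from a Kafka topic
# with kafka-python. "loyalty-events", "localhost:9092", and
# "loyalty-processor" are hypothetical placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "loyalty-events",                       # hypothetical topic
    bootstrap_servers="localhost:9092",     # hypothetical broker
    group_id="loyalty-processor",           # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In the event-driven architecture the posting describes, this is
    # where an event would be routed to a downstream microservice or
    # Spark job instead of being printed.
    print(f"partition={message.partition} offset={message.offset} event={event}")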
Sat Nov 04 00:26:00 UTC 2023