JOBS at Phoenix, Arizona, USA
Email: [email protected]
From: Anupam Gupta, Klaxontech [email protected]
Reply to: [email protected]

Hi,

My name is Anupam and I am a Staffing Specialist at Klaxon Technologies Inc. I am reaching out to you about an exciting job opportunity with one of our clients. If you are looking for a new job opportunity, please share your updated resume along with your rate expectations.

Position 1: Lead Data Engineer (with strong Informatica, Python, Airflow skills)
Client: Pioneer
End Client: Salesforce
Location: Phoenix, AZ / San Francisco, CA

MUST HAVE: [Informatica + AWS + Python + Airflow]
Informatica, Python, Spark, AWS services with EMR, Iceberg, Git, Airflow, DBT, Trino, Snowflake, Linux/UNIX

Description:
A Data Engineer will partner with the Lead Engineer for validation and context. The work covers roughly 40 Informatica mappings whose targets need repointing to a different database, with some sources and lookups repointed to different tables and business logic updated to new definitions. This is a time-bound engagement with strict deliverables: code base in November; runs with new definitions and validations in December; then support (possibly re-running the code based on UAT findings).

Job Overview:
We are seeking a highly skilled Lead Data Engineer with extensive experience in Informatica, Python, and Apache Airflow. The ideal candidate will lead our data engineering team, designing and implementing robust data pipelines, ensuring data quality, and optimizing data architecture for scalability and performance. This role requires strong technical skills, leadership capabilities, and a passion for transforming data into actionable insights.

Team Leadership:
- Lead and mentor a team of data engineers, fostering a collaborative and innovative environment.
- Define best practices and standards for data engineering processes and methodologies.

Data Pipeline Development:
- Design, develop, and maintain ETL processes using Informatica and Python to ensure efficient data movement and transformation.
- Implement and manage workflows in Apache Airflow for scheduling and monitoring data pipelines (see the sketch after this listing).

Data Architecture:
- Collaborate with data architects and analysts to develop scalable and efficient data models.
- Optimize existing data pipelines for performance, reliability, and cost-effectiveness.

Data Quality and Governance:
- Establish data quality metrics and monitoring processes to ensure the integrity and accuracy of data.
- Implement data governance practices to comply with regulatory requirements and internal policies.

Collaboration:
- Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data needs and deliver solutions that drive business value.
- Participate in the planning and execution of data-related projects.

Documentation and Reporting:
- Maintain comprehensive documentation of data engineering processes, architectures, and workflows.
- Provide regular updates to stakeholders on project status, challenges, and solutions.

Experience:
- 10+ years of experience in data engineering or a related field.
- 5+ years of experience leading data engineering teams.

Technical Skills:
- Strong expertise in Informatica for ETL processes.
- Proficient in Python for data manipulation and automation.
- Experience with Apache Airflow for workflow management.
- Familiarity with cloud platforms (AWS) and big data technologies (Hadoop, Spark) is a plus.
- Solid understanding of SQL and database management systems (e.g., MySQL, PostgreSQL, NoSQL).
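Purely as illustration of the Airflow orchestration this role calls for, here is a minimal Airflow 2.x DAG sketch showing how a repointed mapping run and a follow-up validation step might be scheduled. All identifiers (DAG id, task names, callables) are hypothetical and not taken from the client's codebase; a real pipeline would invoke Informatica rather than print placeholders.

# Minimal Airflow 2.x DAG sketch: run a (repointed) mapping, then validate it.
# All names here are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_mapping(**context):
    # Placeholder for triggering an ETL mapping against the new target
    # database, e.g. via an Informatica CLI or REST call.
    print("running mapping against the new target database")


def validate_output(**context):
    # Placeholder for row-count and new-definition checks on the target.
    print("validating output against the new definitions")


with DAG(
    dag_id="mapping_repoint_validation",  # hypothetical name
    start_date=datetime(2024, 11, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run = PythonOperator(task_id="run_mapping", python_callable=run_mapping)
    validate = PythonOperator(task_id="validate_output", python_callable=validate_output)

    run >> validate  # validation runs only after the mapping completes

The dependency arrow is the point of the sketch: Airflow handles scheduling, retries, and monitoring, so the validation task only fires after the mapping task succeeds.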
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Position 2: Product Owner (with strong Informatica, Python, Airflow skills)
Client: Pioneer
End Client: Salesforce
Location: Phoenix, AZ / San Francisco, CA

MUST HAVE: [Informatica + AWS + Python + Airflow]
Informatica, Python, Spark, AWS services with EMR, Iceberg, Git, Airflow, DBT, Trino, Snowflake, Linux/UNIX

Description:
A Data Engineer will partner with the Product Owner for validation and context. The work covers roughly 40 Informatica mappings whose targets need repointing to a different database, with some sources and lookups repointed to different tables and business logic updated to new definitions. This is a time-bound engagement with strict deliverables: code base in November; runs with new definitions and validations in December; then support (possibly re-running the code based on UAT findings).

Job Overview:
We are seeking a proactive and detail-oriented Product Owner with strong expertise in Informatica, Python, and Apache Airflow. The ideal candidate will be responsible for defining the product vision, managing the product backlog, and collaborating with cross-functional teams to deliver data-driven solutions that meet user needs and drive business objectives.

Product Vision and Strategy:
- Define and communicate the product vision and roadmap, aligning with company goals and customer needs.
- Gather and analyze market trends and user feedback to inform product direction.

Backlog Management:
- Create, prioritize, and maintain the product backlog, ensuring it reflects the needs of stakeholders and aligns with the product strategy.
- Write clear and concise user stories and acceptance criteria, focusing on delivering value to users.

Collaboration:
- Work closely with data engineering, data science, and development teams to ensure successful delivery of product features.
- Facilitate communication between technical and non-technical stakeholders to gather requirements and provide updates.

Data Solutions Oversight:
- Leverage your knowledge of Informatica, Python, and Airflow to understand technical constraints and opportunities for the product.
- Ensure that data workflows and pipelines are effectively designed to meet business requirements.

User-Centric Approach:
- Engage with end users to gather feedback and validate product features, ensuring alignment with user needs.
- Champion user experience and work to improve product usability and functionality.

Performance Tracking:
- Define key performance indicators (KPIs) to measure product success and user satisfaction.
- Analyze data and user feedback to drive continuous improvement of the product.

Documentation and Reporting:
- Maintain comprehensive documentation of product requirements, user stories, and release notes.
- Provide regular updates to stakeholders on product development status and roadmap progress.

Experience:
- 10+ years of experience in product management or a related role, preferably in a data-driven environment.
- Experience with data engineering tools and practices, specifically Informatica, Python, and Apache Airflow.

Technical Skills:
- Familiarity with data integration, ETL processes, and data workflow management.
- Proficiency in Agile methodologies and tools (e.g., JIRA, Confluence).
- Understanding of SQL and data modeling concepts is a plus.
Role (3): GCP Data Migration Engineer
Final Interview: F2F (in person)
Client: Pioneer / AMEX
Location: Phoenix, AZ (Hybrid)

Description:
- Bachelor's degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- 6+ years of direct experience working in IT infrastructure.
- Experience with relational databases and big data technologies: Spark, Hadoop.
- Should be involved in migrating from a big data system (Hadoop/Spark) to GCP cloud (a minimal sketch follows this listing).
- Experience with application development concepts and technologies such as Python or Java.
- Experience understanding a complex customer's existing software workload and the ability to define a technical migration roadmap to the cloud.
- Experience managing large-scale Windows/Linux environments.
- Experience in identity and access management, networking, storage, and compute infrastructure (servers, databases, firewalls, load balancers) for architecting, implementing, and maintaining cloud solutions in virtualized environments.
- Experience architecting and developing software or infrastructure for scalable and secure distributed systems.
- Experience in advanced areas of networking, including Linux, software-defined networking, network virtualization, open protocols, application acceleration and load balancing, DNS, virtual private networks, and their application to PaaS and IaaS technologies.
- Experience automating infrastructure provisioning, DevOps, and/or continuous integration/delivery.
- Understanding of open-source server software (such as NGINX, RabbitMQ, Redis, Elasticsearch).
- Knowledge of containerization and container orchestration technologies such as Google Kubernetes Engine (GKE).
- Customer-facing migration experience, including service discovery, assessment, planning, execution, and operations.
- Demonstrated excellent communication, presentation, and problem-solving skills.
- Experience in project governance and enterprise customer management.

Certification Preference:
- Google Professional Cloud Architect
- Alternate: Google Professional Cloud Network Engineer, Google Professional Cloud Security Engineer
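To illustrate the Hadoop/Spark-to-GCP migration work this role describes, here is a minimal sketch of landing an exported HDFS file in a Google Cloud Storage bucket with the google-cloud-storage Python client. The bucket and file names are hypothetical; a production migration would more likely use the Storage Transfer Service or Hadoop DistCp with the GCS connector, with batching and checksum verification.

# Minimal sketch: upload a locally exported HDFS file to a GCS bucket.
# Bucket name and paths are hypothetical placeholders.
from google.cloud import storage


def upload_to_gcs(bucket_name: str, local_path: str, dest_blob: str) -> None:
    """Upload one file to Cloud Storage (credentials come from the environment)."""
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(dest_blob)
    blob.upload_from_filename(local_path)
    print(f"uploaded {local_path} to gs://{bucket_name}/{dest_blob}")


if __name__ == "__main__":
    # Hypothetical names; a real migration would iterate over many files.
    upload_to_gcs(
        "example-migration-bucket",
        "/tmp/part-00000.parquet",
        "landing/part-00000.parquet",
    )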
Posted: Mon Oct 28 19:32:00 UTC 2024